
ChatGPT Faces Another Privacy Complaint Over ‘Hallucination’ Issue


OpenAI is facing yet another privacy complaint in the European Union, this one filed by privacy rights group noyb over its AI chatbot ChatGPT’s failure to correct inaccurate information it generates about individuals.

The tendency of GenAI technologies to generate incorrect information has been well documented. That tendency puts the technology in direct conflict with the EU’s General Data Protection Regulation (GDPR), which governs how the personal data of users in the region can be processed.

Failure to comply with the GDPR can result in penalties of up to 4% of global annual turnover. More significantly for a resource-rich behemoth like OpenAI, data protection regulators can order changes to how information is processed, meaning GDPR enforcement could reshape how generative AI tools are allowed to operate in the EU.

OpenAI was previously compelled to make certain changes following an early intervention by Italy’s data protection regulator, which saw ChatGPT temporarily shut down in the country in 2023.

Now, noyb is filing the latest GDPR complaint against ChatGPT with the Austrian data protection authority on behalf of an unnamed complainant (described as a “public figure”) who discovered that the AI chatbot had supplied an inaccurate birth date for them.

People in the EU have a suite of rights under the GDPR over information held about them, including the right to have inaccurate data rectified. noyb contends that OpenAI is failing to meet this obligation with respect to its chatbot’s output, stating that the company refused to correct the complainant’s inaccurate birth date on the grounds that it was technically unable to do so.

Instead, it offered to filter or block the information on certain prompts, such as those containing the complainant’s name.

According to OpenAI’s privacy policy, users who notice that the AI chatbot has generated “factually inaccurate information” about them can submit a “correction request” via privacy.openai.com or by emailing [email protected]. However, it qualifies the statement: “Given the technical complexity of how our models work, we may not be able to correct the inaccuracy in every instance.”

In that circumstance, OpenAI proposes that users fill out an online form to request that their personal information be removed from ChatGPT’s output. 

The challenge for the AI behemoth is that GDPR rights are not à la carte. People in Europe have both the right to request rectification of their data and the right to request its deletion. However, as noyb points out, it is not for OpenAI to decide which of these rights is available.

Other aspects of the complaint address GDPR transparency concerns, with noyb claiming that OpenAI is unable to say where the data it generates about individuals comes from, or what data the chatbot stores about people.

This is significant because, once again, the regulation gives individuals the right to request such information by submitting a so-called subject access request (SAR). According to noyb, OpenAI failed to adequately respond to the complainant’s SAR, providing no information about the data processed, its sources, or its recipients.

Maartje de Graaf, data protection counsel at noyb, commented on the complaint: “Making up incorrect information is problematic in and of itself. However, spreading inaccurate information about someone can have catastrophic implications. Companies are currently unable to ensure that chatbots such as ChatGPT comply with EU rules when processing personal data. If a system cannot offer accurate and transparent results, it cannot be used to generate data about individuals. The technology must follow the legal requirements, not the other way around.”

noyb has asked the Austrian DPA to investigate the complaint over OpenAI’s data processing and to impose a fine to ensure future compliance. However, it added that the case is “likely” to be resolved through EU cooperation.

OpenAI is facing a very similar complaint in Poland. Last September, the local data protection authority launched an investigation into ChatGPT in response to a complaint from a privacy and security researcher, who found that OpenAI was unable to correct inaccurate information about him. That complaint also accuses the AI powerhouse of failing to meet the regulation’s transparency requirements.

Meanwhile, the Italian data protection regulator is still investigating ChatGPT. In January, it issued a draft decision stating that it believes OpenAI violated the GDPR in various ways, including through the chatbot’s tendency to generate falsehoods about people. The findings also address other critical issues, such as the lawfulness of processing.

The Italian authority gave OpenAI one month to respond to its findings. A final decision is still pending.

With another GDPR complaint filed against its chatbot, the likelihood of OpenAI facing a string of GDPR enforcement actions across multiple Member States has increased.

Last fall, the company opened a regional office in Dublin, a move that appears intended to reduce its regulatory risk by routing privacy complaints through Ireland’s Data Protection Commission. This is thanks to a mechanism in the GDPR meant to streamline oversight of cross-border complaints by funneling them to a lead authority in the member state where the company has its “main establishment.”
