
Google Faces Privacy Concerns for GenAI


An investigation has been opened into whether Google followed the EU’s data protection rules when using people’s data to train generative AI.

Google’s lead privacy regulator in the EU has opened the inquiry.

It is examining whether the tech giant was obliged to carry out a data protection impact assessment (DPIA) to weigh the risks its AI technologies could pose to the rights and freedoms of the people whose data was used to train its models.

Generative AI tools are notorious for producing plausible-sounding falsehoods. That tendency, combined with their capacity to surface personal information on demand, exposes their makers to considerable legal risk. Google’s processing of EU users’ data falls under the bloc’s General Data Protection Regulation (GDPR), and Ireland’s Data Protection Commission (DPC) can fine the company up to 4% of the global annual turnover of its parent, Alphabet, for any confirmed breaches.

Google has built a range of generative AI tools, including its Gemini (formerly Bard) family of general-purpose large language models (LLMs), which power AI apps and enhance web search, among other uses. At its I/O developer conference last year, Google launched an LLM called PaLM 2, which underpins these consumer-facing AI tools.

The Irish Data Protection Commission (DPC) says it is examining how Google developed this foundational AI model. The inquiry falls under Section 110 of Ireland’s Data Protection Act 2018, which transposed the GDPR into national law.

GenAI models typically require vast amounts of data for training, and the kinds of data LLM makers have gathered, along with how and where they obtained it, are coming under increasing scrutiny amid a range of legal concerns, including privacy and copyright.

On the privacy side, data used for AI training that contains the personal information of EU citizens is subject to the bloc’s data protection rules, whether it was collected directly from users or scraped from the internet. Several LLM makers have consequently already faced questions, and in some cases GDPR enforcement, over their data practices, including OpenAI, the maker of GPT and ChatGPT, and Meta, which develops the Llama AI models.

GDPR complaints have also been lodged with the DPC over how Elon Musk-owned X uses people’s data for AI training. That dispute led to a court case and an undertaking by X to limit its data processing, but no penalty. Even so, X could still be fined under the GDPR if the DPC finds that its use of user data to train its AI tool, Grok, violated the rules.

Elon Musk | Source: NPR

The DPC’s DPIA inquiry into Google’s GenAI is the latest regulatory action in this area.

A press release from the DPC said, “The statutory inquiry is about whether Google had met any obligations that it may have had to do an assessment, under Article 35 of the General Data Protection Regulation (Data Protection Impact Assessment), before it processed the personal data of EU/EEA data subjects in connection with the development of its foundational AI Model, Pathways Language Model 2 (PaLM 2).”

It says that a DPIA can be “crucial in ensuring that the fundamental rights and freedoms of individuals are adequately considered and protected when the processing of personal data is likely to result in a high risk.”

The statutory inquiry is part of the broader efforts of the DPC, working with its EU/EEA peer regulators, to ensure that the personal data of EU/EEA data subjects is processed fairly in the development of AI models and systems. The DPC also noted that the EU’s network of GDPR enforcers is still working toward consensus on how best to apply the privacy law to GenAI tools.

James Emmanuel

James is a Computer Science student and skilled DevOps engineer with a strong foundation in tech. As a news reporter at Protechbro, he specializes in crafting well-informed technical content that highlights the latest trends and innovations in technology.
