
Google’s Call-Scanning AI Raises Censorship Fears

Google’s new call-scanning AI is designed to help detect and fight scam calls, but privacy experts warn it could also open the door to censorship.

The feature Google showcased at its I/O conference yesterday, which uses generative AI to analyze voice calls for conversational patterns associated with financial scams, has alarmed privacy and security experts, who warn it is only the beginning: embedding client-side scanning into mobile infrastructure, they caution, could lay the groundwork for centralized censorship.

The call scam-detection feature Google demoed, which the tech giant says will be built into a future version of its Android OS (the operating system on roughly three-quarters of the world’s smartphones), is powered by Gemini Nano, the smallest model in its current AI generation, designed to run entirely on-device.

This essentially amounts to client-side scanning: a nascent technology that has generated enormous controversy in recent years over proposals to use it to detect child sexual abuse material (CSAM) or grooming activity on messaging platforms.
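To make the architecture concrete, here is a minimal, hypothetical sketch of the client-side-scanning pattern described above, written in Kotlin given the Android context. Every name in it (OnDeviceModel, CallScamDetector, the scoring heuristic) is invented for illustration; Google has not published an API for this feature, and Gemini Nano’s actual interface may look nothing like this.

```kotlin
// Hypothetical sketch of on-device call scanning. All names are invented;
// Google has not published an API for this feature.

// Stand-in for a small on-device model such as Gemini Nano.
fun interface OnDeviceModel {
    fun scamLikelihood(transcript: String): Double // score in [0.0, 1.0]
}

class CallScamDetector(
    private val model: OnDeviceModel,
    private val threshold: Double = 0.9,
) {
    // Fed a rolling transcript from on-device speech-to-text; returns
    // true when the device should surface an on-screen scam warning.
    fun shouldWarn(transcript: String): Boolean {
        // The key architectural claim: inference runs locally, so
        // neither audio nor transcript ever leaves the handset.
        return model.scamLikelihood(transcript) >= threshold
    }
}

fun main() {
    // Toy heuristic standing in for a real model call.
    val detector = CallScamDetector(model = { transcript ->
        if ("gift card" in transcript.lowercase()) 0.95 else 0.1
    })
    println(detector.shouldWarn("Please settle the fine in gift cards")) // true
}
```

The privacy claim rests entirely on that scoring call never leaving the handset; the controversy is over what else such a hook, once shipped at OS level, could be pointed at.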

Apple abandoned a 2021 plan to deploy client-side scanning for CSAM after a massive privacy backlash. Policymakers have nevertheless kept up pressure on the technology industry to find ways of detecting illegal activity on its platforms, so any industry move to build on-device scanning infrastructure could pave the way for default content scanning of any kind, whether government-mandated or driven by a commercial agenda.

In response to Google’s call-scanning demonstration, Meredith Whittaker, president of the US-based encrypted messaging app Signal, cautioned in a post on X: “This is extremely dangerous. It paves the way for centralized, device-level client-side scanning.

“Very quickly, it goes from detecting scams to detecting patterns commonly associated with seeking reproductive care, offering LGBTQ resources, or tech worker whistleblowing.”


Matthew Green, a professor of cryptography at Johns Hopkins, also took to X to sound the alarm. “In the future, AI models will detect and report illicit behavior by inferring meaning from your voice calls and text messages,” he cautioned. “To get service providers to process your data, you will need to attach a zero-knowledge proof that scanning was performed. This will block open clients.”
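For readers unfamiliar with the scheme Green is warning about, here is a rough, purely speculative sketch of that flow. No such API exists; the names are invented, and the zero-knowledge machinery is reduced to opaque interfaces.

```kotlin
// Purely speculative sketch of "scan, then prove you scanned" as Green
// describes it. No such API exists; real zero-knowledge proof systems
// are reduced here to opaque interfaces.

data class ScanAttestation(
    val contentDigest: String, // commitment to the scanned content
    val proof: ByteArray,      // ZK proof that the mandated scan ran
)

interface AttestingScanner {
    // Runs the mandated on-device scan and emits a proof that it ran,
    // without revealing the content itself.
    fun scanAndProve(content: ByteArray): ScanAttestation
}

interface ServiceProvider {
    // Verifies the proof and processes the message only if it checks out.
    fun accept(content: ByteArray, attestation: ScanAttestation?): Boolean
}

fun send(content: ByteArray, scanner: AttestingScanner?, provider: ServiceProvider): Boolean {
    // An "open" client with no approved scanner cannot produce an
    // attestation, so a provider enforcing this scheme drops its traffic.
    val attestation = scanner?.scanAndProve(content)
    return provider.accept(content, attestation)
}
```

The coercive force in such a design sits in the final check: a client that cannot produce a valid attestation, such as an open-source client without the mandated scanner, simply gets its traffic dropped.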

According to Green, this dystopian future of censorship by default is technically feasible within the next few years. “It will only be a few years before this technology is efficient enough to be implemented. No more than ten,” he predicted.

European security and privacy experts were also quick to object.

Lukasz Olejnik, a Poland-based independent privacy and security researcher and consultant, welcomed Google’s anti-scam feature in a response to the demonstration on X, but cautioned that the infrastructure could be repurposed for social surveillance. “It also means that technical capabilities to monitor calls or content creation, such as writing texts or documents, have been, or are being, developed,” he wrote. “For instance, to search for illegal, harmful, hateful, or otherwise undesirable or nefarious content, as judged by somebody’s standards.”

“Going further, such a model could, for example, display a warning, or block the ability to continue,” Olejnik added. “Or report it somewhere. That is technological modulation of social behavior, or something close to it, and it poses a major threat to privacy as well as to a range of basic values and freedoms. The capabilities are already there.”

Expanding on his concerns to TechCrunch, Olejnik said: “While I haven’t seen the technical details, Google assures that detection will be done on-device. This is great for user privacy. There is much more at stake than privacy, however. This highlights how AI/LLMs built into software and operating systems may be turned to detecting or controlling various forms of human activity.

“So far it is, fortunately, for the better. But what lies ahead if the technical capability exists and is built in? Such powerful features signal potential future risks tied to the ability to control the behavior of societies, at scale or selectively. That is probably among the most dangerous information technology capabilities ever developed. And we are nearing that point. How do we govern this? Are we going too far?”

Michael Veale, an associate professor in technology law at UCL, also cautioned against function creep from Google’s conversation-scanning AI. In a reaction post on X, he warned that Google is establishing “an infrastructure for on-device client-side scanning for additional purposes, which legislators and regulators will seek to exploit.”

Privacy experts in Europe have particular reason for concern: since 2022, the European Union has been weighing a controversial message-scanning legislative proposal that critics, including the bloc’s own Data Protection Supervisor, warn marks a tipping point for democratic rights in the region, as it would force platforms to scan private messages automatically.

Although the current legislative proposal claims to be technology neutral, platforms are widely expected to deploy client-side scanning in order to comply with so-called detection orders requiring them to identify both known and unknown CSAM and to pick up grooming activity in real time.

In an open letter published earlier this month, hundreds of privacy and security experts warned that the plan could produce millions of false positives per day, because the client-side scanning technologies platforms are likely to deploy in response to a legal order are unproven, deeply flawed, and vulnerable to attack.

Google was contacted with questions about the privacy risks its conversation-scanning AI may pose, but had not responded as of press time.

Edwin Aboyi

Edwin Aboyi is a product designer, writer, and illustrator with a degree in Biological Sciences from the University of Abuja. Passionate about merging technology with creativity, Edwin contributes to Protechbro.com by offering fresh perspectives on AI, Web3, and blockchain.
