
Texas Attorney General Accuses Meta, Character AI

Texas attorney general accuses Meta and Character AI of misleading children with mental health claims tied to their platforms.

Ken Paxton, the attorney general of Texas, has initiated an inquiry into Meta AI Studio and Character.AI.

A press release issued on Monday accused the companies of “potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools.”

Paxton said, “In the current digital era, we must continue to advocate for protecting Texas children from deceptive and exploitative technology.”

“AI platforms have the potential to deceive vulnerable users, particularly children, into believing that they are receiving legitimate mental health care by assuming the role of emotional support sources.”

Ken Paxton, the attorney general of Texas | Source: Houston Public Media

“In reality, they are frequently fed recycled, generic responses designed to align with harvested personal data and presented as therapeutic advice.”

The inquiry comes just days after Senator Josh Hawley announced an investigation into Meta, following a report revealing that the company’s AI chatbots were engaging in inappropriate behavior with minors, including flirting.

Senator Josh Hawley | Source: Wikipedia

The Texas Attorney General’s office has accused Meta and Character.AI of developing AI personas that are presented as “professional therapeutic tools,” despite the absence of appropriate medical credentials or oversight.

Among the millions of AI personas available on Character.AI, one user-created bot, Psychologist, is in high demand among the startup’s young users.

Meanwhile, Meta does not offer therapy bots for children; however, nothing stops children from using the Meta AI chatbot or one of the therapy-style personas developed by third parties.

Ryan Daniels, a spokesperson for Meta, told TechCrunch, “We explicitly label AIs and include a disclaimer that AI generates responses — not people — to help individuals understand their limitations.”

“Our models are intended to direct users to qualified medical or safety professionals when appropriate, as these AIs are not licensed professionals.”

Nevertheless, TechCrunch observed that many children may disregard or fail to understand these disclaimers. We have asked Meta what additional measures it takes to safeguard minors who use its chatbots.

For its part, a Character.AI spokesperson said the startup includes prominent disclaimers in every chat to remind users that a “Character” is not a real person and that everything a Character says should be treated as fiction.

She noted that the startup adds supplementary disclaimers when users create characters with the words “psychologist,” “therapist,” or “doctor,” warning users not to rely on those characters for any form of professional advice.

Additionally, Paxton noted in his statement that, even though AI chatbots claim confidentiality, their “terms of service” indicate that user interactions are logged, tracked, and exploited for algorithmic development and targeted advertising. This raises serious concerns about data abuse, false advertising, and privacy violations.

Meta captures prompts, feedback, and other interactions with AI chatbots and across Meta services to “improve AIs and related technology,” as stated in its privacy policy.

The policy does not expressly address advertising; however, it does state that information may be shared with third parties, such as search engines, to provide “more personalized outputs.” This effectively translates to targeted advertising, given Meta’s ad-based business model.

Character.AI’s privacy policy likewise notes that the company logs identifiers, demographics, location information, and additional user data, including browsing behavior and the apps used.

The company also tracks users across advertisements on TikTok, YouTube, Reddit, Facebook, Instagram, and Discord, activity that may be linked to their accounts. This data is used to train AI, tailor the service to individual preferences, and deliver targeted advertising, including by sharing data with advertisers and analytics providers.

The Character.AI spokesperson said the startup is “just beginning to explore targeted advertising on the platform” and that this exploration has not involved using the content of chats on the platform.

Additionally, the spokesperson confirmed that the same privacy policy applies to all users, including teenagers.

TechCrunch has asked Meta whether it conducts such tracking on minors and will update this story if we receive a response.

Meta and Character.AI say their services are not intended for minors under 13. However, Meta has been criticized for failing to police accounts created by children under 13, and Character.AI’s kid-friendly characters are clearly designed to appeal to younger users.

Karandeep Anand, the startup’s CEO, has even disclosed that his six-year-old daughter uses the platform’s chatbots under his supervision.

This is precisely the kind of data collection, algorithmic exploitation, and targeted advertising that legislation such as the Kids Online Safety Act (KOSA) is designed to safeguard against.

Last year, KOSA was poised to pass with robust bipartisan support; however, it stalled amid significant opposition from technology industry lobbyists.

Meta, in particular, implemented an impressive lobbying apparatus, cautioning legislators that the bill’s extensive mandates would undermine its business model.

In May 2025, Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced KOSA to the Senate.

Paxton has issued civil investigative demands to the corporations to ascertain whether they have violated Texas consumer protection laws. These demands are legal orders that necessitate the production of documents, data, or testimony during a government investigation.
