Social media platform X will test letting AI chatbots write Community Notes.
Community Notes, a feature from the Twitter era that Elon Musk has expanded under his ownership of X, is a fact-checking program that lets users contribute notes adding context to certain posts; other users then review those notes before they are published. A Community Note might appear, for example, on an AI-generated video that doesn’t disclose its synthetic origins, or on a misleading post from a politician. Notes go public only once they earn consensus from groups of raters who have historically disagreed in their past ratings.
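A minimal sketch of that publication rule follows. The cluster labels and the majority threshold here are hypothetical simplifications for illustration, not X’s actual bridging algorithm, which infers rater viewpoints statistically rather than from fixed labels.

```python
# Illustrative sketch of the "bridging" idea behind note publication:
# a note goes public only when raters from viewpoint clusters that
# usually disagree with each other both find it helpful.

def should_publish(ratings: list[tuple[str, bool]]) -> bool:
    """ratings: (rater_cluster, found_helpful) pairs from two clusters."""
    clusters = {"cluster_a": [], "cluster_b": []}  # hypothetical clusters
    for cluster, helpful in ratings:
        clusters[cluster].append(helpful)
    # Require a majority of raters in *each* cluster to rate the note helpful.
    return all(
        votes and sum(votes) / len(votes) > 0.5
        for votes in clusters.values()
    )

# Cross-cluster agreement publishes; one-sided support does not.
print(should_publish([("cluster_a", True), ("cluster_a", True), ("cluster_b", True)]))    # True
print(should_publish([("cluster_a", True), ("cluster_b", False), ("cluster_b", False)]))  # False
```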
Community Notes’ success on X inspired Meta, TikTok, and YouTube to adopt similar projects. Meta dropped its third-party fact-checking services in favor of community-sourced labor.
Depending on how they’re deployed, AI chatbots acting as fact-checkers could prove helpful or do real damage.
These AI notes can be generated by X’s own Grok or by other AI tools connected to X through an API. Any note an AI submits will be vetted the same way as a human-written note, in an effort to keep notes accurate.
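To make that pipeline concrete, here is a minimal sketch of how a bot might draft and submit a note into the shared vetting queue. The endpoint URL, payload fields, and the draft_note() helper are all hypothetical stand-ins; X has not published an API of this shape.

```python
# Illustrative sketch only: the endpoint, payload fields, and draft_note()
# are hypothetical stand-ins, not X's actual Community Notes API.
import requests

API_BASE = "https://api.example.com/community-notes"  # placeholder URL

def draft_note(post_text: str) -> str:
    """Stand-in for an LLM call (e.g. Grok) that drafts a context note."""
    return f"Context: this claim is disputed. ({post_text[:40]}...)"

def submit_ai_note(post_id: str, post_text: str, token: str) -> None:
    note = draft_note(post_text)
    # An AI-submitted note enters the same rating queue as human notes;
    # it is published only if raters ultimately judge it helpful.
    requests.post(
        f"{API_BASE}/notes",
        headers={"Authorization": f"Bearer {token}"},
        json={"post_id": post_id, "text": note, "author_type": "ai"},
        timeout=10,
    )
```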
Given how often AI models hallucinate, fabricating context with no grounding in reality, the decision to use AI in fact-checking may seem questionable.

A paper published this week by researchers working on X’s Community Notes recommends that humans and LLMs work in tandem. Human feedback can improve AI note generation through reinforcement learning, with human note raters remaining as a final check before notes are published.
“The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better,” the report adds. “LLMs and humans can work in a virtuous loop.”
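As an illustration of that loop, the sketch below treats rater feedback as a reward signal for the note-writing model. The rate() and update_model() functions are simplified placeholders for human rating and a reinforcement-learning update; this is not the actual system described in the paper.

```python
import random

def rate(note: str) -> float:
    """Placeholder for human raters: returns a helpfulness score in [0, 1]."""
    return random.random()

def update_model(params: dict, reward: float) -> dict:
    """Placeholder for an RL update (e.g. a policy-gradient step) that
    nudges the note-writing model toward notes humans rate as helpful."""
    params["avg_reward"] = 0.9 * params["avg_reward"] + 0.1 * reward
    return params

params = {"avg_reward": 0.0}
for post in ["post A", "post B", "post C"]:
    note = f"Draft context note for {post}"  # the LLM writes a note
    reward = rate(note)                      # humans rate its helpfulness
    params = update_model(params, reward)    # feedback improves the model
    if reward > 0.5:                         # only helpful notes are published
        print("publish:", note)
```

The key design point is that humans stay in the loop twice: their ratings gate publication, and the same ratings feed back into training the model.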
Even with human checks, there’s a risk in leaning too heavily on AI, especially since users will be able to plug in LLMs from third parties.
OpenAI’s ChatGPT, for example, recently ran into trouble with a model that was overly sycophantic. If an LLM prioritizes “helpfulness” over accuracy, its AI-generated notes may end up flatly wrong.
There’s also a concern that human raters could be overwhelmed by the volume of AI-generated notes, sapping their motivation to keep doing this volunteer work.
For now, X plans to test AI-generated Community Notes for a few weeks before rolling them out more broadly if they prove effective.