According to a blog post published on Friday, OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election.
The company says the operation created AI-generated articles and social media posts, though it does not appear the content reached much of an audience.
This is not the first time OpenAI has banned accounts linked to state-affiliated actors for using ChatGPT maliciously. In May, the company disrupted five campaigns that were using ChatGPT to manipulate public opinion.
These episodes are reminiscent of state actors using social media platforms such as Facebook and Twitter in attempts to sway previous election cycles. Now similar groups, or possibly the same ones, are using generative AI to flood social channels with misinformation.
OpenAI appears to be taking a whack-a-mole approach, much like the social media companies before it, banning accounts associated with these efforts as they resurface.
OpenAI claims that a Microsoft Threat Intelligence report released last week facilitated its examination of this collection of accounts. The report identified the group, which it refers to as Storm-2035, as a component of a broader campaign to influence U.S. elections that has been in operation since 2020.
According to Microsoft, Storm-2035 is an Iranian network that operates multiple websites that mimic news agencies. The network is “actively engaging US voter groups on opposing ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.”
As demonstrated in other operations, the playbook is not necessarily intended to advance one policy over another but to incite discord and contention.
OpenAI identified five website fronts for Storm-2035, which posed as progressive and conservative news outlets with convincing domain names such as “evenpolitics.com.” The group used ChatGPT to draft several long-form articles, including one claiming that “X censors Trump’s tweets.” Elon Musk’s platform has done no such thing (if anything, Musk is encouraging former president Donald Trump to engage more on X).
On social media, OpenAI identified a dozen X accounts and one Instagram account controlled by this operation. The company says ChatGPT was used to rewrite various political comments, which were then posted on these platforms. One such tweet falsely and confusingly claimed that Kamala Harris attributes “increased immigration costs” to climate change, followed by the hashtag “#DumpKamala.”
OpenAI said it did not see evidence of Storm-2035’s articles being shared widely and noted that a majority of its social media posts received few to no likes, shares, or comments. This is often the case with such operations, which are quick and cheap to spin up using AI tools like ChatGPT.
As the election approaches and partisan bickering online intensifies, expect to see many more notices like this one.