
OpenAI Shuts Down Election Influence Operation That Used ChatGPT


OpenAI banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election, the company said in a blog post on Friday. The company says the operation created AI-generated articles and social media posts, though it doesn’t appear they reached a large audience.

This isn’t the first time OpenAI has banned accounts linked to state-backed actors misusing ChatGPT. In May, the company disrupted five campaigns that were using ChatGPT to manipulate public opinion.

These episodes are reminiscent of state actors using social media platforms like Facebook and Twitter to try to influence previous election cycles. Now, similar groups (or perhaps the same ones) are using generative AI to flood social media feeds with disinformation. Like social media companies, OpenAI appears to be taking a whack-a-mole approach, banning accounts tied to these activities as they arise.

OpenAI says its investigation into this cluster of accounts benefited from a Microsoft Threat Intelligence report published last week that identified the group (dubbed Storm-2035) as part of a broader campaign to influence the U.S. election that has been ongoing since 2020.

Microsoft said Storm-2035 is an Iranian network with multiple sites imitating news outlets and “actively engaging American voter groups on opposite ends of the political spectrum with polarizing messages on topics such as U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict.” The playbook, as other operations have shown, is not necessarily about promoting one policy or another, but about sowing dissent and conflict.

OpenAI identified five front sites for Storm-2035, presenting themselves as both progressive and conservative news outlets with convincing domains like “evenpolitics.com.” The group used ChatGPT to write several long-form articles, including one alleging that “X is censoring Trump’s tweets,” something Elon Musk’s platform definitely hasn’t done (if anything, Musk is encouraging former President Donald Trump to get more involved with X).

Example of a fake news site that publishes content generated by ChatGPT. Image credit: OpenAI

On social media, OpenAI identified more than a dozen X accounts and one Instagram account controlled by the operation. The company says ChatGPT was used to rewrite political commentary that was then posted on those platforms. One of those tweets falsely claimed that Kamala Harris attributed the “rising cost of immigration” to climate change, followed by “#DumpKamala.”

OpenAI says it has seen no evidence that Storm-2035’s articles were widely shared, and noted that most of its social media posts received few to no likes, shares, or comments. This is often the case for these operations, which can be launched quickly and cheaply using AI tools like ChatGPT. Expect to see many more such disclosures as the election approaches and partisan bickering intensifies online.

This article was originally published on techcrunch.com.
