AI Tools Fueling Online Influence Operations, Reports OpenAI

Online influence operations based in Russia, China, Iran, and Israel are increasingly using AI to manipulate the public, according to a new report from OpenAI. The report highlights how bad actors have utilized OpenAI’s tools, including ChatGPT.

This includes using these tools to generate social media comments in multiple languages, create fake accounts with fabricated names and bios, produce cartoons and other images, and even debug code for scripted automation features.

The first of its kind, OpenAI's report marks a significant step in shining a light on the misuse of AI tools. Since its public launch in November 2022, ChatGPT has gained over 100 million users, turning the start-up into a leading player in the AI industry.

However, despite the enhanced content production and reduced error rates offered by AI, these influence operations have not achieved significant traction with real audiences. Many posts received minimal authentic engagement, often being called out as fake by users.

“These operations may be using new technology, but they’re still struggling with the old problem of how to get people to fall for it,” said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team.

This sentiment echoes findings from Facebook owner Meta’s quarterly threat report, which noted that while some covert operations used AI to generate content, the advanced technology did not hinder Meta’s ability to disrupt manipulative efforts.

The rise of generative AI, capable of producing realistic audio, video, images, and text, has opened new avenues for fraud, scams, and manipulation. The potential for AI-generated fakes to disrupt elections is a growing concern as billions of people prepare to vote this year, including in the U.S., India, and the European Union.

In the past three months, OpenAI banned accounts linked to five covert influence operations. These operations aim to manipulate public opinion or political outcomes without disclosing the true identity or intentions of the actors involved. Notable among these are Russia’s Doppelganger and China’s Spamouflage, both well-known to social media companies and researchers.

Doppelganger, linked to the Kremlin by the U.S. Treasury Department, is notorious for spoofing legitimate news websites to undermine support for Ukraine. Spamouflage operates across numerous social media platforms and internet forums, promoting pro-China messages and attacking Beijing’s critics.

Both networks utilized OpenAI tools to generate multilingual comments posted across social media. Doppelganger also used AI to translate articles into English and French and to convert website articles into Facebook posts.

Spamouflage accounts used AI to debug code for a website targeting Chinese dissidents, analyze social media posts, and research news and current events. Many of their posts received replies only from other fake accounts within the same network.

Another previously unreported Russian network, banned by OpenAI, focused on spamming the messaging app Telegram. This network used AI to debug code for an automated posting program and to generate comments for its accounts. Like Doppelganger, its efforts aimed at undermining support for Ukraine, with posts addressing politics in the U.S. and Moldova.

Additionally, both OpenAI and Meta recently disrupted a campaign traced back to the Tel Aviv-based political marketing firm Stoic. Fake accounts posed as Jewish students, African-Americans, and concerned citizens, posting about the Gaza conflict, praising Israel’s military, and criticizing college antisemitism and the U.N. relief agency for Palestinian refugees.

These posts targeted audiences in the U.S., Canada, and Israel. Meta banned Stoic from its platforms and sent a cease-and-desist letter to the company.



ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.