In a groundbreaking report, OpenAI has exposed the use of artificial intelligence (AI) in online influence operations conducted by bad actors in Russia, China, Iran, and Israel. These actors used OpenAI’s flagship tool, ChatGPT, to generate a range of content, from social media comments to fake account biographies and even images and cartoons. The report, the first of its kind from the company, sheds light on how widely AI is being used in manipulation efforts. ChatGPT itself has amassed over 100 million users since its launch in November 2022.
While AI tools have helped these actors produce more content, with fewer errors and a more polished, engaging appearance, OpenAI’s report highlights that the influence operations have failed to gain significant traction with real people or reach large audiences. In fact, their posts have often been called out as fake by users. Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, remarks, “These operations may be using new technology, but they’re still struggling with the old problem of how to get people to fall for it.”
This finding is in line with Meta’s recently published quarterly threat report, which indicates that covert operations the company has dismantled also used AI to generate images, videos, and text. That said, the use of cutting-edge AI has not hampered Meta’s ability to disrupt these manipulation efforts. The proliferation of generative AI, capable of producing realistic audio, video, images, and text, presents new opportunities for fraud, scams, and manipulation.
The potential for AI-generated fakes to disrupt elections has become a significant concern. As billions of people worldwide head to the polls, including in the U.S., India, and the European Union, the fear of manipulation looms large. In response, OpenAI has banned accounts linked to five covert influence operations within the past three months. OpenAI defines these operations as attempts to manipulate public opinion or influence political outcomes while concealing the true identity or intentions of the actors behind them.
Two of the operations, Doppelganger and Spamouflage, are already well known to social media companies and researchers. Doppelganger, which the U.S. Treasury Department has linked to the Kremlin, specializes in spoofing legitimate news websites to undermine support for Ukraine. Spamouflage, described as the largest covert influence operation ever disrupted by Meta, operates across various social media platforms and internet forums, disseminating pro-China messages and attacking critics of Beijing. Both operations used OpenAI tools to generate comments in multiple languages and publish them on social media sites. The Russian network went a step further, using AI to translate articles and website content.
In addition to these well-known operations, OpenAI also uncovered a previously unreported Russian network focused on spamming the messaging app Telegram. OpenAI found that this network used AI to debug code for a program that posted automatically on Telegram and to generate comments from its accounts. Like Doppelganger, this operation aimed to undermine support for Ukraine, weighing in on political discussions concerning the U.S. and Moldova.
Another campaign was traced back to Stoic, a political marketing firm in Tel Aviv. This campaign involved fake accounts posing as Jewish students, African-Americans, and concerned citizens, primarily targeting audiences in the U.S., Canada, and Israel. These fake accounts weighed in on the war in Gaza, praised Israel’s military, and criticized college antisemitism and the U.N. relief agency for Palestinian refugees. Meta has since banned Stoic from its platforms and sent the company a cease and desist letter. OpenAI found that the Israeli operation used AI to generate and edit articles, comments, and fictitious personas for fake accounts on platforms such as Instagram, Facebook, and X. OpenAI also found evidence of activity from this network targeting elections in India.
Notably, none of the influence operations disrupted by OpenAI relied solely on AI-generated content. Ben Nimmo emphasizes that these actors combined human and AI efforts. While AI does offer some advantages, such as increased content production and improved translations, it does not address the primary challenge of distribution. Nimmo states, “You can generate the content, but if you don’t have the distribution systems to land it in front of people in a way that seems credible, then you’re going to struggle getting it across.”
OpenAI’s report underscores the need for continued vigilance in the fight against manipulation. As Nimmo warns, “This is not the time for complacency. History shows that influence operations, which spent years failing to get anywhere, can suddenly break out if nobody’s looking for them.” By staying proactive and deploying comprehensive strategies, societies can effectively combat the misuse of AI in influence operations and safeguard democratic processes.