Artificial intelligence (AI) is becoming increasingly adept at generating persuasive propaganda, according to a recent study conducted by researchers at Stanford University and Georgetown University. The study, which involved over 8,000 US adults, found that AI-generated propaganda is nearly as effective as real propaganda in influencing beliefs and opinions.
The researchers focused on six English-language articles believed to have originated from covert Iranian or Russian state-aligned propaganda campaigns. The articles contained false claims about US foreign relations, such as that Saudi Arabia had committed to funding the US-Mexico border wall or that the US had fabricated reports of the Syrian government using chemical weapons.
To assess the persuasive power of AI-generated propaganda, the research team used GPT-3, a large language model known for its fluent natural-language output. For each original article, they fed GPT-3 one or two of its sentences, along with three unrelated propaganda articles to serve as models of style and structure.
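In other words, the setup is a form of few-shot prompting: example articles establish the register, and a short excerpt seeds the topic. As a rough illustration only, here is a minimal sketch of how such a prompt might be assembled with the legacy OpenAI completions API; the model name, prompt layout, sampling parameters, and placeholder texts are all assumptions, not the study's actual configuration.

```python
# Illustrative sketch of a few-shot prompting setup like the one described
# above, using the legacy OpenAI Python SDK (openai<1.0). Reads the API key
# from the OPENAI_API_KEY environment variable. All specifics below are
# assumed for illustration, not taken from the study.
import openai

# Three unrelated articles, supplied only as models of style and structure
# (placeholders here).
style_examples = [
    "<full text of unrelated article 1>",
    "<full text of unrelated article 2>",
    "<full text of unrelated article 3>",
]

# One or two sentences from the target article, used to seed the topic.
seed_sentences = "<one or two sentences from the original article>"

# Concatenate the style examples and the seed into a single prompt,
# separated by a simple delimiter (an assumed format).
prompt = "\n\n---\n\n".join(style_examples) + "\n\n---\n\n" + seed_sentences

response = openai.Completion.create(
    model="davinci",   # a GPT-3 base model; assumed, not confirmed by the study
    prompt=prompt,
    max_tokens=600,    # enough room for a short article
    temperature=0.7,   # moderate sampling variability
)

# The model continues from the seed sentences in the style of the examples.
generated_article = seed_sentences + response["choices"][0]["text"]
```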
In December 2021, the researchers surveyed participants, presenting some with the actual propaganda articles and others with the AI-generated versions. Reading propaganda created by GPT-3 proved almost as effective as reading the real thing: while just over 24 percent of participants who saw no article believed the claims, that figure rose to more than 47 percent among those who read the original propaganda. Strikingly, the AI-generated material was only slightly less persuasive, with roughly 44 percent of readers agreeing with the claims.
The researchers cautioned that their study might underestimate the persuasive potential of large language models, as more advanced models have been released since the study was conducted. They expressed concerns that these improved models could be used by propagandists to mass-produce convincing propaganda material with minimal effort.
One risk the study highlights is that propagandists could use AI to flood citizens with a large number of distinct articles, increasing the volume of propaganda while making it harder to detect. Variation in the style and wording of AI-generated articles could create the impression that the content reflects the views of real people or comes from genuine news sources.
“While the possibility that our paper would give propagandists new ideas is a concern, it is outweighed by the importance of assessing the potential risks to society,” the researchers wrote.
The study suggests that future research should focus on strategies for detecting and guarding against the misuse of language models in propaganda campaigns. Identifying the infrastructure needed to deliver propaganda to target audiences will become increasingly important.
The findings point to a concerning reality: AI-generated propaganda is a persuasive threat with the potential to significantly shape public opinion and beliefs. As AI technology continues to advance, it is imperative to develop measures that mitigate its misuse for propaganda.