Rise of Deepfake Audio: A Growing Concern for Election Manipulation
January 24, 2024
Disinformation experts are on high alert following the circulation of a deepfake audio message supposedly from U.S. President Joe Biden. In the message, a voice altered to sound like Biden urged voters in New Hampshire not to cast their ballots in the upcoming Democratic primary. The incident has sparked concerns about the growing use of deepfake technology in politics, specifically the manipulation of audio recordings to sway public opinion and undermine the integrity of elections.
What makes deepfake audio particularly worrisome is its ease of manipulation, low cost of production, and difficulty of tracing. This combination of factors poses a significant challenge for election systems that are ill-prepared to combat the emerging threat. “The political deepfake moment is here,” warns Robert Weissman, president of the consumer advocacy group Public Citizen. He calls on lawmakers to implement protections against fake audio and video recordings to prevent “electoral chaos.”
While deepfake video and image generators have raised concerns in recent years, it is the rise of deepfake audio that has caught experts' attention. With a convincing phone message and access to a voter registration database, a malicious actor possesses a formidable weapon capable of swaying election outcomes. The use of artificial intelligence-powered video and image generators, coupled with the increasing investment in voice-cloning startups, emphasizes the urgency of addressing this issue.
The recent fake Biden message is not the first recorded incident of audio deepfakes in politics. Last year, ahead of Slovakia’s parliamentary elections, audio deepfakes were spread on social media platforms, including one that appeared to show party leader Michal Simecka discussing a plan to purchase votes. Campaigns are increasingly relying on AI software for mass communication, amplifying the potential impact of deepfake audio in the political landscape.
Determining the origins of the fake Biden message proves especially challenging as it was spread via telephone rather than online. According to Joan Donovan, an assistant professor of journalism and emerging media studies at Boston University, audio messages delivered by phone lack the same digital trail as those shared online. This further highlights the evolving nature of “dirty tricks” in the realm of disinformation.
The concerns go beyond influencing public opinion. The fake Biden clip revealed a disturbing trend: bad actors are now using deepfakes to discourage voter turnout altogether. Even if such misinformation confuses only a few hundred or a few thousand voters, it can still have a meaningful impact on election outcomes, warns Nick Diakopoulos, a professor at Northwestern University specializing in manipulated audio and elections.
While the U.S. Federal Election Commission has taken some steps toward regulating political deepfakes, much more needs to be done. Some states have proposed their own laws to address this issue, and election officials are carrying out training exercises to prepare for potential attacks. However, deepfake detection tools are still in their early stages, and their effectiveness remains inconclusive.
As deepfake technology continues to advance, the need for robust protections and countermeasures becomes increasingly urgent. Failure to address this issue could undermine the very foundation of democratic processes, making elections vulnerable to manipulation and false information. With the stakes so high, it is up to lawmakers, researchers, and technology developers to collaborate and find effective solutions that will safeguard the integrity of our democratic systems.