Deepfakes, the AI-generated videos and images that convincingly depict people saying or doing things they never actually did, have gained popularity this year. From Tom Hanks promoting dental plans to Pope Francis donning a stylish puffer jacket, these deepfakes have fascinated and entertained audiences around the world. However, as the U.S. presidential election approaches, questions arise about the potential impact of deepfakes on political campaigns. With Google announcing plans to label AI-generated political advertisements that could manipulate a candidate’s voice or actions, lawmakers are now pressuring social media giants X (formerly Twitter), Facebook, and Instagram to follow suit.
Two Democratic members of Congress, U.S. Sen. Amy Klobuchar of Minnesota and U.S. Rep. Yvette Clarke of New York, have expressed “serious concerns” about the presence of AI-generated political ads on these platforms. They have sent a letter to Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino, urging them to explain any rules they are developing to combat the harmful effects of deepfakes on free and fair elections. Klobuchar insists, “We are simply asking them, ‘Can’t you do this? Why aren’t you doing this?’ It’s clearly technologically possible.”
The lawmakers' letter warns of the potential dangers of a lack of transparency surrounding AI-generated content in political ads. With the 2024 elections approaching, the spread of election-related misinformation and disinformation across these platforms could have severe consequences on voters' perceptions of candidates and issues. X and Meta have yet to respond to the letter, but the pressure is mounting as Klobuchar and Clarke actively promote legislation to regulate AI-generated political ads.
Clarke’s House bill, introduced earlier this year, aims to amend a federal election law to require clear labels on election ads that contain AI-generated images or video. She emphasizes the need to prioritize transparency and ensure that the American people are aware that the content they encounter is fabricated. Klobuchar, who is sponsoring companion legislation in the Senate, believes that such measures are the bare minimum necessary.
In the absence of concrete legislation, the lawmakers hope that major social media platforms will take the lead in implementing their own guidelines. Google has announced that it will require disclaimers on AI-generated political ads that alter people or events on its platforms; Meta, the parent company of Facebook and Instagram, has a policy restricting “faked, manipulated, or transformed” audio and imagery used for misinformation, but no comparable labeling requirement for AI-generated political ads. A bipartisan Senate bill, co-sponsored by Klobuchar and Republican Sen. Josh Hawley of Missouri, goes further, banning “materially deceptive” deepfakes related to federal candidates, with exceptions for parody and satire.
The 2024 election has already witnessed the influence of AI-generated ads. The Republican National Committee aired an ad in April that depicted a dystopian vision of the United States if President Joe Biden were to be reelected. The ad featured realistic yet fake images of boarded-up storefronts, military patrols, and panic-inducing waves of immigrants. Klobuchar argues that under the proposed Senate bill’s regulations, such ads would likely be banned. Similarly, a fake image of Donald Trump embracing Dr. Anthony Fauci, used in an attack ad by Trump’s GOP primary opponent, Florida Gov. Ron DeSantis, would also be prohibited.
The potential for deepfakes to spread misinformation and mislead voters is a significant concern for Klobuchar. She points to a deepfake video from earlier this year that appeared to show Democratic Sen. Elizabeth Warren suggesting restrictions on Republicans voting. In a presidential race, these false statements attributed to candidates could have severe consequences, making it difficult for voters to distinguish truth from falsehood. Klobuchar, who chaired a Senate hearing on AI and the future of elections in September, believes that addressing these concerns is crucial for preserving the integrity of democratic processes.
While some skeptics argue that deepfakes have not yet played a substantial role in misleading voters, Klobuchar and Clarke remain committed to their cause. Clarke’s bill, if passed, would empower the Federal Election Commission (FEC) to enforce a disclaimer requirement similar to Google’s. The FEC has begun the process of potentially regulating deepfakes in political ads, with a petition from advocacy group Public Citizen open for public comment until October 16.
As the power and sophistication of AI continue to advance, the need for guidelines and regulations becomes increasingly urgent. Deepfakes pose a unique threat to the integrity of elections and the functioning of democracy. Lawmakers’ questioning of social media platforms is an essential step toward ensuring transparency and protecting voters from the spread of misinformation. With the 2024 elections looming, it is imperative that measures are in place to safeguard the democratic process from the influence of AI-generated deepfakes.