Major technology companies, including Adobe, Amazon, Google, Meta (parent company of Facebook and Instagram), Microsoft, OpenAI, and TikTok, have pledged to adopt “reasonable precautions” to prevent artificial intelligence (AI) tools from being used to disrupt democratic elections. The tech executives gathered at the Munich Security Conference to announce a voluntary framework for how they will respond to AI-generated deepfakes that deliberately deceive voters. Thirteen additional companies, including IBM and Elon Musk’s X, are also signing on to the accord.
The accord acknowledges that the challenge is too large for any single actor to solve. “Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, Meta’s president of global affairs, in an interview.
While the accord itself is largely symbolic, it highlights growing concern over realistic AI-generated images, audio, and video that can deceptively fake or alter the appearance, voice, or actions of political candidates and other key figures. The companies are not committing to banning or removing deepfakes; instead, they are focusing on detecting and labeling deceptive AI content when it appears on their platforms. They also pledge to share best practices and to respond swiftly and proportionately when such content spreads.
The agreement comes at a crucial time, as more than 50 countries are set to hold national elections in 2024. Instances of AI-generated election interference have already been reported, such as AI robocalls mimicking U.S. President Joe Biden’s voice discouraging people from voting in the New Hampshire primary election last month.
While the accord seeks to address AI-generated deepfakes, it also acknowledges the importance of context and safeguarding various forms of expression, such as educational, documentary, artistic, satirical, and political content. The companies aim to be transparent about their policies on deceptive AI election content and educate the public about how to recognize and avoid falling for AI fakes.
Despite the accord, concerns remain that the commitments are vague and lack binding requirements, which may disappoint pro-democracy activists and watchdogs seeking stronger assurances. However, the companies involved have previously stated their commitment to implementing safeguards on their generative AI tools and labeling AI-generated content to inform users of its authenticity.
In the absence of federal legislation regulating AI in politics, U.S. AI companies have largely been left to govern themselves, though states are exploring their own rules, particularly for elections. The Federal Communications Commission recently ruled that AI-generated voices in robocalls are illegal, but that ruling does not cover audio deepfakes shared on social media or in campaign ads.
The accord is a positive step toward addressing the threat of AI-generated deepfakes in elections, but experts caution that other forms of misinformation, from AI-generated content to cruder, traditionally edited “cheapfakes,” remain a significant threat. They also point to content recommendation systems that prioritize engagement over accuracy as an area social media companies must address to combat misinformation effectively.
In addition to the major platforms involved in the agreement, other signatories include chatbot developers Anthropic and Inflection AI, voice-clone startup ElevenLabs, chip designer Arm Holdings, security companies McAfee and Trend Micro, and Stability AI, known for its image-generator Stable Diffusion. Notably absent from the accord is Midjourney, another popular San Francisco-based AI image-generator.
The inclusion of Elon Musk’s X in the accord is surprising, given Musk’s self-described stance as a “free speech absolutist.” However, X CEO Linda Yaccarino emphasized that every citizen and company has a responsibility to safeguard free and fair elections, and pledged X’s commitment to combating AI threats while protecting free speech and promoting transparency.
While the accord may not provide all the guarantees that pro-democracy activists and watchdogs seek, it represents a significant step forward in addressing the potential for AI-generated deepfakes to disrupt democratic elections. Through voluntary commitments and collaboration, tech giants and other companies are taking action to protect the integrity of elections worldwide.