In a move to combat the potential misuse of artificial intelligence (AI) in political advertising, the Federal Communications Commission (FCC) has proposed new rules requiring political advertisers to disclose the use of AI-generated content in broadcast television and radio ads. The proposal aims to increase transparency as AI tools continue to advance rapidly and produce lifelike images, videos, and audio clips that have the potential to mislead voters.
FCC Chair Jessica Rosenworcel expressed the commission’s intention to ensure that consumers are fully informed about the use of AI in political ads. She stated, “Today, I’ve shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see, and I hope they swiftly act on this issue.”
The proposal is the FCC's second initiative this year to address the growing use of AI tools in political communications. Previously, the commission declared AI voice-cloning tools in robocalls illegal after an incident during New Hampshire's primary election, in which automated calls used voice-cloning software to imitate President Joe Biden and discourage people from voting.
If adopted, the proposal would require broadcasters to verify with political advertisers whether AI tools were used to generate their content. The FCC has authority over political advertising on broadcast channels under the 2002 Bipartisan Campaign Reform Act. Specific details of the proposal, such as the manner in which AI-generated content should be disclosed, remain to be discussed.
One challenge for the FCC is establishing a workable definition of AI-generated content. As retouching tools and other AI features become standard in creative software, the line between AI-generated and manually crafted content is increasingly blurred. The regulatory process would likely refine the initial definition put forth by Rosenworcel, which currently covers AI-generated voices that sound like human voices and AI-generated actors that appear to be human.
The proposal comes in response to heavy experimentation with generative AI by political campaigns. Campaigns' use of AI to build chatbots, create videos, and fabricate images has raised concerns about misinformation and manipulation. Misleading AI-generated content has already appeared globally, including in India's elections, where videos misrepresenting Bollywood stars were circulated to criticize the prime minister.
Lawmakers from both sides of the aisle have called for legislation to regulate the use of AI in politics. Senators Amy Klobuchar (Democrat) and Lisa Murkowski (Republican) introduced a bipartisan bill that would require political ads to carry a disclaimer if they were created or significantly altered using AI, with violations addressed by the Federal Election Commission.
While the FCC's authority to address AI-related threats is limited, Rosenworcel hopes to establish transparency standards ahead of the 2024 election. A spokesperson for Rosenworcel acknowledged that the proposal represents the maximum transparency standards enforceable under the FCC's jurisdiction, and called on government agencies and lawmakers to build upon this first step in regulating the use of AI in political advertising.
The proposed rules aim to keep pace with the accessibility and affordability of generative AI technology, ensuring that voters are better informed and protected against misleading and manipulated content. By requiring disclosure, the FCC intends to create a more transparent political advertising landscape and address the risks associated with the misuse of AI in shaping public opinion.