In a groundbreaking move to combat the rising threat of deepfakes in elections, an Arizona state representative has used artificial intelligence (AI) to help craft legislation. Alexander Kolodin, a Republican, enlisted the AI chatbot ChatGPT to define the term “deepfake” for the bill. The legislation, which passed unanimously in both chambers and was signed by the Democratic governor, allows candidates and residents to seek a judge’s declaration on whether a piece of media is a deepfake. The law gives candidates a means to debunk AI-generated misinformation before it can harm their campaigns.
Kolodin, who acknowledges he is not an expert in computer science, turned to ChatGPT to help define the technical aspects of deepfakes. “I thought to myself, well, let me just ask the subject matter expert,” he said in an interview. “And so I asked ChatGPT to write a definition of what was a deepfake.” Kolodin provided a screenshot of ChatGPT’s response, whose language closely matches the definition incorporated into the bill.
The use of AI in political campaigns has become a pressing concern, prompting calls for government regulation. While the federal government has yet to act, several states have proposed bills to address deepfakes. Kolodin’s approach, however, stops short of outright prohibition or restriction. Instead, the legislation asks the courts to determine the veracity of contested deepfakes. According to Kolodin, removing deepfakes entirely would be futile and would raise First Amendment issues. Under the law, candidates can instead seek a court’s declaration and use that ruling in their counter-messaging.
Alongside addressing deepfakes, the bill also tackles the issue of disclaimers. Rather than mandating disclaimers, as other states have done, the legislation provides that a court action would be dismissed if the publisher had conveyed that the image or video was a deepfake, or if it would be evident to a reasonable person that it was not real. Kolodin expressed concern that disclaimers impede freedom of speech and can dilute the impact of certain messages. He cited an example in which a deepfake video of a politician was clearly satirical, and a prescribed label could have diminished its journalistic impact.
Kolodin is optimistic that his bill will serve as a model for other states. He emphasizes the importance of balancing regulation with the preservation of speech rights. “I think deepfakes have a legitimate role to play in our political discourse,” he said. By offering a mechanism to address the issue without infringing on speech, Kolodin hopes to prevent well-intentioned regulations from stifling freedom of expression.
As the use of AI in elections continues to evolve, this legislation showcases how the technology can be harnessed for the public good. By leveraging AI in the lawmaking process, Kolodin not only sped up the drafting of his bill but also demonstrated the benefits of collaboration between humans and machines. As deepfake technology matures, the need for comprehensive regulation grows more pressing, and the success of this Arizona bill may lay the groundwork for future legislative efforts nationwide.