Legislation Introduced to Address Deepfake Dangers

In a bipartisan effort to combat the dangers of deepfake technology, legislation has been introduced in the House that would require the identification and labeling of online images, videos, and audio generated using artificial intelligence (AI). Deepfakes are AI-generated content that can be indistinguishable from genuine material. These technologies have been used to mimic the voices of public figures, exploit the likenesses of celebrities, and even impersonate world leaders. The potential for misinformation, sexual exploitation, consumer scams, and a loss of trust is a growing concern.

Under the proposed legislation, AI developers would be required to tag content produced by their technologies with digital watermarks or metadata to enable identification. Online platforms like TikTok, YouTube, and Facebook would then have to label the content to notify users that it was generated using AI. The exact details of the rules would be determined by the Federal Trade Commission, with input from the National Institute of Standards and Technology.

The legislation aims to address the issue of deepfakes head-on. Rep. Anna Eshoo, a Democratic sponsor of the bill, emphasized the importance of providing the American people with the ability to distinguish deepfakes from genuine content. She stated, “To me, the whole issue of deepfakes stands out like a sore thumb. It needs to be addressed, and in my view, the sooner we do it, the better.”

If passed, the bill would complement voluntary commitments by tech companies and an executive order on AI signed by President Joe Biden last fall. The executive order directed federal agencies, including the National Institute of Standards and Technology, to establish guidelines for AI products and required AI developers to share information about the risks associated with their products.

The introduction of this legislation is just one step in addressing the concerns surrounding AI. Both Republicans and Democrats agree that regulation is necessary to protect citizens while also allowing the field of AI to continue developing for the benefit of industries like healthcare and education. However, it is unlikely that meaningful rules for AI will be passed in time for them to take effect before the 2024 election.

Several organizations and AI developers have expressed support for the bill, seeing it as a step toward safer AI. Margaret Mitchell, Chief AI Ethics Scientist at Hugging Face, a company that has developed a ChatGPT rival called BLOOM, praised the bill’s focus on embedding identifiers in AI content through watermarking. She believes this will help the public regain control over the role of AI-generated content in society, as it becomes increasingly difficult to distinguish AI-created from human-generated material.

The bill is now set to be reviewed by lawmakers, with the hope of ultimately protecting consumers, children, and national security by requiring the identification of deepfakes. Republican Rep. Neal Dunn, another sponsor of the bill, described the identification of deepfakes as a “simple safeguard” that will benefit society as a whole.

As the field of AI continues to advance, the regulation of technologies like deepfakes becomes crucial. This bipartisan effort demonstrates the recognition of the risks associated with AI and the commitment to finding a balance between innovation and safeguarding society.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.