Researchers Develop AntiFake Tool

When it comes to deepfakes, the line between reality and fiction is becoming increasingly blurred. Just ask Scarlett Johansson, who recently took legal action against an AI app maker for using her voice and face without her consent. Deepfakes, or manipulated media that convincingly imitates real people, have become a rampant issue on the internet. In fact, according to two recent AI surveys, about half of the respondents couldn’t distinguish between synthetic and human-generated content.

This poses a particular problem for celebrities, who constantly find themselves playing a game of whack-a-mole with AI bots. But now, researchers at Washington University in St. Louis are developing a new tool called AntiFake that could help combat deepfake abuses. “Generative AI has become such an enabling technology that we think will change the world,” said Ning Zhang, assistant professor of computer science and engineering at the university. “However, when it’s being misused, there has to be a way to build up a layer of defense.”

AntiFake works by scrambling the audio signal just enough to confuse AI-based synthesis engines, making it difficult for them to produce convincing copies. The modified track still sounds normal to the human ear, but to the model it is distorted. Zhang compares the approach to the University of Chicago’s Glaze, a similar tool that protects visual artists from having their work scraped for generative AI models. His research team will present AntiFake at a major security conference in Denmark later this month, although it remains to be seen how well the technique will scale.
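
To give a rough sense of the idea, here is a minimal sketch in Python of an adversarial-perturbation loop of the kind that description suggests. The `speaker_encoder` model, the perturbation budget `epsilon`, and the optimizer settings are all illustrative assumptions, not AntiFake’s published code.

```python
import torch

# Minimal, illustrative sketch of "voice cloaking": nudge the waveform so a
# voice-embedding model no longer recognizes the speaker, while keeping the
# change too small to hear. Not AntiFake's actual implementation.
def cloak_voice(waveform: torch.Tensor,
                speaker_encoder: torch.nn.Module,
                epsilon: float = 0.002,
                steps: int = 100,
                lr: float = 1e-3) -> torch.Tensor:
    original = speaker_encoder(waveform).detach()
    delta = torch.zeros_like(waveform, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Minimize similarity between the perturbed audio's voice embedding
        # and the original, so a cloning model trained on the perturbed file
        # learns the "wrong" voice.
        similarity = torch.nn.functional.cosine_similarity(
            speaker_encoder(waveform + delta), original, dim=-1).mean()
        similarity.backward()
        optimizer.step()
        # Keep the perturbation below a small amplitude budget so the track
        # still sounds normal to a human listener.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (waveform + delta).detach()
```

The hard part in practice is finding a perturbation that stays inaudible yet transfers across many different synthesis engines, which is precisely the scaling question Zhang’s team still has to answer.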

While AntiFake offers a proactive way to protect a person's speech before it can be cloned, other solutions approach the problem from the detection side. Watermarking tools such as Google’s SynthID and Meta’s Stable Signature embed digital watermarks in AI-generated content to help users identify it, and companies like Pindrop and Veridas analyze tiny details, such as how words sync up with a speaker’s mouth, to determine whether a recording is fake.
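
For intuition only, the watermarking idea can be illustrated with a toy spread-spectrum scheme: a faint pseudo-random pattern derived from a secret key is mixed into the audio and later recovered by correlation. The function names, key-based pattern, and thresholds below are hypothetical, and this sketch does not describe how SynthID or Stable Signature actually work.

```python
import numpy as np

# Toy watermark: add a faint key-derived pseudo-random pattern, then detect it
# later by correlating the audio against the same pattern. Purely illustrative.
def embed_watermark(audio: np.ndarray, key: int, strength: float = 1e-3) -> np.ndarray:
    pattern = np.random.default_rng(key).standard_normal(audio.shape)
    return audio + strength * pattern

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 5e-4) -> bool:
    pattern = np.random.default_rng(key).standard_normal(audio.shape)
    # For unmarked audio this correlation hovers near zero; for marked audio
    # it concentrates around the embedding strength.
    return float(np.mean(audio * pattern)) > threshold
```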

However, Siwei Lyu, a computer science professor at the University of Buffalo, notes that these solutions only work on content that has already been published. Unauthorized videos can exist online for days before being flagged as deepfakes. “Even if the gap between this thing showing up on social media and being determined to be AI-generated is only a couple of minutes, it can cause damage,” Lyu said.

Rupal Patel, a professor of applied artificial intelligence at Northeastern University, stresses the need to balance protection against deepfake abuses with the positive potential of generative AI, which can do remarkable things such as helping people who have lost their voices speak again. The key to preventing abuse, she emphasizes, is consent.

In fact, the U.S. Senate is currently discussing a bipartisan bill called the “NO FAKES Act of 2023” that would hold deepfake creators liable for using people’s likenesses without authorization. The bill aims to establish a uniform federal law to protect individuals' right of publicity, which currently varies from state to state. However, a federal law may still be years away.

As the battle against deepfakes continues, it is crucial to strike a balance between safeguarding against abuses and preserving the potential of generative AI. With new tools like AntiFake on the horizon, individuals may have a better chance of protecting their voices and identities in a world where AI is increasingly adept at imitating reality.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.