White House and Tech Giants Take Action Against Deepfake Crisis

The White House and tech giants are joining forces to combat the harms of deepfake technology, focusing specifically on the creation and dissemination of sexually explicit AI-generated images. In recent years, advances in generative AI tools have made it increasingly easy to manipulate a person’s likeness and produce realistic deepfake images that are then shared across platforms. The victims of these deepfakes, whether celebrities or children, often have little recourse to stop their images from being used in this way.

Acknowledging the urgent need to address this issue, President Joe Biden’s administration is calling on companies in the tech industry and financial institutions to take voluntary action in the absence of federal legislation. The goal is to curb the creation, spread, and monetization of nonconsensual AI-generated images, particularly explicit images involving children. The White House is looking for commitments from AI developers, payment processors, financial institutions, cloud computing providers, search engines, and mobile app store gatekeepers like Apple and Google.

Arati Prabhakar, Biden’s chief science adviser and director of the White House’s Office of Science and Technology Policy, emphasized the severity of the issue, stating, “As generative AI broke on the scene, everyone was speculating about where the first real harms would come. And I think we have the answer. We’ve seen an acceleration because of generative AI that’s moving really fast. And the fastest thing that can happen is for companies to step up and take responsibility.”

The administration’s call to action involves disrupting the monetization of image-based sexual abuse, including limiting payment access to sites that advertise explicit images of minors. It also asks cloud service providers and mobile app stores to rein in web services and applications that are marketed for creating or altering sexual images without individuals’ consent. Additionally, the call to action emphasizes the importance of helping survivors remove both AI-generated and real explicit images from online platforms.

The issue of deepfake technology and its impact on individuals is not new. One of the most widely known victims of pornographic deepfake images is Taylor Swift, whose fanbase rallied against the abusive AI-generated images that circulated on social media. Microsoft, whose AI visual design tool was linked to some of the Swift images, promised to strengthen its safeguards in response. Schools are grappling with the issue as well, with AI-generated deepfake nudes of students surfacing, in some cases created and shared by fellow teenagers.

The Biden administration has previously worked with major technology companies to establish voluntary safeguards on new AI systems and has signed an executive order guiding the safe development of AI, which also acknowledged the emerging problem of AI-generated child abuse imagery. However, the administration recognizes that legislation is needed to back these safeguards fully.

While the voluntary commitments from companies are an important step, Jen Klein, director of the White House Gender Policy Council, notes that they do not replace the need for congressional action. Existing laws already prohibit the creation and possession of sexual images of children, whether real or AI-generated. Recently, federal prosecutors charged a Wisconsin man with using an AI image generator to create thousands of realistic images of minors engaged in sexual conduct.

One of the challenges in addressing this crisis is the lack of oversight over the tools and services that enable the creation of deepfake images. Some of these tools are hosted on commercial websites that reveal little about their creators or underlying technology. In one high-profile case, LAION, a large image database used to train AI image generators such as Stable Diffusion, was found to contain thousands of images of suspected child sexual abuse.

Prabhakar emphasizes that the problem extends beyond open-source AI technology, stating, “It’s a broader problem. Unfortunately, this is a category that a lot of people seem to be using image generators for. And it’s a place where we’ve just seen such an explosion. But I think it’s not neatly broken down into open-source and proprietary systems.”

As the deepfake crisis worsens, the White House’s urgent call to action is a crucial step toward combating the harmful effects of AI-generated abusive images. With voluntary cooperation from companies and stricter regulations to follow, officials hope to curb the creation, spread, and monetization of deepfakes, offering a measure of protection to those most vulnerable to these abuses, particularly women, girls, and minors.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.