Warning about AI-Generated Child Sexual Abuse Images

The Internet Watch Foundation (IWF) has warned that a flood of AI-generated child sexual abuse images could overrun the internet, causing immense harm to children and posing significant challenges for law enforcement agencies. Dan Sexton, the IWF’s chief technology officer, emphasized the urgency of the issue, stating that “we’re not talking about the harm it might do. This is happening right now and it needs to be addressed right now.”

The IWF’s report highlights disturbing instances in which AI tools have already been used to create and distribute explicit images of children. In one case in South Korea, a man was sentenced to prison for using AI to create virtual child abuse images, and there have been reports of teenagers using AI apps to make their peers appear nude in photos. These examples demonstrate the dark side of generative AI systems, which allow users to describe what they want to produce and have the system generate it.

The proliferation of deepfake child sexual abuse images poses serious challenges to law enforcement agencies worldwide. Investigators risk being overwhelmed trying to rescue children who turn out to be virtual characters, and the images can also be used by perpetrators to groom and coerce new victims. The IWF’s analysis revealed a high demand for new images created using the faces of famous children and of existing victims of abuse, a trend Dan Sexton describes as “incredibly shocking.”

The IWF became aware of the issue earlier this year and launched an investigation into dark web forums where abusers were trading tips on generating sexually explicit images. They found a growing amount of content being created, increasing the risk to victims. The IWF’s report emphasizes the need for governments to strengthen laws and regulations to combat the use of AI in generating child sexual abuse images. The European Union, in particular, is called upon to reconsider surveillance measures that can automatically scan messaging apps for suspected images of abuse.

Technology providers also have a role to play in addressing this issue. The report suggests they should make it harder for their products to be used in this way, though that can be challenging given the openness of some AI tools. While certain AI image generators include mechanisms to block the creation of child abuse material, abusers have favored open-source tools such as Stable Diffusion, which lack such restrictions.

The IWF report comes ahead of a global AI safety gathering hosted by the British government, which aims to promote discussion of the risks posed by AI technology. Susie Hargreaves, CEO of the IWF, believes it is crucial to raise awareness about the realities of the problem, stating, “while this report paints a bleak picture, I am optimistic.” It is essential to have open conversations about the challenges and dangers posed by AI-generated child sexual abuse images and to work toward effective solutions.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.