Software engineer arrested for AI-generated child sexual abuse images

In a disturbing case, a software engineer in the US has been arrested for allegedly creating and distributing thousands of AI-generated images of child sexual abuse material (CSAM). Steven Anderegg, 42, is alleged to have used Stable Diffusion, a text-to-image generative artificial intelligence (GenAI) model, to produce the images.

According to court documents, many of the AI-generated images depicted nude or partially clothed minors engaging in explicit sexual acts or lasciviously displaying their genitals. Anderegg also reportedly communicated with a 15-year-old boy, describing his process for creating the images and sending some of them to the minor via Instagram direct messages.

The case came to the attention of law enforcement when Instagram reported Anderegg’s account for distributing CSAM, prompting a CyberTip from the National Center for Missing and Exploited Children (NCMEC). Commenting on the arrest, Deputy Attorney General Monaco stated, “Technology may change, but our commitment to protecting children will not. CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”

This case is one of the first instances where the FBI has charged an individual for utilizing AI in the creation of child sexual abuse material. Principal Deputy Assistant Attorney General Nicole Argentieri emphasized the illegality of using AI for such purposes, stating, “Today’s announcement sends a clear message: using AI to produce sexually explicit depictions of children is illegal, and the Justice Department will not hesitate to hold accountable those who possess, produce, or distribute AI-generated child sexual abuse material.”

Anderegg now faces four counts related to creating, distributing, and possessing child sexual abuse material, as well as sending explicit material to a child under 16. If convicted, he could face a maximum sentence of approximately 70 years in prison. The seriousness of these charges underscores the urgent need to combat the use of AI in the creation of CSAM.

The NCMEC expressed deep concern over this troubling trend, noting that bad actors can use artificial intelligence to generate deepfake sexually explicit images or videos from any photograph of a real child. The organization’s report highlights the danger of AI-generated CSAM, which can depict computer-generated children in disturbing and graphic sexual acts.

As society continues to grapple with the advancement and implications of AI technology, it is crucial that lawmakers and law enforcement agencies remain vigilant in identifying and prosecuting those who misuse AI for criminal purposes. The arrest of Steven Anderegg serves as a stark reminder that the ethical use of AI must be prioritized, especially in protecting the well-being and safety of our most vulnerable individuals: our children.


Written By

Jiri Bílek

In the vast realm of AI and U.N. directives, Jiri crafts tales that bridge tech divides. With every word, he champions a world where machines serve all, harmoniously.