AI-generated images of child sexual abuse could flood the Internet.

The already alarming proliferation of child sexual abuse images on the Internet could get much worse if something is not done to rein in the artificial intelligence tools that generate deepfake photos, a watchdog agency warned Tuesday.

In a written report, the UK-based Internet Watch Foundation urges governments and technology providers to act quickly, before a flood of AI-generated child sexual abuse images overwhelms law enforcement investigators and greatly expands the pool of potential victims.

"We're not talking about the damage it could cause," said Dan Sexton, the watchdog group's chief technology officer. "This is happening right now and it needs to be addressed right now."

In a first-of-its-kind case in South Korea, a man was sentenced in September to two and a half years in prison for using artificial intelligence to create 360-degree virtual images of child abuse, according to the Busan District Court in the country's southeast.

In some cases, children use these tools on each other. At a school in southwestern Spain, police have been investigating the alleged use of a phone app by teenagers to make their fully clothed schoolmates appear naked in photographs.

The report exposes a dark side to the race to build generative AI systems that allow users to describe in words what they want to produce – from emails to artwork to novel videos – and have the system spit it out.

If not stopped, the flood of fake images of child sexual abuse could bog down investigators trying to rescue children who turn out to be virtual characters. Perpetrators could also use the images to groom and coerce new victims.

Sexton said IWF analysts discovered famous children's faces online, as well as a "massive demand for the creation of more images of children who have already been abused, possibly years ago."

"They are taking existing real content and using it to create new content for these victims," he said. "That's incredibly shocking."

Sexton said his charity, which focuses on combating online child sexual abuse, began receiving reports of AI-generated abusive images earlier this year. That led to an investigation into so-called dark web forums, a part of the Internet hosted within an encrypted network and accessible only through tools that provide anonymity.

What IWF analysts found were abusers sharing advice and marveling at how easy it was to turn their home computers into factories generating sexually explicit images of children of all ages. Some are also marketing and trying to profit from such images, which look increasingly realistic.

"What we're starting to see is this explosion of content," Sexton said.

While the IWF report aims to point out a growing problem rather than offer prescriptions, it calls on governments to strengthen laws to make it easier to combat AI-generated abuses. It is particularly aimed at the European Union, where there is debate over surveillance measures that could automatically scan messaging apps for suspected images of child sexual abuse, even if authorities are not previously aware of the images.

A big goal of the group's work is to prevent previous victims of sexual abuse from being abused again by redistributing their photographs.

The report says technology providers could do more to make it harder for the products they have created to be used this way, though that effort is complicated by the fact that some of the tools are hard to put back in the bottle.

Last year saw the introduction of a series of new AI image generators that surprised the public with their ability to conjure up whimsical or photorealistic images on command. But most of them are not favored by producers of child sexual abuse material because they contain mechanisms to block it.

Technology providers with closed AI models, which give them full control over how the models are trained and used (for example, OpenAI's DALL-E image generator), appear to have been more successful at blocking misuse, Sexton said.

By contrast, a tool favored by producers of child sexual abuse images is the open-source Stable Diffusion, developed by London-based startup Stability AI. When Stable Diffusion burst onto the scene in the summer of 2022, a subset of users quickly learned how to use it to generate nudity and porn. While most of that material depicted adults, it was often non-consensual, such as when it was used to create celebrity-inspired nude photographs.

Stability subsequently implemented new filters that block unsafe and inappropriate content, and a license to use Stability software also includes a prohibition on illegal uses.

In a statement released Tuesday, the company said it “strictly prohibits any misuse for illegal or immoral purposes” on its platforms. "We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes," the statement read.

However, users can still access older, unfiltered versions of Stable Diffusion, which are "overwhelmingly the software of choice... for people who create explicit content involving children," said David Thiel, chief technologist at the Stanford Internet Observatory, another watchdog group that studies the problem.

"You can't regulate what people do on their computers, in their bedrooms. It's not possible," Sexton added. "So how do you get to the point where they can't use openly available software to create harmful content like this?"

Most AI-generated child sexual abuse images would be considered illegal under current laws in the US, UK and elsewhere, but it remains to be seen whether authorities have the tools to combat them.

The IWF report is timed ahead of a global meeting on AI safety next week hosted by the British government that will include high-profile attendees, including US Vice President Kamala Harris, and tech leaders.

“While this report paints a bleak picture, I am optimistic,” IWF chief executive Susie Hargreaves said in a prepared written statement. She said it's important to communicate the reality of the problem to "a wide audience because we need to discuss the darker side of this amazing technology."

Author Profile

Nathan Rivera