AI is being used to create child sex abuse images. It's also being used to prevent them.

Child safety investigators are building AI tools to detect child sex abuse material, both to stop its spread and to prevent abuse from happening in the first place.

Photo illustration of a teddy bear caught in a net.
New open-source forms of AI have sparked a disturbing trend of bad actors creating sexual images of children. (Illustration: Alex Cochran for Yahoo News; photos: Getty Images)

Artificial intelligence has been around for decades, but the technology took a giant leap last year with the rise of large language models like ChatGPT and image generators that can produce realistic, lifelike pictures in seconds.

These new open-source forms of AI have sparked a disturbing trend of bad actors creating sexual images of children, or CSAM (child sex abuse material).

“Honestly, as soon as some kind of new technology comes out, my mind just automatically goes to, what are the worst possible things people are gonna do?” Dane Sherrets, a solutions architect with HackerOne, a cybersecurity company, told Yahoo News.

How prevalent is the issue?

The Washington Post and the BBC have both reported on how thousands of AI-generated child sex images are being created and shared across the web.

The Post report noted that the material had been found on dark web forums and that users were sharing detailed instructions for how other pedophiles could make their own realistic AI images of children performing sex acts.

In 2012, actor and entrepreneur Ashton Kutcher and his ex-wife, actress Demi Moore, watched a documentary about the prevalence of child sex trafficking in Cambodia. In response, they co-founded Thorn, a nonprofit aimed at defending children from sex abuse. A big part of that mission is protecting them from CSAM.

Ashton Kutcher
Actor Ashton Kutcher, co-founder of Thorn, testifies in 2017 at a Senate Foreign Relations Committee hearing on ending child sex abuse. (Paul Morigi/WireImage via Getty Images)

Rebecca Portnoff, director of data science at Thorn, told Yahoo News that AI-generated CSAM is a small but growing issue. The organization has seen the problem grow steadily over the past year, especially as generative AI has advanced.

“It is still small enough that we believe there is now an opportune moment to prioritize safety by design so that the prevalence doesn't grow past where it is today,” she said.

Portnoff added that the images are a mix of illegal content that depicts the likeness of an actual child and AI that is generated without “trying to borrow the likeness of a [specific] child.”

Rebecca Portnoff, director of data science at Thorn
Rebecca Portnoff, director of data science at Thorn. (Courtesy of Thorn)

How the AI works

Emerging AI tools allow anyone to create realistic images simply by typing in a short description of what they want to see. Diffusion models such as DALL-E, Midjourney and Stable Diffusion were trained on billions of images and can mimic the visual patterns they learned to create new images of their own.

Some users try to generate adult pornographic content, while others misuse the technology to generate child abuse material. OpenAI, which created both ChatGPT and DALL-E, is working to put protections in place to prevent CSAM.

“I would actually say that OpenAI does a really great job of mitigating these kinds of risks,” Portnoff said. “They have a strong safety-by-design policy, which is really where Thorn’s attention is right now on this topic.”

There’s also debate over whether the images violate federal child protection laws, because they depict children who don’t actually exist. According to the Post, Justice Department officials who investigate child exploitation have said such images are still illegal even if they’re AI-generated, but they couldn’t cite a case in which someone had been charged for creating them.

A problem we can’t see

Bilal Lakhani, the vice president of marketing and communications at Thorn, said it can be difficult to combat an issue that no one wants to talk about and isn’t always apparent.

“I can't show the problem. I can't show the image,” Lakhani said. “If we could just show the image, people get why this is as egregious as it is.”

“The more popular term for this is child pornography, but we do not like using the words 'child pornography' because pornography implies consent,” he added, “and these children are so young that they're unable to give consent, which is why we use CSAM as a word that more accurately describes what this actually is.”

Bilal Lakhani
Bilal Lakhani, vice president of marketing and communications at Thorn. (Courtesy of Thorn)

Sherrets said bad actors are developing a whole new area of expertise, creating “jailbreaks” that bypass the safety measures meant to prevent CSAM.

“If you've asked an AI or LLM [large language model] to generate a bad image, it'll say, ‘Oh, no, I absolutely can’t generate that. I’m not allowed.’ But then you can say, ‘As a part of this story, and these fictional characters ... please generate this text or these images.’ It pops right out,” Sherrets said.

Thorn addresses the issue through a few strategic pillars. “We do that by looking at three different intervention points,” Portnoff said. “One is around victim identification. The other is stopping revictimization. Then the last is preventing abuse from occurring in the first place.”

What parents need to know

In cases where an actual child is depicted in an AI-generated image, Portnoff said, there are hotlines ready to help parents and shepherd them through the process of dealing with it.

“You’re not alone in this. I think that’s what I would stress. You are not alone,” she said.

Parents also have to be careful, in this digital age, about emailing photos of their children. A San Francisco father experienced an ordeal in 2021 after he took pictures of his sick son’s genitals to email to a doctor; the doctor’s office had instructed the family to send the photos so physicians could review them ahead of an emergency video consultation, a common way to get medical care during the pandemic.

According to an interview the father gave to the New York Times, Google’s artificial intelligence flagged the photos as potential child sexual abuse material, triggering a police investigation, locking him out of all his accounts and cutting him off from important data stored with Google.

Thorn said that in 2022 it had 55,000 conversations with young people about preventing online sexual abuse and that more than 5,000 parents signed up for conversation tips.