How AI Is Being Used to Create Child Sexual Abuse Material (CSAM)


Andy Hobdell

Partner


Generative AI technology has created a deeply concerning new threat to child safety. These tools have enabled the creation of realistic synthetic child sexual abuse material (CSAM), dramatically increasing the volume of such content, lowering the technical barriers for offenders, and overwhelming law enforcement’s capacity to detect real victims and pursue cases. 

This article explains what AI-generated CSAM is, how technology is being misused, the harm these images cause, and what UK law says about AI and CSAM.

What is AI-generated CSAM? 

Child sexual abuse material (CSAM) refers to any visual depiction of sexually explicit conduct involving a minor. AI-generated CSAM is imagery of this kind created using artificial intelligence, rather than by directly photographing or filming a child being abused.

There are three main types:

  • Wholly synthetic images: Computer-generated images that don’t depict any specific real child but realistically portray children in sexual situations.
  • Deepfakes: Images or videos where a real child’s face (often taken from social media, school photos, or family pictures) is digitally placed on sexual content. 
  • ‘Nudify’ or morphing tools: Applications that digitally remove clothing from ordinary, non-sexual photographs of children. 

Importantly, UK law already treats these synthetic images as illegal CSAM: their creation, possession, and distribution are criminal offences. It is also illegal to possess, create, or distribute tools specifically designed to generate such content, an offence punishable by up to five years in prison.

How AI tools are being used to create CSAM

Image generation models and open-source tools

Advanced image generation models can be downloaded or accessed through web interfaces. Some online communities circulate modified versions specifically designed to generate inappropriate content involving children, often shared in hidden forums or encrypted channels. 

The ability to generate images locally and offline makes detection much more difficult for platforms and police. Users employ various techniques to bypass built-in safety filters, including specific text prompts, custom-trained models, and third-party modifications.

Deepfakes and face-swaps of real children 

Offenders are combining readily available photographs of real children – taken from social media, school websites, or family albums – with deepfake technology to create synthetic abuse imagery of specific, identifiable victims. 

This isn’t rare or isolated. There have been multiple high-profile incidents in schools where students have used mobile apps to generate nude images of their classmates. These cases highlight how accessible this technology has become and how it’s being used to target real children in their own communities.

‘Nudify’ apps and low-barrier abuse

Consumer-facing ‘nudify’ or ‘undress’ apps require minimal technical knowledge and are easily found online. Many of these apps fail to implement adequate age verification or blocking mechanisms for images of children, making them a significant vector for abuse.

These tools enable both peer-to-peer harassment among young people and adult offending against children. Their ease of use and perceived anonymity encourage experimentation by individuals who might never have created such content through traditional means. They also allow abusers to process multiple images quickly, scaling the harm.

Dark-web communities and tool-sharing 

Specialised online forums facilitate the trading of modified model files, prompt instructions, tutorials, and vast archives of AI-generated abuse imagery. One report found that, in just one month, over 3,500 AI CSAM images were shared in a single forum. 

These spaces also often mix AI-generated content with conventional CSAM, creating additional challenges for investigators who must determine which images depict real victims requiring urgent intervention and which are synthetic. This complicates case triage and diverts limited resources.

Why AI-generated CSAM is harmful

A common misconception is that AI-generated CSAM is victimless because no child was directly photographed. This is categorically false – AI-generated CSAM can be harmful in several ways.

Revictimisation of existing survivors

When images of real abuse victims are manipulated, altered, or placed into new synthetic abuse scenarios, it compounds the original trauma. Survivors often describe the horror of never being able to escape their abuse, knowing their images continue to circulate and be repurposed indefinitely online.

Harm to children in deepfakes

When a child’s ordinary photograph is transformed into sexual imagery without their knowledge or consent, it causes severe psychological harm. Victims experience bullying, social isolation, anxiety, shame, and lasting mental health impacts. For many, the violation of having their image sexualised is profoundly traumatic.

Pathways to offending and normalisation

Some research suggests that synthetic CSAM can lower barriers to offending. The material may desensitise users, reinforce sexual interest in children, and normalise deviant fantasies in online communities where such content is shared and discussed.

Many organisations have also reported an explosion in the volume of CSAM – both real and synthetic – that is overwhelming investigative capacity. This flood of material makes it harder to identify children currently being abused who need immediate protection and to prioritise the most urgent cases.

What UK law says about AI and CSAM

Under UK law, specifically the Protection of Children Act 1978 and the Criminal Justice Act 1988, it is illegal to create, possess, or distribute indecent images of children – and this extends to computer-generated images. The law doesn’t require that a real child be photographed. Images that appear to depict children in sexual situations are illegal, regardless of how they were created.

The Crown Prosecution Service (CPS) has made clear that AI-generated images depicting children in sexual situations will be prosecuted with the same seriousness as photographs of real child abuse. The fact that an image is ‘synthetic’ is not a defence; offenders face conviction, imprisonment of up to five years for possession (with longer maximum sentences for creation or distribution), and mandatory registration on the sex offenders register.

How tech companies and police are responding to AI-generated CSAM

The Online Safety Act 2023 placed new obligations on online platforms to prevent the spread of CSAM, including synthetic material, with significant penalties for non-compliance.

Alongside this, technology companies and law enforcement agencies have been developing new tools to combat AI-generated CSAM. These include:

  • Detection systems specifically designed to identify synthetic imagery
  • Extended hash-matching databases that can flag known AI-generated content
  • AI classifiers that analyse images for markers of synthetic generation

However, significant gaps remain. Some AI platforms have been slow to implement safeguards or to report incidents to child protection organisations. Detection tools often struggle with false positives and must balance sensitivity with accuracy. There’s also a continuing ‘cat-and-mouse’ dynamic, where offender communities adapt quickly to new defensive measures.

Police face the additional challenge of distinguishing real victims from synthetic images when prioritising investigations, a task that becomes more difficult as AI generation grows more sophisticated.

How Lawtons can help

If you or someone you know has been accused of offences involving AI-generated CSAM or related child sexual abuse material, the consequences can be severe and life-changing. These are very serious allegations in UK criminal law, and early legal representation is essential.

At Lawtons, our experienced team understands the complexities of cases involving synthetic imagery. We provide:

  • Confidential legal advice from the earliest stage of investigation 
  • Expert representation during police interviews
  • Thorough case analysis and defence preparation
  • Guidance through the entire criminal justice process 

If you need advice or representation, don’t hesitate to contact Lawtons today for a confidential consultation.
