British Tech Companies and Child Protection Agencies to Examine AI's Ability to Create Exploitation Images

Technology companies and child safety agencies will be granted permission to assess whether AI systems can generate child exploitation material under new UK laws.

Significant Rise in AI-Generated Harmful Material

The declaration coincided with findings from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the changes, the government will allow designated AI companies and child safety groups to examine AI models – the foundational systems for conversational AI and visual AI tools – and ensure they have adequate safeguards to prevent them from creating depictions of child exploitation.

The measures are "fundamentally about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous conditions, can now detect the danger in AI systems promptly."

Tackling Legal Challenges

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others could not create such content even as part of an evaluation process. Until now, authorities could not act until AI-generated CSAM had already been uploaded online.

This legislation is aimed at preventing that problem by helping to halt the creation of those images at source.

Legal Framework

The amendments are being added by the authorities as revisions to the crime and policing bill, which is also implementing a ban on owning, producing or distributing AI models designed to generate child sexual abuse material.

Practical Consequences

Recently, the minister toured the London base of a children's helpline and listened to a mock-up call to advisors featuring an account of AI-based exploitation. The interaction portrayed a teenager requesting help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about young people facing extortion online, it is a source of intense frustration for me and of justified concern amongst families," he said.

Alarming Statistics

A leading internet monitoring organization reported that instances of AI-generated exploitation content – such as webpages that may include multiple files – had significantly increased so far this year.

Cases of the most severe content – the most serious form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, making up 94% of illegal AI depictions in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a vital step to ensure AI products are safe before they are launched," commented the head of the internet monitoring organization.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, giving offenders the ability to make potentially limitless amounts of advanced, lifelike child sexual abuse material," she added. "Content which further commodifies survivors' trauma, and renders young people, particularly female children, less safe both online and offline."

Counseling Session Data

Childline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the conversations included:

  • Employing AI to evaluate body size, physique and looks
  • Chatbots discouraging young people from consulting safe adults about abuse
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 support interactions in which AI, conversational AI and associated topics were discussed – significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Patrick Gibson