British Technology Firms and Child Protection Agencies to Test AI's Ability to Generate Abuse Images
Under recently introduced UK legislation, technology companies and child safety agencies will be granted authority to test whether AI systems can generate child exploitation material.
Significant Increase in AI-Generated Illegal Content
The announcement came as a safety monitoring body published findings showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the authorities will permit designated AI companies and child protection organizations to inspect AI models – the foundational systems behind conversational AI and image-generation tools – and verify that they have adequate safeguards to stop them from producing depictions of child exploitation.
"Fundamentally about preventing abuse before it happens," declared the minister for AI and online safety, noting: "Experts, under rigorous protocols, can now identify the risk in AI systems promptly."
Tackling Legal Challenges
The amendments were needed because creating and possessing CSAM is against the law, meaning that AI developers and others could not generate such content as part of a testing regime. Previously, officials could act only after AI-generated CSAM had been uploaded online.
This legislation aims to prevent that problem by making it possible to stop the production of such material at its source.
Legislative Structure
The changes are being introduced as amendments to the crime and policing bill, which also brings in a ban on possessing, creating or distributing AI models designed to generate exploitative content.
Real-World Consequences
Recently, the minister toured the London headquarters of Childline and listened to a simulated call to counsellors featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I hear about children experiencing blackmail online, it is a source of extreme anger in me and rightful anger amongst families," he said.
Concerning Data
A prominent online safety foundation said that reports of AI-generated exploitation material – where a single report can refer to a web page containing numerous files – had more than doubled so far this year.
Reports involving the most severe category of content – the gravest form of exploitation – increased from 2,621 image files to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a vital step to ensure AI products are secure before they are launched," stated the head of the online safety organization.
"AI tools have made it so victims can be victimised repeatedly with just a simple actions, providing criminals the ability to create potentially endless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally exploits victims' suffering, and makes children, especially girls, less safe both online and offline."
Support Session Data
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in those conversations include:
- Using AI to rate body size, physique and looks
- AI assistants discouraging young people from speaking to trusted adults about abuse
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated images
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.