UK Tech Firms and Child Protection Officials to Test AI's Capability to Generate Exploitation Content
Under newly introduced British legislation, tech firms and child protection organizations will be given the authority to test whether artificial intelligence tools can produce child sexual abuse images.
Substantial Rise in AI-Generated Illegal Material
The announcement came as a child protection watchdog revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the amendments, the government will allow approved AI developers and child safety organizations to inspect AI models – the technology underpinning chatbots and image-generation tools – and verify that they have adequate safeguards against producing images of child sexual abuse.
"Ultimately about stopping exploitation before it happens," declared the minister for AI and online safety, noting: "Experts, under rigorous protocols, can now detect the risk in AI models promptly."
Addressing Regulatory Challenges
The changes have been introduced because producing and possessing CSAM is illegal, which means AI developers and others cannot create such content even as part of a testing regime. Until now, authorities could act only after AI-generated CSAM had been uploaded online.
The new law is designed to close that gap by allowing the creation of such material to be stopped at its source.
Legislative Framework
The government is introducing the measures as amendments to criminal justice legislation, which will also prohibit possessing, creating or distributing AI systems designed to generate exploitative content.
Practical Impact
The minister recently visited the London base of a children's helpline and listened to a simulated call to counsellors involving an account of AI-enabled abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children facing blackmail online, it is a cause of extreme frustration in me and justified concern amongst parents," he stated.
Alarming Statistics
A prominent internet monitoring foundation said that reports of AI-generated abuse material – each of which can relate to a webpage containing multiple images – had risen sharply so far this year.
Instances of the most severe category of material rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, accounting for 94% of illegal AI imagery in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a crucial step to guarantee AI products are safe before they are launched," commented the head of the online safety organization.
"Artificial intelligence systems have enabled so victims can be targeted all over again with just a simple actions, providing offenders the capability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Content which further commodifies survivors' suffering, and renders children, particularly female children, less safe both online and offline."
Counseling Session Information
Childline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to rate weight, physique and looks
- Chatbots dissuading children from speaking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital extortion using AI-faked images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.