British Technology Companies and Child Safety Agencies to Test AI's Capability to Generate Exploitation Content
Tech firms and child protection agencies will be granted authority to evaluate whether artificial intelligence tools can produce child abuse images under new British laws.
Substantial Rise in AI-Generated Harmful Content
The declaration coincided with revelations from a protection watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, designated AI companies and child safety organisations will be permitted to inspect AI models – the underlying systems behind chatbots and image generators – and check that they have adequate safeguards to stop them from creating images of child sexual abuse.
The move is "ultimately about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous protocols, can now detect the risk in AI models early."
Tackling Regulatory Obstacles
The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and other parties could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was uploaded online before acting against it.
This legislation aims to avert that problem by allowing the production of such images to be halted at source.
Legislative Framework
The changes are being added by the government as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or distributing AI systems developed to generate child sexual abuse material.
Practical Consequences
This week, the minister toured the London base of a children's helpline and heard a simulated call to counsellors involving a report of AI-based exploitation. The call depicted a teenager requesting help after facing extortion using a sexualised image of himself generated with AI.
"When I hear about children facing extortion online, it is a source of extreme anger in me and rightful anger amongst parents," he stated.
Alarming Data
A prominent online safety organization reported that cases of AI-generated abuse material – such as webpages that may contain multiple images – had more than doubled so far this year.
Instances of the most serious category of exploitative content increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly victimized, making up 94% of illegal AI depictions in 2025
- Depictions of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "represent a crucial step to ensure AI products are secure before they are launched," stated the head of the online safety organization.
"AI tools have made it possible for survivors to be targeted all over again with just a few simple actions, giving criminals the ability to make potentially limitless quantities of sophisticated, lifelike exploitative content," she continued. "Content which further commodifies victims' trauma, and renders children, particularly girls, less safe on and offline."
Counseling Session Information
Childline also released details of support sessions where AI has been mentioned. AI-related harms mentioned in the conversations include:
- Using AI to evaluate body size, physique and appearance
- AI assistants discouraging young people from consulting trusted adults about abuse
- Facing harassment online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and associated terms were mentioned – significantly more than in the equivalent timeframe last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.