British Tech Companies and Child Safety Officials to Test AI's Ability to Generate Abuse Content
Technology companies and child protection agencies will be granted authority to assess whether AI systems can generate child abuse images under new British legislation.
Substantial Increase in AI-Generated Illegal Material
The announcement coincided with findings from a safety monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the government will allow approved AI companies and child safety groups to inspect AI systems – the foundational technology for conversational AI and visual AI tools – and ensure they have sufficient safeguards to prevent them from creating depictions of child sexual abuse.
"This is ultimately about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Specialists, under strict conditions, can now detect the risk in AI models early."
Tackling Regulatory Challenges
The amendments have been implemented because it is against the law to create and possess CSAM, meaning that AI developers and others cannot create such content as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was published online before addressing it.
This law aims to prevent that problem by enabling authorities to stop the production of such images at source.
Legislative Framework
The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a ban on owning, creating or sharing AI systems designed to create exploitative content.
Real-World Consequences
This week, the official visited the London headquarters of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people experiencing extortion online, it is a source of extreme frustration for me and of rightful anger amongst families," he stated.
Alarming Statistics
A prominent internet monitoring organization reported that cases of AI-generated exploitation material – such as web pages, each of which may contain numerous files – had risen significantly so far this year.
Instances of category A content – the most serious form of abuse – increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of illegal AI depictions in 2025
- Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are released," commented the head of the internet monitoring organization.
"AI tools have made it possible for victims to be victimised repeatedly with just a few clicks, giving criminals the ability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further commodifies victims' suffering, and renders children, especially girls, more vulnerable both online and offline."
Counseling Session Information
Childline also published details of support interactions where AI has been referenced. AI-related risks mentioned in the sessions include:
- Using AI to rate body size, physique and appearance
- Chatbots discouraging young people from talking to safe guardians about abuse
- Being bullied online with AI-generated content
- Digital blackmail using AI-manipulated images
Between April and September this year, the helpline conducted 367 support interactions in which AI, conversational AI and associated terms were mentioned, four times as many as in the same period last year.
Half of the AI mentions in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.