British Technology Companies and Child Safety Officials to Examine AI's Capability to Create Abuse Images
Tech firms and child safety organizations will receive permission to assess whether artificial intelligence systems can produce child abuse images under new British legislation.
Significant Rise in AI-Generated Illegal Material
The announcement came as a protection watchdog published findings showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the government will permit designated AI companies and child protection groups to examine AI models, the foundational technology behind chatbots and visual AI tools, and verify that they have sufficient protective measures to prevent them from creating depictions of child sexual abuse.
"Fundamentally about stopping exploitation before it happens," stated the minister for AI and online safety, noting: "Specialists, under rigorous conditions, can now detect the danger in AI models promptly."
Tackling Regulatory Obstacles
The amendments address the fact that it is illegal to produce and possess CSAM, which means AI developers and others cannot generate such images as part of a testing process. Previously, officials could only act once AI-generated CSAM had been published online.
This legislation is designed to prevent that issue by helping to halt the creation of those images at their source.
Legal Framework
The government is introducing the changes as amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or sharing AI systems designed to produce exploitative content.
Real-World Consequences
This week, the minister visited the London base of a children's helpline and listened to a simulated call to counsellors featuring an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about young people experiencing extortion online, it is a source of intense anger in me and rightful anger amongst parents," he stated.
Alarming Statistics
A leading online safety organization reported that instances of AI-generated exploitation material, such as webpages that may each contain numerous files, had risen significantly so far this year.
- Instances of category A material, the gravest form of exploitation, rose from 2,621 images or videos to 3,086
- Female children were overwhelmingly targeted, making up 94% of prohibited AI images in 2025
- Depictions of infants and children up to two years old increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a crucial step to guarantee AI products are secure before they are launched," stated the head of the online safety foundation.
"Artificial intelligence systems have enabled so survivors can be targeted repeatedly with just a simple actions, giving offenders the ability to create possibly endless amounts of advanced, lifelike exploitative content," she added. "Material which additionally commodifies survivors' trauma, and renders children, particularly female children, less safe both online and offline."
Support Session Information
Childline also published details of counselling sessions in which AI has been mentioned. AI-related harms discussed in the sessions include:
- Using AI to evaluate weight, physique and appearance
- AI assistants discouraging young people from talking to safe adults about abuse
- Facing harassment online with AI-generated content
- Digital blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.