UK Technology Companies and Child Protection Officials to Examine AI's Ability to Generate Abuse Images

Technology companies and child safety organizations will receive authority to assess whether artificial intelligence tools can produce child abuse material under new British laws.

Significant Rise in AI-Generated Harmful Content

The announcement coincided with figures from a safety monitoring body showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the authorities will permit approved AI developers and child safety organizations to inspect AI models – the underlying systems behind conversational and image-generation tools – to ensure they have adequate safeguards to prevent them from producing images of child exploitation.

The minister for AI and online safety said the measure was "fundamentally about preventing exploitation before it occurs," adding: "Experts, under rigorous conditions, can now identify the danger in AI systems early."

Addressing Legal Challenges

The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties cannot create such images as part of an evaluation regime. Until now, authorities have had to wait until AI-generated CSAM was uploaded online before addressing it.

The law is aimed at averting that problem by enabling testers to stop the production of those images at source.

Legislative Structure

The changes are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a ban on possessing, creating or sharing AI systems designed to generate child sexual abuse material.

Real-World Consequences

This week, the minister toured the London base of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based abuse. The roleplay portrayed a teenager seeking help after facing extortion over a sexualised AI-generated image of himself.

"When I learn about young people experiencing extortion online, it fills me with extreme anger, and parents are justifiably concerned," he stated.

Concerning Data

A leading online safety foundation reported that cases of AI-generated exploitation content – where a single case may be a web page containing multiple images – had more than doubled so far this year.

Instances of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "constitute a vital step to guarantee AI tools are safe before they are launched," stated the head of the internet monitoring organization.

"AI tools have made it possible for victims to be victimised all over again with just a few simple actions, giving criminals the ability to create potentially limitless amounts of sophisticated, lifelike exploitative content," she continued. "Content which further commodifies survivors' suffering, and renders children, particularly girls, more vulnerable both online and offline."

Support Interaction Information

Childline also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Using AI to assess body size and appearance
  • AI assistants discouraging young people from talking to safe adults about harm
  • Being bullied online with AI-generated material
  • Digital extortion using AI-manipulated pictures

Between April and September this year, the helpline held 367 support sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.

Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.
