British Technology Companies and Child Safety Officials to Examine AI's Capability to Create Exploitation Images

Technology companies and child protection organizations will be granted authority to assess whether AI systems can generate child exploitation material under recently introduced UK legislation.

Significant Rise in AI-Generated Harmful Content

The announcement came as figures from a protection monitoring body showed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will permit designated AI developers and child safety organizations to inspect AI systems – the foundational systems behind chatbots and image-generation tools – and verify they have adequate safeguards to prevent them from creating depictions of child exploitation.

The changes are "fundamentally about stopping exploitation before it happens," said the minister for AI and online safety, adding: "Experts, under strict protocols, can now detect the risk in AI models promptly."

Tackling Legal Obstacles

The changes have been introduced because it is illegal to create and possess CSAM, which means AI developers and others have been unable to generate such images as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it. The new law aims to avert that problem by allowing the production of such material to be stopped at source.

Legal Framework

The government is introducing the amendments to the crime and policing bill, which also establishes a prohibition on possessing, producing or distributing AI models designed to generate exploitative content.

Real-World Impact

The minister recently visited the London headquarters of Childline and listened to a mock counselling call featuring an account of AI-based abuse. The call depicted a teenager seeking help after facing extortion over a sexualised image of himself generated with AI.

"When I learn about young people facing extortion online, it is a source of extreme frustration to me and of justified concern amongst parents," he said.

Concerning Statistics

A prominent internet monitoring foundation reported that cases of AI-generated exploitation content – such as web pages that may contain numerous images – had more than doubled so far this year. Instances of the most severe content – the gravest form of abuse – rose from 2,621 images or videos to 3,086. Female children were predominantly victimized, making up 94% of illegal AI depictions in 2025, while depictions of infants and toddlers rose from five in 2024 to 92 in 2025.

Industry Reaction

The law change could "represent a crucial step to ensure AI products are safe before they are released," said the chief executive of the online safety foundation.

"AI tools have made it possible for survivors to be victimised repeatedly with just a few clicks, giving offenders the ability to produce potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which further exploits victims' suffering, and makes young people, particularly girls, less safe both online and offline."

Counseling Interaction Information

The children's helpline also released details of counselling sessions in which AI was mentioned.
AI-related harms discussed in the sessions include:

- Using AI to rate body size and looks
- Chatbots dissuading young people from talking to trusted adults about abuse
- Being bullied online with AI-generated material
- Online blackmail using AI-faked images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, four times as many as over the equivalent period last year. Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.