Analysis of the Proliferation of AI-Generated Child Sexual Abuse Material
Introduction
The Internet Watch Foundation (IWF) has released an annual report detailing a substantial increase in the production and distribution of AI-generated child sexual abuse material (CSAM) between 2024 and 2025.
Main Body
Quantitative data from the IWF indicate that reports of realistic AI-generated CSAM rose from 193 in 2024 to 491 in 2025, an increase of more than 150 percent. The escalation in video content was even more pronounced: instances rose from 13 in 2024 to 3,443 in 2025. The IWF attributes this growth to the emergence of "nudifying" bots and the use of text-to-video and image-to-video technologies, and notes that such content now appears in advertisements on mainstream social media and on AI companion platforms, distributed across both the clear web and the dark web.

From a technical and forensic perspective, the IWF observes that AI-generated imagery is more frequently classified as Category A (the most severe) than non-AI content, although 47 percent of criminal AI images from the last two years were categorized as Category C. The foundation asserts that these materials often incorporate the likenesses of actual victims, either through direct modification of existing abuse content or via training data, thereby extending the harm to real children.

Regarding the regulatory framework, the Online Safety Act, implemented in March of the previous year, requires social media companies to identify and remove CSAM. However, stakeholders such as Ian Russell have argued that the legislation is insufficiently ambitious to protect minors. While the UK government has proposed allowing designated authorities to scrutinize AI models and intends to criminalize the possession of AI tools and manuals designed to generate CSAM, the IWF maintains that a legal vacuum persists around pre-deployment safety testing. The foundation therefore advocates a 'safety by design' mandate for technology developers.
Conclusion
The current situation is marked by rapid growth in both the volume and the sophistication of AI-generated CSAM, prompting the UK government to expand criminal penalties while the IWF continues to press for mandatory industry safety standards.