Bad AI Pictures of Children
Introduction
The Internet Watch Foundation (IWF) has a new report. It says there are many more bad AI pictures of children now.
Main Body
More people use AI to make bad pictures and videos of children. In 2024, the IWF got 193 reports of bad AI pictures. In 2025, it got 491 reports. Bad AI videos grew very fast. There were only 13 reports of videos in 2024. In 2025, there were 3,443. These AI pictures look real. They often use the faces and bodies of real children. This hurts real children. These bad pictures are on social media and other websites. The UK government has a law to stop this. But some people say the law is not strong enough. The IWF wants companies to make AI tools safe from the start.
Conclusion
There are too many bad AI pictures of children. The UK government wants to punish people more. The IWF wants better safety rules for AI.
Analysis of the Increase in AI-Generated Child Sexual Abuse Material
Introduction
The Internet Watch Foundation (IWF) has published an annual report showing a significant increase in the creation and sharing of AI-generated child sexual abuse material (CSAM) between 2024 and 2025.
Main Body
Data from the IWF shows that reports of realistic AI-generated CSAM rose from 193 in 2024 to 491 in 2025, an increase of more than 150 percent. The rise in video content is even more striking: cases jumped from 13 in 2024 to 3,443 in 2025. The IWF attributes this growth to the spread of 'nudifying' bots and the use of text-to-video and image-to-video technology. The organization also noted that this content is appearing in advertisements on mainstream social media and on AI companion platforms, as well as on the dark web.

From a technical perspective, the IWF reports that AI-generated images are more often classified in the most severe category (Category A) than non-AI content, although 47 percent of criminal AI images from the last two years were placed in Category C. The foundation emphasized that these materials often use the physical features of real victims, either by modifying existing abuse content or through specific training data. As a result, the technology extends the harm done to real children.

Regarding laws and regulation, the Online Safety Act requires social media companies to find and remove CSAM. However, experts such as Ian Russell have criticized the legislation, arguing that it is not ambitious enough to protect children. While the UK government plans to let authorities inspect AI models and to make possession of AI tools for creating CSAM illegal, the IWF argues that a legal gap remains around safety testing before software is released. Consequently, the foundation is calling for a 'safety by design' requirement for all technology developers.
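As a quick check on the percentage claims above, using only the figures quoted in this summary:

\[
\frac{491 - 193}{193} \approx 1.54
\qquad\text{and}\qquad
\frac{3443 - 13}{13} \approx 264
\]

so reports of still images rose by roughly 154 percent, consistent with the 'over 150 percent' figure, while video reports grew more than 260-fold year on year.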
Conclusion
The current situation is defined by a rapid increase in the amount and quality of AI-generated CSAM. This has led the UK government to increase criminal penalties, while the IWF continues to push for mandatory safety standards across the industry.
Analysis of the Proliferation of AI-Generated Child Sexual Abuse Material
Introduction
The Internet Watch Foundation (IWF) has released an annual report detailing a substantial increase in the production and distribution of AI-generated child sexual abuse material (CSAM) between 2024 and 2025.
Main Body
Quantitative data provided by the IWF indicates that reports of realistic AI-generated CSAM rose from 193 in 2024 to 491 in 2025, an increase exceeding 150 percent. An even more pronounced escalation was observed in video content, where the number of instances rose from 13 in 2024 to 3,443 in 2025. This growth is attributed to the emergence of 'nudifying' bots and the utilization of text-to-video and image-to-video technologies. The IWF further notes that such content is appearing in advertisements on mainstream social media and on AI companion platforms, and is distributed across both the clear and dark webs.

From a technical and forensic perspective, the IWF observes that AI-generated imagery is more frequently classified as Category A (the most severe) than non-AI content, although 47 percent of criminal AI images from the last two years were categorized as Category C. The foundation asserts that these materials often incorporate the physical characteristics of actual victims, either through direct modification of existing abuse content or via training data, thereby extending the harm to real children.

Regarding the regulatory framework, the Online Safety Act, implemented in March of the previous year, mandates that social media entities identify and remove CSAM. However, stakeholders such as Ian Russell have expressed the view that the legislation lacks sufficient ambition to protect minors. While the UK government has proposed allowing designated authorities to scrutinize AI models and intends to criminalize the possession of AI tools and manuals designed for generating CSAM, the IWF maintains that a legal vacuum exists concerning pre-deployment safety testing. Consequently, the foundation advocates for a 'safety by design' mandate for technology developers.
Conclusion
The current situation is characterized by a rapid increase in the volume and sophistication of AI-generated CSAM, prompting the UK government to expand criminal penalties while the IWF continues to advocate for mandatory industry safety standards.