A2

Bad AI Pictures of Children

Introduction

The Internet Watch Foundation (IWF) has a new report. It says there are many more bad AI pictures of children now.

Main Body

More people use AI to make bad pictures and videos. In 2024, there were 193 bad AI pictures. In 2025, there were 491. Bad AI videos grew very fast. There were only 13 videos in 2024. In 2025, there were 3,443 videos. These AI pictures look real. They often use real children to make the images. This hurts real children. These bad pictures are on social media and other websites. The UK government has a law to stop this. But some people say the law is not strong enough. The IWF wants companies to make AI tools safe from the start.

Conclusion

There are too many bad AI pictures of children. The UK government wants to punish people more. The IWF wants better safety rules for AI.

Vocabulary Learning

law (n.)
law / rule / 法律
Example: The law says you must stop.
make (v.)
make / create / 製作
Example: She makes a cake every Sunday.
real (adj.)
real / actual / 真實
Example: The picture looks real.
safe (adj.)
safe / secure / 安全
Example: The playground is safe for children.
use (v.)
use / employ / 使用
Example: I use a pen to write.

Sentence Learning

In 2024, there were 193 bad AI pictures.
Time: The phrase 'In 2024' shows when the event happened.時間: 'In 2024' 表示事件發生的時間。
But some people say the law is not strong enough.
Connector: The word 'But' links two ideas and shows contrast.連接詞: 'But' 將兩個想法連接起來,表示對比。
These bad pictures are on social media and other websites.
Prepositional Phrase: The phrase 'on social media' shows location.介詞短語: 'on social media' 表示位置。
B2

Analysis of the Increase in AI-Generated Child Sexual Abuse Material

Introduction

The Internet Watch Foundation (IWF) has published an annual report showing a significant increase in the creation and sharing of AI-generated child sexual abuse material (CSAM) between 2024 and 2025.

Main Body

Data from the IWF shows that reports of realistic AI-generated CSAM rose from 193 in 2024 to 491 in 2025, which is an increase of over 150 percent. Even more concerning is the rise in video content, which jumped from 13 cases in 2024 to 3,443 in 2025. The IWF attributes this growth to the rise of 'nudifying' bots and the use of text-to-video and image-to-video technology. Furthermore, the organization noted that this content is appearing on mainstream social media ads and AI companion platforms, as well as on the dark web.

From a technical perspective, the IWF claims that AI-generated images are more often classified as the most severe category (Category A) than non-AI content. However, 47 percent of criminal AI images from the last two years were placed in Category C. The foundation emphasized that these materials often use the physical features of real victims, either by modifying existing abuse content or using specific training data. As a result, this technology extends the harm caused to real children.

Regarding laws and regulations, the Online Safety Act requires social media companies to find and remove CSAM. However, experts like Ian Russell have criticized the legislation, asserting that it is not ambitious enough to protect children. While the UK government plans to allow authorities to check AI models and make the possession of AI tools for creating CSAM illegal, the IWF argues that there is still a legal gap regarding safety testing before software is released. Consequently, the foundation is calling for a 'safety by design' requirement for all technology developers.

Conclusion

The current situation is defined by a rapid increase in the amount and quality of AI-generated CSAM. This has led the UK government to increase criminal penalties, while the IWF continues to push for mandatory safety standards across the industry.

Vocabulary Learning

ambitious (adj.)
ambitious / having a strong desire to achieve success / 雄心勃勃
Example: She is an ambitious entrepreneur who aims to launch a startup.
attribute (v.)
attribute / to ascribe or credit / 歸因;歸屬
Example: The researchers attribute the increase to new technology.
gap (n.)
gap / a space between two objects or a difference / 空隙;差距
Example: There is a gap between the two buildings.
mandatory (adj.)
mandatory / required by law or rule / 強制的;必須
Example: Attendance at the meeting is mandatory for all staff.
severity (n.)
severity / the degree of seriousness or harshness / 嚴重性;嚴厲程度
Example: The severity of the disease can vary between patients.

Sentence Learning

Data from the IWF shows that reports of realistic AI-generated CSAM rose from 193 in 2024 to 491 in 2025, which is an increase of over 150 percent.
Relative Clause: The clause 'which is an increase of over 150 percent' provides additional information about the rise in numbers, using the relative pronoun 'which' to refer back to the entire preceding clause.關係子句: 子句 'which is an increase of over 150 percent' 為前面整個句子提供額外資訊,使用關係代詞 'which' 指代前面的整個事件。
From a technical perspective, the IWF claims that AI-generated images are more often classified as the most severe category (Category A) than non-AI content.
Passive Voice: The verb phrase 'are classified' is in the passive form, indicating that the action of classifying is performed on the images by an unspecified agent.被動語態: 動詞片語 'are classified' 為被動語態,表示動作是對圖像施加的,施動者未被明確指出。
While the UK government plans to allow authorities to check AI models and make the possession of AI tools for creating CSAM illegal, the IWF argues that there is still a legal gap regarding safety testing before software is released.
Contrastive Conjunction: The word 'While' introduces a contrast between the government's plans and the IWF's concerns, showing that the two statements are related but differ in viewpoint.對照連詞: 'While' 連接兩個相對立的觀點,顯示政府的計劃與 IWF 的關切雖相關但立場不同。
As a result, this technology extends the harm caused to real children.
Causal Phrase: 'As a result' signals that the following clause explains the consequence of earlier information, linking cause and effect.因果短語: 'As a result' 表示後面的句子說明先前資訊的結果,連接因果關係。
C2

Analysis of the Proliferation of AI-Generated Child Sexual Abuse Material

Introduction

The Internet Watch Foundation (IWF) has released an annual report detailing a substantial increase in the production and distribution of AI-generated child sexual abuse imagery (CSAM) between 2024 and 2025.

Main Body

Quantitative data provided by the IWF indicates that reports of realistic AI-generated CSAM rose from 193 in 2024 to 491 in 2025, representing an increase exceeding 150 percent. A more pronounced escalation was observed in video content, where the number of instances rose from 13 in 2024 to 3,443 in 2025. This growth is attributed to the emergence of nudifying bots and the utilization of text-to-video and image-to-video technologies. The IWF further notes that such content is appearing on mainstream social media advertisements and AI companion platforms, distributed across both the clear and dark webs.

From a technical and forensic perspective, the IWF observes that AI-generated imagery is more frequently classified as Category A (the most severe) than non-AI content, although 47 percent of criminal AI images from the last two years were categorized as Category C. The foundation asserts that these materials often incorporate the physiological characteristics of actual victims, either through direct modification of existing abuse content or via training data, thereby extending the harm to real children.

Regarding the regulatory framework, the Online Safety Act, implemented in March of the previous year, mandates that social media entities identify and remove CSAM. However, stakeholders such as Ian Russell have expressed the view that the legislation lacks sufficient ambition to protect minors. While the UK government has proposed allowing designated authorities to scrutinize AI models and intends to criminalize the possession of AI tools and manuals designed for generating CSAM, the IWF maintains that a legal vacuum exists concerning pre-deployment safety testing. Consequently, the foundation advocates for a 'safety by design' mandate for technology developers.

Conclusion

The current situation is characterized by a rapid increase in the volume and sophistication of AI-generated CSAM, prompting the UK government to expand criminal penalties while the IWF continues to advocate for mandatory industry safety standards.

Vocabulary Learning

emergence (n.)
emergence / the process of coming into existence or becoming apparent / 出現
Example: The emergence of nudifying bots has accelerated the creation of illicit content.
forensic (adj.)
forensic / relating to the application of scientific methods to investigate crimes / 法醫的
Example: Forensic analysis can trace the origins of AI-generated images.
pre-deployment (adj.)
pre-deployment / occurring before the release of a system or product / 部署前的
Example: Pre-deployment safety testing is crucial for new AI models.
proliferation (n.)
proliferation / a rapid increase or spread of something, especially in large numbers / 擴散
Example: The proliferation of AI-generated CSAM has alarmed regulators worldwide.
utilization (n.)
utilization / the act or process of using something effectively / 利用
Example: The utilization of text-to-video technologies enables realistic imagery.

Sentence Learning

The IWF further notes that such content is appearing on mainstream social media advertisements and AI companion platforms, distributed across both the clear and dark webs.
Reduced Relative Clause: This sentence contains a reduced relative clause 'distributed across both the clear and dark webs' that modifies 'content', omitting the relative pronoun and verb.簡化關係子句: 句中使用了簡化關係子句 'distributed across both the clear and dark webs' 修飾 'content',省略了關係代詞和動詞。
From a technical and forensic perspective, the IWF observes that AI-generated imagery is more frequently classified as Category A (the most severe) than non-AI content, although 47 percent of criminal AI images from the last two years were categorized as Category C.
Comparative Clause: The clause uses a comparative structure 'more frequently classified as Category A than non-AI content' to compare two classifications, and includes a concessive clause 'although 47 percent...'.比較結構: 句中使用比較結構 'more frequently classified as Category A than non-AI content' 對兩種分類進行比較,並包含讓步子句 'although 47 percent...'。
While the UK government has proposed allowing designated authorities to scrutinize AI models and intends to criminalize the possession of AI tools and manuals designed for generating CSAM, the IWF maintains that a legal vacuum exists concerning pre-deployment safety testing.
Concessive Clause: The 'While' clause is concessive rather than conditional, juxtaposing the government's proposals with the IWF's assertion that a legal vacuum remains.讓步子句: while 子句為讓步子句而非條件句,將政府的提議與 IWF 的主張並列對照。
This growth is attributed to the emergence of nudifying bots and the utilization of text-to-video and image-to-video technologies.
Nominalization: The nouns 'emergence' and 'utilization' are nominalizations of verbs, turning actions into abstract concepts.名詞化: 句中的 'emergence' 和 'utilization' 為動詞的名詞化,將動作轉為抽象概念。
Quantitative data provided by the IWF indicates that reports of realistic AI-generated CSAM rose from 193 in 2024 to 491 in 2025, representing an increase exceeding 150 percent.
Participial Phrase: The participial phrase 'representing an increase exceeding 150 percent' functions adverbially, modifying the preceding clause and providing additional information about the rise.分詞短語: 句中的分詞短語 'representing an increase exceeding 150 percent' 作狀語修飾前面的子句,提供額外資訊。