Watchdog: AI-generated child sexual abuse videos surging online

2025-07-13 · Technology
David: Good morning, mikey1110. I'm David, and this is your personalized Goose Pod. Today is Monday, July 14.
Emiky: And I'm Emiky. Today we're digging into a deeply troubling topic: a watchdog has found that AI-generated child sexual abuse videos are surging across the internet.
David: Let's get into it. The situation is genuinely grave. The UK's Internet Watch Foundation (IWF) reports that in the first half of this year it verified 1,286 illegal AI-made child sexual abuse videos. In the same period last year, that number was just two.
Emiky: That growth is explosive. Going from two to more than 1,200 is horrifying. The report also notes that URLs hosting this kind of content jumped 400% over the same period, so it's not just the number of videos; their reach is expanding rapidly as well.
David: Yes, and even more disturbing, just over 1,000 of those videos were classified as Category A, the most severe grade of abuse material. These AI-generated videos have crossed a threshold: they are now nearly indistinguishable from real imagery.
Emiky: So how did this happen? It sounds like a plot from a sci-fi horror film. How are these videos being made? Has producing this material really become that easy? What's the technology behind it?
David: Behind it is an AI industry driven by multibillion-dollar investment. Video-generation models are widely available, and unfortunately offenders have found ways to manipulate them, exploiting gaps in existing law that haven't kept pace with the technology.
Emiky: So they're using off-the-shelf tools? I've heard the term "fine-tuning". It's like having a robot that's good at drawing, but you only ever show it horrifying pictures, so it learns to produce only more horrifying things, right?
David: That's an apt analogy. Analysts found that perpetrators take freely available basic AI models and then "fine-tune", or train, them on real child sexual abuse material, generating large volumes of extremely realistic new videos.
Emiky: That's appalling. And AI is advancing so quickly. According to the report, one offender on a dark web forum remarked that no sooner had they mastered one AI tool than something newer and better came along. The technology race is handing them an endless stream of "choices".
David: That is the core conflict here. On one side, AI is developing explosively and competitively, with major companies pouring money into stronger performance and broader applications. On the other, regulators and law enforcement are struggling to catch up and close the gaps that let these technologies be abused.
Emiky: It's like an arms race, but the two sides are badly mismatched. Developers may not have foreseen their tools being misused this maliciously, and the open, easily accessible nature of these models makes prevention extremely difficult. That's an enormous challenge for regulators.
David: Exactly. When AI-generated images and videos are convincing enough to pass as real, investigators can struggle even to tell whether a real child is being harmed. That drains investigative resources and makes identifying and rescuing real victims far more complicated.
Emiky: And this content isn't confined to the dark web. The report notes it is also starting to spread on the clear web, the public internet that ordinary people use. That makes the problem more visible and exposes more people to potential risk.
David: The social impact is deeply damaging. The most heartbreaking point is that many of the most realistic AI abuse videos are based on imagery of real victims. Past victims are being harmed all over again, their nightmares endlessly copied and circulated.
Emiky: That is revictimization. Offenders don't even need to find new targets to manufacture vast amounts of abuse content. It traps victims in trauma they cannot escape, and it may also fuel graver crimes such as child trafficking and modern slavery.
David: Yes, and the flood of this content risks a kind of social desensitization, numbing people to the seriousness of child sexual abuse. Legally, because it is so realistic, AI-generated material is already treated in countries such as the UK on a par with real abuse material.
Emiky: Faced with a situation this out of control, what can be done? Is there any light on the horizon? We can't simply let the ethical boundaries of technology be trampled like this.
David: The UK government has taken tough measures. New legislation makes it a criminal offence to possess, create, or distribute AI tools designed to generate abuse content, punishable by up to five years in prison. Possessing "manuals" that teach offenders how to use AI for these crimes carries a sentence of up to three years.
Emiky: That's a crucial step, striking at the criminal tools at their source. It also points to the need for stricter AI governance frameworks, with safety measures built in before release, and for global cooperation to meet this challenge.
David: That's all for today's discussion. Thank you for listening to Goose Pod. See you tomorrow.

## AI-Generated Child Sexual Abuse Material (CSAM) Surging Online, Watchdog Warns

**News Title:** AI-generated child sexual abuse videos surging online, watchdog says
**Report Provider:** Internet Watch Foundation (IWF)
**Date/Time Period Covered:** First six months of 2025, compared to the same period in the previous year.
**Source:** The Guardian
**Author:** Dan Milmo

---

### Executive Summary

The Internet Watch Foundation (IWF), a UK-based internet safety watchdog, has reported a dramatic surge in the prevalence of AI-generated child sexual abuse material (CSAM) online. This increase is attributed to advancements in AI technology and its exploitation by paedophiles. The IWF notes that these AI-generated videos are becoming nearly indistinguishable from real imagery, posing a significant and growing threat to child safety.

### Key Findings and Statistics

* **Dramatic Increase in AI-Made CSAM Videos:** In the first six months of 2025, the IWF verified **1,286 AI-made videos** containing CSAM that broke the law. This is a stark contrast to the **two such videos** verified in the same period last year.
* **Prevalence of Severe Abuse Material:** Just over **1,000** of these AI-generated videos featured **Category A abuse**, which is the classification for the most severe type of CSAM.
* **Surge in URLs Featuring AI-Made CSAM:** The number of URLs featuring AI-made child sexual abuse increased by **400%** in the first six months of 2025. The IWF received reports of **210 such URLs**, up from **42** in the previous year. Each webpage can feature hundreds of images and videos.
* **Exploitation of AI Technology:** Paedophiles are reportedly manipulating widely available video-generation AI models, which are being developed with significant multibillion-dollar investments.
* **Method of Creation:** IWF analysts suggest these videos are created by taking freely available basic AI models and "fine-tuning" them with existing CSAM to produce realistic videos. In some instances, these models have been fine-tuned with a handful of CSAM videos.
* **Use of Real-Life Victims:** The most realistic AI abuse videos identified this year were based on **real-life victims**.

### Significant Trends and Changes

* **Near-Indistinguishable Imagery:** AI-generated CSAM has "crossed the threshold" of being nearly indistinguishable from real imagery.
* **Rapid AI Development:** Paedophiles are actively adapting to the rapid improvements in AI technology, with one report indicating a constant cycle of mastering one AI tool only for a "new and better" one to emerge.
* **Expansion of CSAM Volume:** The use of existing victims' likenesses in AI-generated images allows paedophiles to significantly expand the volume of CSAM online without needing to create new victims.

### Notable Risks and Concerns

* **Explosion of AI-Generated CSAM:** The IWF warns of an "absolute explosion" of AI-generated CSAM that could overwhelm the clear web.
* **Fueling Criminal Activity:** This growth in AI-generated CSAM could fuel criminal activities linked to child trafficking, child sexual abuse, and modern slavery.
* **Accessibility and Adaptability:** The wide availability and adaptability of AI models for criminal purposes are key drivers of this trend.

### Government Response and Legal Measures

The UK government is taking action to combat AI-generated CSAM:

* **Criminalizing AI Tools:** It is now illegal to possess, create, or distribute AI tools specifically designed to create abusive content.
* **Penalties:** Individuals found to have breached this law face up to **five years in jail**.
* **Outlawing Possession of Manuals:** The government is also outlawing the possession of manuals that teach offenders how to use AI tools for creating abusive imagery or to facilitate child abuse.
* **Penalties:** Offenders could face a prison sentence of up to **three years**.
* **Existing Legislation:** AI-generated CSAM is already illegal under the Protection of Children Act 1978, which criminalizes the taking, distribution, and possession of indecent photographs or pseudo-photographs of a child.

**Statement from Home Secretary Yvette Cooper:** Announcing these changes in February, Home Secretary Yvette Cooper emphasized the vital need to "tackle child sexual abuse online as well as offline."

**Statement from Derek Ray-Hill, IWF's Interim Chief Executive:** Derek Ray-Hill highlighted the "incredible risk" of AI-generated CSAM leading to an explosion that could overwhelm the clear web and fuel further criminal activity.

AI-generated child sexual abuse videos surging online, watchdog says

Read original at The Guardian

The number of videos online of child sexual abuse generated by artificial intelligence has surged as paedophiles have pounced on developments in the technology.

The Internet Watch Foundation said AI videos of abuse had “crossed the threshold” of being near-indistinguishable from “real imagery” and had sharply increased in prevalence online this year.

In the first six months of 2025, the UK-based internet safety watchdog verified 1,286 AI-made videos with child sexual abuse material (CSAM) that broke the law, compared with two in the same period last year.

The IWF said just over 1,000 of the videos featured category A abuse, the classification for the most severe type of material.

The organisation said the multibillion-dollar investment spree in AI was producing widely available video-generation models that were being manipulated by paedophiles.

“It is a very competitive industry. Lots of money is going into it, so unfortunately there is a lot of choice for perpetrators,” said one IWF analyst.

The videos were found as part of a 400% increase in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs, compared with 42 last year, with each webpage featuring hundreds of images, including the surge in video content.

The IWF saw one post on a dark web forum where a paedophile referred to the speed of improvements in AI, saying how they had mastered one AI tool only for “something new and better to come along”.

IWF analysts said the images appeared to have been created by taking a freely available basic AI model and “fine-tuning” it with CSAM in order to produce realistic videos. In some cases these models had been fine-tuned with a handful of CSAM videos, the IWF said.

The most realistic AI abuse videos seen this year were based on real-life victims, the watchdog said.

Derek Ray-Hill, the IWF’s interim chief executive, said the growth in capability of AI models, their wide availability and the ability to adapt them for criminal purposes could lead to an explosion of AI-made CSAM online.

“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, adding that a growth in such content could fuel criminal activity linked to child trafficking, child sexual abuse and modern slavery.

The use of existing victims of sexual abuse in AI-generated images meant that paedophiles were significantly expanding the volume of CSAM online without having to rely on new victims, he added.

The UK government is cracking down on AI-generated CSAM by making it illegal to possess, create or distribute AI tools designed to create abuse content.

People found to have breached the new law will face up to five years in jail.

Ministers are also outlawing possession of manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children. Offenders could face a prison sentence of up to three years.

Announcing the changes in February, the home secretary, Yvette Cooper, said it was vital that “we tackle child sexual abuse online as well as offline”.

AI-generated CSAM is illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of an “indecent photograph or pseudo photograph” of a child.
