AI Godfather Hinton Drops a Nuclear-Level Warning: Ordinary People Could Build a Nuclear Bomb


2025-09-10 · Technology
马老师
Good morning, Xiao Wang. I'm 马老师, and this is your personal Goose Pod. Today is September 11, a Thursday.
雷总
And I'm 雷总. Today we're discussing a "nuclear-level" topic: AI godfather Geoffrey Hinton warns that ordinary people will soon be able to use AI to build nuclear and biological weapons.
马老师
Exactly, 雷总. It sounds like a legendary martial-arts manual from a wuxia novel suddenly being mass-printed so anyone can learn from it. Hinton sees AI as that manual: it could hand ordinary people destructive power that was previously unimaginable, and that, you know, is genuinely frightening.
雷总
Right. His exact words were blunt: "A normal person assisted by AI will soon be able to build bioweapons." He also worries that AI will make the rich richer, because it will displace huge numbers of jobs and cause mass unemployment, leaving a few people wealthier and most people poorer.
马老师
I see this as the classic "double-edged sword" of technology. It can be a dragon-slaying saber or a world-saving blade. Hinton's worry is, at its core, a fear of losing control over this enormous power. Technology itself is neither good nor evil, but the combination of capital and human nature could steer it toward an "amazingly bad" future.
雷总
Completely agree. We're standing at a crossroads. Hinton even said AI has already shown it can generate "terrible ideas." This is no longer science fiction; it's a very serious question that demands discussion right now. Things are moving too fast!
马老师
This kind of worry is actually quite old. As early as 1863, the writer Samuel Butler predicted that machines would eventually dominate the world. Later, Turing said that once machine intelligence surpasses ours, we should be prepared to be "controlled by them." It's like a martial artist who must eventually confront the ultimate purpose of his art.
雷总
Yes, and the idea evolved step by step. Here's a quick timeline: in 1965, I. J. Good proposed the "intelligence explosion," arguing that an ultraintelligent machine could keep designing ever more capable machines. Then in 2014, Oxford's Nick Bostrom published *Superintelligence*, which systematically laid out the existential risks of AI.
马老师
Then the heroes of the field started speaking up one after another: Hawking, Musk, and Bill Gates all voiced their concerns. It's as if the heads of all the great martial-arts sects suddenly sensed an impending catastrophe in the martial world. I'd call it a shift from quantitative change to qualitative change.
雷总
The most critical turning point was Hinton himself. He is a founding father of deep learning! Last year he said publicly that his estimate for the arrival of artificial general intelligence had shrunk from thirty to fifty years down to "20 years or even less." And to be free to sound the alarm, he left Google, where he had worked for years. That tells you how serious the problem is.
马老师
Right. His move is a bit like a martial-arts elder "washing his hands in the golden basin": walking away not for fame or fortune, but to reveal the truth. And look, last May the Center for AI Safety published a statement saying that mitigating the risk of extinction from AI should be a global priority alongside risks like pandemics and nuclear war. That's about the highest level of alarm there is.
马老师
Of course, there are always dissenting voices in the martial world. Where there's Shaolin and Wudang, there's also the Xiaoyao sect. Another AI godfather, Meta's chief scientist Yann LeCun (杨立昆), thinks Hinton and company are worrying over nothing; he calls it "AI doomerism." You know, he considers it complete nonsense.
雷总
Exactly, and LeCun's position is very clear. He argues that today's AI, including ChatGPT, is still far from real intelligence. His analogy: we haven't even built an AI smarter than a cat, so how could it control humanity? He sees large language models as essentially next-word predictors, "puppets of language" with no common sense and no understanding of the physical world.
马老师
That's an interesting analogy, but it goes to the heart of the matter: how do we define "intelligence"? Hinton argues that if an AI can understand and answer questions, it is intelligent. LeCun puts more weight on interaction with the physical world and commonsense reasoning. To me, it's like arguing whether internal or external kung fu is stronger while forgetting that the two combined are what's truly lethal.
雷总
Yes, and LeCun even thinks superintelligence is not a threat at all; it might actually "save humanity from extinction." That's a remarkably optimistic view. So you have three Turing Award co-laureates, Hinton, Bengio, and LeCun, holding completely different positions on this core question: AI's own "three-body problem."
马老师
This disagreement translates directly into governance difficulties. It's like a country where some favor openness and others strict control. AI is a dual-use technology: it can help develop new drugs or engineer viruses, which raises national-security concerns and makes international cooperation extraordinarily complicated. Every major power treats it as a key arena of strategic competition.
雷总
Right, which is why global AI governance today is "networked and distributed." Everyone runs their own club: the EU has its AI Act, UNESCO has an ethics framework, but there is no unified central body with real enforcement power. It's a martial world without an alliance leader, every sect doing as it pleases.
马老师
The biggest challenge is that even the developers themselves don't fully understand how their models work, which makes prediction and problem-solving extremely difficult. We need a global "system of trust," but geopolitical tensions make that trust harder than ever to build. Clashes of values ultimately surface as clashes of rules.
马老师
So what comes next? I don't think we can rely on individual sects policing themselves; we need a "global martial-arts summit." Some have proposed a "global AI risk-mitigation system" that uses AI to monitor AI. Sounds cool, doesn't it? Like using one supercomputer to crack another.
雷总
A creative idea: fighting magic with magic. Future governance will have to combine technology, ethics, and public participation, striking a balance between encouraging innovation and preventing risk. This is not just a technical problem; it's a choice that society and governments must make.
马老师
That's all for today's discussion. Thanks for listening to Goose Pod. See you tomorrow.
雷总
Goodbye!

## AI Godfather Geoffrey Hinton Issues Grave Warnings About Artificial Intelligence

**News Title:** AI godfather Geoffrey Hinton fires nuclear bomb warning: A normal person in the street can
**Publisher:** The Times of India
**Author:** TOI Tech Desk
**Published Date:** September 6, 2025

### Summary of Key Findings and Concerns:

Geoffrey Hinton, a highly influential figure in the field of Artificial Intelligence (AI), has publicly shifted his stance from advocating for AI development to expressing profound concerns about its potential for harm. This change in perspective is attributed to the recent surge in public interest and adoption of AI tools like ChatGPT.

**Core Concerns and Warnings:**

* **Existential Threats:** Hinton now believes that AI poses a "grave threat to humanity." He specifically highlights the potential for AI to be misused for creating weapons of mass destruction.
* **Nuclear Bomb Creation:** Hinton stated that "the technology can help any person to create a nuclear bomb."
* **Bioweapon Creation:** He elaborated: "A normal person assisted by AI will soon be able to build bioweapons and that is terrible." He further emphasized the point by asking, "Imagine if an average person in the street could make a nuclear bomb."
* **AI's Superior Capabilities:** Hinton cautions that AI could soon surpass human capabilities, including in the realm of emotional manipulation. He suggests that AI's ability to learn from vast datasets allows it to influence human feelings and behaviors more effectively than humans can.
* **Debate on AI Intelligence:** Hinton's concerns are rooted in his belief that AI is genuinely intelligent. He argues that, by any definition, AI is intelligent and that its experience of reality is not fundamentally different from a human's. He stated, "If you talk to these things and ask them questions, it understands," and noted, "There's very little doubt in the technical community that these things will get smarter."
**Counterarguments and Disagreement:**

* **Yann LeCun's Perspective:** Hinton's former colleague and co-winner of the Turing Award, Yann LeCun, currently the chief AI scientist at Meta, disagrees with Hinton's assessment. LeCun believes that large language models are limited and lack the ability to meaningfully interact with the physical world.

**Other Noteworthy Points:**

* Hinton also discussed his personal use of AI tools, including an anecdote about a chatbot playing a role in his recent breakup.

**Overall Trend:** The news highlights a significant shift in perspective from a leading AI pioneer, moving from promoting AI to issuing stark warnings about its potential dangers, particularly concerning its misuse for creating weapons and its capacity for manipulation. This raises critical questions about the future development and regulation of AI.


Geoffrey Hinton, a leading figure in the field of artificial intelligence (AI), has sounded an alarm about the technology's potential for harm. The recent public frenzy over AI tools like ChatGPT has prompted Hinton to shift from accelerating AI development to raising deep concerns about its future. He now believes that AI poses a grave threat to humanity, saying that the technology can help any person to create a nuclear bomb.

Hinton described a chilling scenario in which AI could enable an average person to create a bioweapon. "A normal person assisted by AI will soon be able to build bioweapons and that is terrible," he said, adding, "Imagine if an average person in the street could make a nuclear bomb." Hinton also discussed a range of topics, including the nuclear-level threats posed by AI, his own use of AI tools, and even how a chatbot played a role in his recent breakup.

Recently, Hinton cautioned that AI could soon surpass human capabilities, including emotional manipulation. He suggested that AI's ability to learn from vast datasets enables it to influence human feelings and behaviours more effectively than humans can.

**Hinton debates the definition of intelligence**

Hinton's concern stems from his belief that AI is truly intelligent.

He argued that, by any definition of the term, AI is intelligent, and he used several analogies to explain that an AI's experience of reality is not so different from a human's. "It seems very obvious to me. If you talk to these things and ask them questions, it understands," Hinton explained. "There's very little doubt in the technical community that these things will get smarter," he added.

However, not everyone agrees with Hinton's view. His former colleague and co-winner of the Turing Award, Yann LeCun, who is now the chief AI scientist at Meta, believes that large language models are limited and cannot meaningfully interact with the physical world.

