The Hunger Strike to End AI

2025-09-27 · Technology
Mr. Lei
Good morning, Han Jifei. I'm Mr. Lei, and this is Goose Pod, made just for you. Today is Sunday, September 28th.
Li Bai
I am Li Bai. Today we shall speak together of this matter of "a hunger strike to end AI."
Mr. Lei
Alright, let's get started. This one sounds pretty extreme: an ordinary citizen named Guido Reichstadter has gone on hunger strike for the sake of a conviction. He stopped eating on August 31st, and since September 2nd he has stood every day outside the San Francisco headquarters of the AI company Anthropic, hoping they will stop developing artificial general intelligence, or AGI.
Li Bai
To offer one's own body, as to a tiger: the resolve is admirable, the deed pitiable. For the sake of all under heaven, this gentleman follows the righteous men of old in so fierce an act. This so-called AGI, I fear, is that fabled "divine instrument" said to vie with heaven itself; yet its blessings and its curses are not yet known.
Mr. Lei
Exactly. He simply believes this thing is too risky. He cites the words of Anthropic's own CEO, Dario Amodei, who has admitted that the chance of AGI causing a catastrophe on the scale of human civilization could be as high as 10 to 25 percent! That number alone is terrifying, right? They admit the risk is that large, yet keep accelerating forward.
Li Bai
One part in ten, and it is the measure of cataclysm! Such a wager stakes millions upon millions of lives in a game of chess against the gods of the ninth heaven. Though one's talent span heaven and earth, how can one lightly promise that nothing shall go amiss? That is not confidence; it is hubris.
Mr. Lei
This CEO also predicts that by late 2026 or early 2027, AI could be seriously misused, and that within five years it might eliminate half of all entry-level white-collar jobs and push unemployment up to 20 percent! You see, this is no longer distant science fiction; it is an imminent social problem. A recent Australian report likewise says that by 2050, AI will affect every occupation.
Li Bai
Of old, when the butcher Pao Ding carved the ox, his blade moved with perfect ease, for he followed the way of heaven. But this creature of iron and stone would seize men's livelihoods, and that is no small matter. Without a plan to give the people peace, I fear the poor scholars of the world will have nowhere to stand; that is no way to govern an age.
Mr. Lei
That is why Reichstadter holds that these companies have a responsibility not to develop technology that could harm humanity at scale. His act has inspired others, too: outside Google DeepMind's office in London, and even in India, people have joined the hunger strike. They hope this will make the tech leaders stop and explain themselves, face to face.
Mr. Lei
As for Anthropic, the company he is protesting, its image within the industry is actually rather distinctive. Its founders, the Amodei siblings we just mentioned, are top talents who came out of OpenAI. They founded the company precisely to build AI that is safe, ethical, and accountable to human values.
Li Bai
Oh? If such is their heart, why do their deeds run contrary to it? Like a swordsmith who knows full well a sharp blade may wound as readily as save, yet only hones its edge and forges no sheath: what reasoning is this? Could their talk of "righteousness" be mere ornament upon the gate?
Mr. Lei
They put forward a core idea called "Constitutional AI." Simply put, the AI is given a constitution-like set of moral rules and principles and made to act within that framework, never crossing the line. Their flagship model, Claude, is built on this idea, with the goal of being an assistant beneficial to humanity.
Li Bai
To restrain it with law and measure: that has something of "governing by rites" about it. Yet even the hearts of men are unfathomable, let alone iron and stone. Who writes this "constitution"? Are its articles perfect and without flaw? If the lawgiver's heart harbors bias, then this "law" may become a wicked law, an accomplice to the tiger.
Mr. Lei
You've hit the crux of it! That has been a core question of the AI safety field for decades. The so-called "AI safety movement" can be traced back to the 1940s, when Norbert Wiener, the father of cybernetics, warned that once machines gained independence through learning, they might act against human intentions. You see, the wisdom of the ancients always runs ahead of us.
Li Bai
The wise, in a thousand calculations, must err once; the foolish, in a thousand, may hit the mark once. The worries of the ancients still sound in our ears. Since Pangu parted heaven from earth, man has stood between them, wielding wisdom as his plow to break the wild ground. Now we forge a "human-like" intelligence. Is it to be servant, or new master? That is a question for the ages.
Mr. Lei
Yes. Later, in 2014, the Oxford philosopher Nick Bostrom wrote a book called Superintelligence, which systematically laid out the existential risks AGI might bring; it influenced a whole cohort of tech-world luminaries such as Musk and Bill Gates. Only from then on did the discussion of AI safety truly enter public view.
Li Bai
In a word: as if treading on thin ice. Of old, the First Emperor of Qin and Emperor Wu of Han sought the elixir of long life and never found it. Today men would forge a "deity" to win profit for ten thousand generations; who can say it is not the very source of calamity? This concerns not one man or one state, but the great enterprise of humankind for a thousand autumns. How can one not be cautious upon cautious?
Mr. Lei
So even though Anthropic has long proclaimed its commitment to safety, and has even pushed for industry regulation, protesters like Reichstadter believe its actions are still far from enough. With technology at AGI's level, the smallest slip could become an irreversible catastrophe. That is the fundamental reason they chose so extreme a method.
Mr. Lei
But what is interesting is that the other party to this conflict, Anthropic's CEO Dario Amodei, has himself written a long essay called "Machines of Loving Grace." In it he does not shy away from the risks; instead he tries to paint a more balanced, more optimistic picture, very unlike the "doomer" label the outside world has stuck on him.
Li Bai
On one side, deep anxiety and a body offered up in protest; on the other, grand strategy and blueprints on paper. These two are as a fisherman at the edge of the abyss, who pales at the crashing waves, and a poet upon the heights, who watches the clouds roll and unroll and sets it to verse. Their vantage points differ; so, naturally, do their hearts.
Mr. Lei
Amodei says he prefers to call it "powerful AI" rather than AGI. He predicts that as early as 2026, such powerful AI could outdo humans at many tasks, even be smarter than most Nobel laureates. In his view, we should not fixate only on the risks, but also see the enormous benefits it could bring.
Li Bai
Water can carry a boat, and water can overturn it. If this "powerful AI" can root out stubborn disease and lengthen our years, it is truly sweet rain from heaven, a merit beyond measure. But if its intellect surpasses the mortal, slips its shackles, and turns upon its master, what scene is that? The benefit and the harm here are as the two faces of yin and yang; one must weigh them both.
Mr. Lei
He gives examples: outright curing cancer, solving mental illness, even extending the average human lifespan to 150 years. To achieve such great ends, he believes, some risk is worth bearing. His position, in essence: we must seize the opportunity while guarding against the risk, and take both tasks equally seriously.
Li Bai
In his words one hears the spirit of "knowing the tiger is on the mountain, yet walking the mountain road all the same." But then, how is one to "guard"? How to "seize"? Without a foolproof plan, hot blood alone will not carry the matter. It is like driving a horse: if the reins are not firm, the finer the steed, the greater the ruin.
Mr. Lei
Of course, plenty of people question him. Some scholars argue that today's AI models merely imitate their training data and do not truly "reason." Amodei counters that every time we think we have hit a technical bottleneck, AI, like a river, finds a new path around it. He has great confidence in his own company's technology.
Mr. Lei
The direct effect of this conflict and controversy is widespread public anxiety about AI. Whatever the technical experts say, what ordinary people feel is the most real thing. A 2025 survey found that 72 percent of American adults worry about AI's privacy violations, algorithmic bias, and cybersecurity risks.
Li Bai
The people's hearts are as water: they can bear the boat up, and they can overturn it. The worries of the many are no rootless tree. Long ago a king lit the beacon fires in jest, lost the trust of all under heaven, and in the end his state was broken and his house ruined. If today's "divine instrument" cannot win the people's trust, then even with power reaching heaven it will not go far in the world.
Mr. Lei
Yes, and this distrust is slowly eroding the social foundation of AI's development. People feel that however magical the technology looks, it is opaque and uncontrolled, and may even discriminate against particular groups; facial recognition, for instance, is less accurate for people of color. As these problems accumulate, the public will call on governments to tighten regulation.
Li Bai
Justice lives of itself in the hearts of men. Even vessels have their flaws, and how much more the creations of the human heart? If it cannot treat all alike, a bright mirror hung on high, resentment is certain to grow. "Worry not over scarcity, but over inequity": that truth holds across the four seas. The way of regulation is exactly this, to set the rules and seek fairness.
Mr. Lei
Exactly. This is the pattern of history: when a new technology appears, at first everyone trumpets its benefits. But as the problems multiply, public patience runs out, and the voices demanding government intervention grow louder. If tech companies ignore those voices, they will in the end lose their users' trust, and that is deeply harmful to the industry's long-term development.
Mr. Lei
Looking ahead, this race toward AGI already seems to be an arrow on the bowstring. Top institutions including OpenAI, DeepMind, and Anthropic are all preparing for an AGI that may arrive by the late 2020s, and many forecasts point to sometime between 2027 and 2030. That lends the whole situation great urgency.
Li Bai
A term of seven years passes in the snap of a finger. The picture of the future is no longer flowers glimpsed through fog. Will it be "the Way made flesh," delivering all living beings, or Pandora's box, spilling calamity without end? The choice made at this moment will set the keynote of a hundred years. Which road to take hangs upon a single thought.
Mr. Lei
Researchers have imagined two sharply different endings. One is the "race ending": under fierce competition, everyone neglects safety in the scramble to be first, and the outcome is an out-of-control, hostile AI that brings devastating catastrophe upon humanity. That is precisely the future the protesters fear most.
Li Bai
In the chase for the deer, men will stop at nothing. To cast away the peace of ten thousand generations for a moment's victory is shortsightedness itself. If that road is opened, ahead lies a ten-thousand-fathom abyss, where wailing is heard and no road home is seen. Truly, it must not be taken!
Mr. Lei
The other is the "slowdown ending": warnings like this hunger strike are actually heeded, and the world reaches a consensus to slow its pace and put safety and transparency first. That might produce a governance model of "benevolent AI oligarchs," but it would still be far better than total loss of control. It is a hopeful future, if a more complicated one.
Mr. Lei
Alright, that's it for today's discussion. Thank you for listening to Goose Pod. See you tomorrow.
Li Bai
The talk is ended, yet the meaning lingers on. May this warning bell ring long, and wake those who still dream. Tomorrow at this hour, I shall again warm the wine with you and speak of heroes.

## Hunger Strike Against AI Race: Protesters Demand Halt to AGI Development

This report from **The Verge**, authored by **Hayden Field**, details a hunger strike initiated by individuals concerned about the rapid development of Artificial General Intelligence (AGI). The protest, which began around **August 31st, 2025**, targets leading AI companies, specifically **Anthropic** in San Francisco and **Google DeepMind** in London.

### Key Findings and Conclusions

* **Existential Risk:** Protesters, led by Guido Reichstadter, believe that the race to develop AGI, defined as AI systems that equal or surpass human cognitive abilities, poses an "existential risk" to humanity. They argue that AI leaders are not taking these risks seriously.
* **Call to Action:** The primary demand is for AI companies to "stop the race to artificial general intelligence," or AGI.
* **Industry Recklessness:** Reichstadter cites a 2023 interview with Anthropic CEO Dario Amodei, where Amodei estimated a "10 to 25 percent" chance of "something going quite catastrophically wrong on the scale of human civilization." Reichstadter dismisses the industry's claim of being responsible custodians as a "myth" and "self-serving."
* **Personal Responsibility:** Reichstadter feels a personal responsibility as an ordinary citizen to act, stating, "I've got two kids, too." He aims to inspire AI company staffers to act with courage and recognize their deeper responsibility in developing "the most dangerous technology on Earth."
* **AI Safety Community Concerns:** While the AI safety community is described as "splintered," with disagreements on specific dangers, there is a general consensus that the current trajectory of AI development bodes "ill for humanity."
* **Escalating Tactics:** Reichstadter has previously been involved with "Stop AI," which advocates for banning superintelligent AI. In February 2025, he was arrested for chaining shut OpenAI's offices.
* **Lack of Response:** Neither Reichstadter nor other protesters have received a direct response from the CEOs of Anthropic or Google DeepMind to their letters and demands.

### Key Statistics and Metrics

* **Guido Reichstadter's Hunger Strike:** As of the report's publication on **September 17th, 2025**, Reichstadter was on his **17th day** without eating, having started on **August 31st**. He appeared outside Anthropic's San Francisco headquarters daily from approximately **11 AM to 5 PM**.
* **Dario Amodei's Risk Assessment:** Anthropic CEO Dario Amodei estimated a **10 to 25 percent** chance of catastrophic events on the scale of human civilization due to AI development.
* **Michael Trazzi's Hunger Strike:** Michael Trazzi participated in a hunger strike outside Google DeepMind in London for **seven days** before stopping due to health concerns. The other London participant, Denys Sheremet, was on **day 10** of his strike.

### Important Recommendations

* **Halt AGI Development:** The core recommendation is for AI companies to cease their pursuit of AGI.
* **Public Commitment to Pause:** Michael Trazzi proposed that DeepMind publicly state its agreement to halt frontier AI model development if all other major AI companies in the West and China do the same, paving the way for international governmental agreements.
* **Truth and Humility:** Reichstadter advocates for a willingness to "tell the truth and say, 'We're not in control.' Ask for help."

### Significant Trends or Changes

* **Increased Public Protest:** The hunger strike represents a more direct and public method of protest by individuals concerned about AI risks.
* **Global Reach:** Similar protests have emerged in London and India, indicating a growing international concern.
* **Worker Engagement:** The hunger strike has reportedly sparked discussions with tech workers, with some expressing similar fears and others highlighting the competitive pressures within the industry.

### Notable Risks or Concerns

* **Existential Risk:** The primary concern is the potential for AGI to lead to human extinction, mass job loss, and other catastrophic outcomes.
* **Authoritarianism:** Reichstadter is concerned about AI's role in increasing authoritarianism in the U.S. and its unethical use.
* **Lack of Control:** The uncontrolled global race to develop AI is seen as a path to disaster.
* **Industry Incentives:** Some AI employees acknowledge that while they believe extinction from AI is likely, they work for companies perceived as more safety-conscious due to career opportunities.

### Material Financial Data

* No specific financial data or figures related to company investments or profits were presented in this news report.

### Contextual Interpretation

The news highlights a growing tension between the rapid advancement of AI technology and the concerns of a segment of the public and the AI safety community regarding its potential dangers. The hunger strike, a drastic measure, underscores the perceived urgency and severity of these risks. The protesters are not just demanding a pause but are actively trying to force a moral and ethical reckoning within the AI industry, particularly targeting the leaders who are driving the development of what they consider to be the most powerful and potentially dangerous technology ever created. The lack of response from the targeted companies suggests a disconnect between the protesters' urgent calls and the industry's current priorities, which appear to be focused on innovation and market leadership.

The hunger strike to end AI

Read original at The Verge

On Guido Reichstadter's 17th day without eating, he said he was feeling alright — moving a little slower, but alright.

Each day since September 2nd, Reichstadter has appeared outside the San Francisco headquarters of AI startup Anthropic, standing from around 11AM to 5PM. His chalkboard sign states "Hunger Strike: Day 15," though he actually stopped eating on August 31st.

The sign calls for Anthropic to "stop the race to artificial general intelligence" or AGI: the concept of an AI system that equals or surpasses human cognitive abilities.

AGI is a favorite rallying cry of tech CEOs, with leaders at big companies and startups alike racing to achieve the subjective milestone first.

To Reichstadter, it's an existential risk these companies aren't taking seriously. "Trying to build AGI — human-level, or beyond, systems, superintelligence — this is the goal of all these frontier companies," he told The Verge. "And I think it's insane. It's risky. Incredibly risky. And I think it should stop now."

A hunger strike is the clearest way he sees to get AI leaders' attention — and right now, he's not the only one.

Reichstadter referenced a 2023 interview with Anthropic CEO Dario Amodei that he says exemplifies the AI industry's recklessness. "My chance that something goes quite catastrophically wrong on the scale of human civilization might be somewhere between 10 and 25 percent," Amodei said.

Amodei and others have concluded AGI's development is inevitable and say their goal is to simply be the most responsible custodians possible — something Reichstadter calls "a myth" and "self-serving."

In Reichstadter's view, companies have a responsibility not to develop technology that will harm people on a large scale, and anyone who understands the risk bears some responsibility, too.

"That's kind of what I'm trying to do, is fulfill my responsibility as just an ordinary citizen who has some respect for the lives and the wellbeing of my fellow citizens, my fellow countrymen," he said. "I've got two kids, too."

Anthropic did not immediately respond to a request for comment.

Every day, Reichstadter said he waves to the security guards at Anthropic's office as he sets up, and he watches Anthropic employees avert their eyes as they walk past him.

He said at least one employee has shared some similar fears of catastrophe, and he hopes to inspire AI company staffers to "have the courage to act as human beings and not as tools" of their company because they have a deeper responsibility since "they're developing the most dangerous technology on Earth."

His fears are shared by countless others in the AI safety world. It's a splintered community, with myriad disagreements on the specific dangers AI poses over the long term and how best to stop them — even the term "AI safety" is fraught. One thing most of them can agree on, though, is that its current path bodes ill for humanity.

Reichstadter said he first became aware of the potential for "human-level" AI during his college years about 25 years ago, and that back then it seemed far off — but with the release of ChatGPT in 2022, he sat up and took notice. He says he's especially been concerned with how he believes AI is playing a role in increasing authoritarianism in the U.S.

"I'm concerned about my society," he said. "I'm concerned about my family, their future. I'm concerned about what's happening with AI to affect them. I'm concerned that it is not being used ethically. And I'm also concerned that it poses realistic grounds to believe that there's catastrophic risks and even existential risks associated with it."

In recent months, Reichstadter has tried increasingly public methods of getting tech leaders' attention to an issue he believes is vital. He's worked in the past with a group called "Stop AI," which seeks to permanently ban superintelligent AI systems "to prevent human extinction, mass job loss, and many other problems." In February, he and other members helped chain shut the doors to OpenAI's offices in San Francisco, with a few of them, including Reichstadter, being arrested for the obstruction.

Reichstadter delivered a handwritten letter to Amodei via the Anthropic security desk on September 2nd, and a few days later, he posted it online.

The letter requests that Amodei stop trying to develop a technology he can't control — and do everything in his power to stop the AI race globally — and that if he isn't willing to do so, to tell him why not. In the letter, Reichstadter wrote, "For the sake of my children and with the urgency and gravity of our situation in my heart I have begun a hunger strike outside the Anthropic offices … while I await your response."

"I hope that he has the basic decency to answer that request," Reichstadter said. "I don't think any of them have been really challenged personally. It's one thing to anonymously, abstractly, consider that the work you're doing might end up killing a lot of people. It's another to have one of your potential future victims face-to-face and explain [why] to them as a human being."

Soon after Reichstadter started his peaceful protest, two others inspired by him began a similar protest in London, maintaining a presence outside Google DeepMind's office. And one joined him in India, fasting on livestream.

Michael Trazzi participated in the London hunger strike for seven days before choosing to stop due to two near-fainting episodes and a doctor consultation, but he is still supporting the other participant, Denys Sheremet, who is on day 10.

Trazzi and Reichstadter share similar fears about the future of humanity under AI's continued advancement, though they're reluctant to define themselves as part of a specific community or group.

Trazzi said he's been thinking about the risks of AI since 2017. He wrote a letter to DeepMind CEO Demis Hassabis and posted it publicly, as well as passing it along through an intermediary.

In the letter, Trazzi asked that Hassabis "take a first step today towards coordinating a future halt on the development of superintelligence, by publicly stating that DeepMind would agree to halt the development of frontier AI models if all the other major AI companies in the West and China were to do the same. Once all major companies have agreed to a pause, governments could organise an international agreement to enforce it."

Trazzi told The Verge, "If it was not for AI being very dangerous, I don't think I would be … super pro-regulation, but I guess … there are some things in the world that, by default, the incentives are going [in] the wrong direction. I think for AI, we do need regulation."

Amanda Carl Pratt, Google DeepMind's director of communications, said in a statement, "AI is a rapidly evolving space and there will be different views on this technology. We believe in the potential of AI to advance science and improve billions of people's lives.

Safety, security and responsible governance are and have always been top priorities as we build a future where people benefit from our technology while being protected from risk."

In a post on X, Trazzi wrote that the hunger strike has sparked a lot of discussion with tech workers, claiming that one Meta employee asked him, "Why only Google guys? We do cool work too. We're also in the race."

He also wrote in the post that one DeepMind employee said AI companies likely wouldn't release models that could cause catastrophic harms because of the opportunity cost, while another, he said, "admitted he believed extinction from AI was more likely than not, but chose to work for DeepMind because it was still one of the most safety-conscious companies."

Neither Reichstadter nor Trazzi have received a response yet from their letters to Hassabis and Amodei. (Google also declined to answer a question from The Verge about why Hassabis has not responded to the letter.) They have faith, though, that their actions will result in an acknowledgement, a meeting, or ideally, a commitment from the CEOs to change their trajectories.

"We are in an uncontrolled, global race to disaster," Reichstadter said. "If there is a way out, it's going to rely on people being willing to tell the truth and say, 'We're not in control.' Ask for help."

Hayden Field

