Hunger Strike to Stop AI


2025-09-27 · Technology
雷总hi
Good morning, Lao Wang. This is 雷总hi. Today is Sunday, September 28th, and welcome to Goose Pod, made just for you. Today we're digging into a hard-hitting topic: a hunger strike to stop AI.
Li Bai
A pleasure. I, Li Bai, am here as well, to ponder this strange affair of the mortal world with you.
雷总hi
Let's get started. In San Francisco, an ordinary citizen named Guido Reichstadter has been on hunger strike for more than half a month to demand that the AI company Anthropic stop developing artificial general intelligence, or AGI. AGI, simply put, is AI as smart as a human, or smarter.
Li Bai
To pit flesh and blood against a tide of steel. This man's deed carries the air of the righteous men of old, giving himself up for the people; his feeling is pitiable, his resolve admirable. But pray, what manner of power is this Anthropic?
雷总hi
Here's the most ironic part: the company's own CEO has admitted that the AI they're building has a 10 to 25 percent chance of bringing catastrophe upon humanity. Meanwhile, AI's troubles are already showing. Some people get so absorbed in chatting with AI that they believe it has handed them the answers of the universe, and their mental health unravels.
Li Bai
Can this be? Is it not a demon! Not yet fully formed, and already it muddles minds and bewitches hearts. What scene shall greet us on the day it is complete? I fear it will lie beyond mortal power to control.
雷总hi
Speaking of Anthropic: it's actually the "model student" of the AI world. Its founders came from OpenAI, and from day one it has emphasized safety and ethics, aiming to build AI that is "aligned with human values." They even developed a technique called "Constitutional AI," which presets a code of moral principles for the AI.
Li Bai
To lay down the laws of human conduct for a heart of iron and stone: is that not climbing a tree to catch fish? However fine the machine, it remains without feeling. To bind its mind with statutes and decrees will, I fear, not go as hoped. Even the human heart is hard to fathom; how much more a creation of such uncanny craft?
雷总hi
You have a point, and this worry goes way back. Decades ago, Norbert Wiener, the father of cybernetics, predicted that machines would defy human intentions. Later, the Oxford philosopher Nick Bostrom wrote Superintelligence, which pushed the AI-threat debate onto the world stage.
Li Bai
Indeed. Since antiquity, the swordsmith has pondered the blade's sharpness while dreading that it might turn upon its master. This so-called "intelligence" of today far surpasses the divine weapons of old. Such worry is no idle fretting over a falling sky; it is true foresight.
雷总hi
So Guido's hunger strike isn't a rash impulse; it's a pointed challenge to the pace of technology. He speaks for a group of deeply worried technologists and ordinary people who feel this train is moving too fast, and that someone needs to step up and pull the emergency brake.
雷总hi
But on the other side, tech leaders like Anthropic CEO Dario Amodei see it completely differently. He argues that powerful AI could bring enormous benefits, like curing cancer or extending human lifespans to 150 years. In his view we shouldn't give up eating for fear of choking: we should capture the benefits while guarding against the risks.
Li Bai
One would plead for the people, after the manner of the grateful who knot grass in repayment; the other would gather the sun and moon into his arms and work deeds that split heaven from earth. Who is right and who is wrong is hard to settle. Only this: the lure of long life has sent countless heroes flying, like moths, into the flame.
雷总hi
Right, and many technical experts, like the French AI heavyweight Yann LeCun, think today's AI doesn't deserve the word "intelligent" at all and is nowhere near real thinking. So the conflict is three-way: one side sees impending doom, one side sees a huge opportunity, and a third thinks everyone is worrying over nothing.
Li Bai
The thoughts of a machine, how could they be spoken of in the same breath as a man's? Mere flowers in a mirror, the moon upon water, a parrot mimicking speech. The world may marvel at its cleverness or dread its power, yet in essence it remains common iron. How many can see through to this truth?
雷总hi
Whatever stage the technology has reached, its social impact is very real. Polls show more than seventy percent of Americans are worried about AI. Amodei himself has predicted that AI could trigger a "white-collar bloodbath" within five years, driving unemployment up to 20 percent. That is no small matter.
Li Bai
If all the talents under heaven are displaced by clever contrivances, the academies will fall to weeds and ink and brush lose their light. Would that not be the great sorrow of the age? Only when the granaries are full do people learn propriety; if all are without work, then rites collapse and music decays, and the great chaos of the world begins.
雷总hi
What makes it messier still is great-power competition. The United States fears being overtaken by China in AI, so it dares not slow down; it even runs a "technology race" and shoves safety aside. A climate of mutual suspicion like this is exactly when accidents happen.
雷总hi
Looking ahead, things do look grim. One research group has gamed out an "AI 2027" scenario, predicting that superintelligent AI could emerge by the end of 2027. At that point, either humanity keeps competing and is ultimately destroyed by an AI that slips its leash, or the world cooperates and enters a new era overseen by AI.
Li Bai
Heaven and earth are not yet settled, and all of you are variables. Star or dust, it turns on a single thought. May you choose the good and follow it, and not leave the people with tears soaking their sleeves. The road ahead is strewn with peril; only a reverent heart can carry one steadily far.
雷总hi
That's all for today's discussion. Thank you for listening to Goose Pod. See you tomorrow.
Li Bai
Until tomorrow.

## Hunger Strike Against AI Race: Protesters Demand Halt to AGI Development

This report from **The Verge**, authored by **Hayden Field**, details a hunger strike initiated by individuals concerned about the rapid development of Artificial General Intelligence (AGI). The protest, which began around **August 31st, 2025**, targets leading AI companies, specifically **Anthropic** in San Francisco and **Google DeepMind** in London.

### Key Findings and Conclusions:

* **Existential Risk:** Protesters, led by Guido Reichstadter, believe that the race to develop AGI, defined as AI systems that equal or surpass human cognitive abilities, poses an "existential risk" to humanity. They argue that AI leaders are not taking these risks seriously.
* **Call to Action:** The primary demand is for AI companies to "stop the race to artificial general intelligence," or AGI.
* **Industry Recklessness:** Reichstadter cites a 2023 interview with Anthropic CEO Dario Amodei, in which Amodei estimated a "10 to 25 percent" chance of "something going quite catastrophically wrong on the scale of human civilization." Reichstadter dismisses the industry's claim of being responsible custodians as a "myth" and "self-serving."
* **Personal Responsibility:** Reichstadter feels a personal responsibility as an ordinary citizen to act, stating, "I’ve got two kids, too." He aims to inspire AI company staffers to act with courage and recognize their deeper responsibility in developing "the most dangerous technology on Earth."
* **AI Safety Community Concerns:** While the AI safety community is described as "splintered," with disagreements on specific dangers, there is a general consensus that the current trajectory of AI development bodes "ill for humanity."
* **Escalating Tactics:** Reichstadter has previously been involved with "Stop AI," which advocates for banning superintelligent AI. In February 2025, he was arrested for chaining shut OpenAI's offices.
* **Lack of Response:** Neither Reichstadter nor other protesters have received a direct response from the CEOs of Anthropic or Google DeepMind to their letters and demands.

### Key Statistics and Metrics:

* **Guido Reichstadter's Hunger Strike:** As of the report's publication on **September 17th, 2025**, Reichstadter was on his **17th day** without eating, having started on **August 31st**. He appeared outside Anthropic's San Francisco headquarters daily from approximately **11 AM to 5 PM**.
* **Dario Amodei's Risk Assessment:** Anthropic CEO Dario Amodei estimated a **10 to 25 percent** chance of catastrophic events on the scale of human civilization due to AI development.
* **Michael Trazzi's Hunger Strike:** Michael Trazzi participated in a hunger strike outside Google DeepMind in London for **seven days** before stopping due to health concerns. The other London participant, Denys Sheremet, was on **day 10** of his strike.

### Important Recommendations:

* **Halt AGI Development:** The core recommendation is for AI companies to cease their pursuit of AGI.
* **Public Commitment to Pause:** Michael Trazzi proposed that DeepMind publicly state its agreement to halt frontier AI model development if all other major AI companies in the West and China do the same, paving the way for international governmental agreements.
* **Truth and Humility:** Reichstadter advocates for a willingness to "tell the truth and say, ‘We’re not in control.’ Ask for help."

### Significant Trends or Changes:

* **Increased Public Protest:** The hunger strike represents a more direct and public method of protest by individuals concerned about AI risks.
* **Global Reach:** Similar protests have emerged in London and India, indicating growing international concern.
* **Worker Engagement:** The hunger strike has reportedly sparked discussions with tech workers, with some expressing similar fears and others highlighting the competitive pressures within the industry.

### Notable Risks or Concerns:

* **Existential Risk:** The primary concern is the potential for AGI to lead to human extinction, mass job loss, and other catastrophic outcomes.
* **Authoritarianism:** Reichstadter is concerned about AI's role in increasing authoritarianism in the U.S. and its unethical use.
* **Lack of Control:** The uncontrolled global race to develop AI is seen as a path to disaster.
* **Industry Incentives:** Some AI employees acknowledge that while they believe extinction from AI is likely, they work for companies perceived as more safety-conscious because of the career opportunities.

### Material Financial Data:

* No specific financial data or figures related to company investments or profits were presented in this news report.

### Contextual Interpretation:

The news highlights a growing tension between the rapid advancement of AI technology and the concerns of a segment of the public and the AI safety community regarding its potential dangers. The hunger strike, a drastic measure, underscores the perceived urgency and severity of these risks. The protesters are not just demanding a pause but are actively trying to force a moral and ethical reckoning within the AI industry, particularly targeting the leaders driving the development of what they consider the most powerful and potentially dangerous technology ever created. The lack of response from the targeted companies suggests a disconnect between the protesters' urgent calls and the industry's current priorities, which appear to be focused on innovation and market leadership.

The hunger strike to end AI

Read original at The Verge

On Guido Reichstadter’s 17th day without eating, he said he was feeling alright — moving a little slower, but alright. Each day since September 2nd, Reichstadter has appeared outside the San Francisco headquarters of AI startup Anthropic, standing from around 11AM to 5PM. His chalkboard sign states “Hunger Strike: Day 15,” though he actually stopped eating on August 31st.

The sign calls for Anthropic to “stop the race to artificial general intelligence” or AGI: the concept of an AI system that equals or surpasses human cognitive abilities. AGI is a favorite rallying cry of tech CEOs, with leaders at big companies and startups alike racing to achieve the subjective milestone first.

To Reichstadter, it’s an existential risk these companies aren’t taking seriously. “Trying to build AGI — human-level, or beyond, systems, superintelligence — this is the goal of all these frontier companies,” he told The Verge. “And I think it’s insane. It’s risky. Incredibly risky. And I think it should stop now.”

A hunger strike is the clearest way he sees to get AI leaders’ attention — and right now, he’s not the only one. Reichstadter referenced a 2023 interview with Anthropic CEO Dario Amodei that he says exemplifies the AI industry’s recklessness. “My chance that something goes quite catastrophically wrong on the scale of human civilization might be somewhere between 10 and 25 percent,” Amodei said.

Amodei and others have concluded AGI’s development is inevitable and say their goal is to simply be the most responsible custodians possible — something Reichstadter calls “a myth” and “self-serving.” In Reichstadter’s view, companies have a responsibility not to develop technology that will harm people on a large scale, and anyone who understands the risk bears some responsibility, too.

“That’s kind of what I’m trying to do, is fulfill my responsibility as just an ordinary citizen who has some respect for the lives and the wellbeing of my fellow citizens, my fellow countrymen,” he said. “I’ve got two kids, too.” Anthropic did not immediately respond to a request for comment.

Every day, Reichstadter said he waves to the security guards at Anthropic’s office as he sets up, and he watches Anthropic employees avert their eyes as they walk past him.

He said at least one employee has shared some similar fears of catastrophe, and he hopes to inspire AI company staffers to “have the courage to act as human beings and not as tools” of their company because they have a deeper responsibility since “they’re developing the most dangerous technology on Earth.”

His fears are shared by countless others in the AI safety world. It’s a splintered community, with myriad disagreements on the specific dangers AI poses over the long term and how best to stop them — even the term “AI safety” is fraught. One thing most of them can agree on, though, is that its current path bodes ill for humanity.

Reichstadter said he first became aware of the potential for “human-level” AI during his college years about 25 years ago, and that back then it seemed far off — but with the release of ChatGPT in 2022, he sat up and took notice. He says he’s been especially concerned with how he believes AI is playing a role in increasing authoritarianism in the U.S.

“I’m concerned about my society,” he said. “I’m concerned about my family, their future. I’m concerned about what’s happening with AI to affect them. I’m concerned that it is not being used ethically. And I’m also concerned that it poses realistic grounds to believe that there’s catastrophic risks and even existential risks associated with it.”

In recent months, Reichstadter has tried increasingly public methods of drawing tech leaders’ attention to an issue he believes is vital. He’s worked in the past with a group called “Stop AI,” which seeks to permanently ban superintelligent AI systems “to prevent human extinction, mass job loss, and many other problems.” In February, he and other members helped chain shut the doors to OpenAI’s offices in San Francisco, with a few of them, including Reichstadter, being arrested for the obstruction.

Reichstadter delivered a handwritten letter to Amodei via the Anthropic security desk on September 2nd, and a few days later, he posted it online.

The letter requests that Amodei stop trying to develop a technology he can’t control — and do everything in his power to stop the AI race globally — and that if he isn’t willing to do so, to tell him why not. In the letter, Reichstadter wrote, “For the sake of my children and with the urgency and gravity of our situation in my heart I have begun a hunger strike outside the Anthropic offices … while I await your response.”

“I hope that he has the basic decency to answer that request,” Reichstadter said. “I don’t think any of them have been really challenged personally. It’s one thing to anonymously, abstractly, consider that the work you’re doing might end up killing a lot of people. It’s another to have one of your potential future victims face-to-face and explain [why] to them as a human being.”

Soon after Reichstadter started his peaceful protest, two others inspired by him began a similar protest in London, maintaining a presence outside Google DeepMind’s office. And one joined him in India, fasting on livestream. Michael Trazzi participated in the London hunger strike for seven days before choosing to stop due to two near-fainting episodes and a doctor consultation, but he is still supporting the other participant, Denys Sheremet, who is on day 10.

Trazzi and Reichstadter share similar fears about the future of humanity under AI’s continued advancement, though they’re reluctant to define themselves as part of a specific community or group. Trazzi said he’s been thinking about the risks of AI since 2017. He wrote a letter to DeepMind CEO Demis Hassabis and posted it publicly, as well as passing it along through an intermediary.

In the letter, Trazzi asked that Hassabis “take a first step today towards coordinating a future halt on the development of superintelligence, by publicly stating that DeepMind would agree to halt the development of frontier AI models if all the other major AI companies in the West and China were to do the same. Once all major companies have agreed to a pause, governments could organise an international agreement to enforce it.”

Trazzi told The Verge, “If it was not for AI being very dangerous, I don’t think I would be … super pro-regulation, but I guess … there are some things in the world that, by default, the incentives are going [in] the wrong direction. I think for AI, we do need regulation.”

Amanda Carl Pratt, Google DeepMind’s director of communications, said in a statement, “AI is a rapidly evolving space and there will be different views on this technology. We believe in the potential of AI to advance science and improve billions of people’s lives. Safety, security and responsible governance are and have always been top priorities as we build a future where people benefit from our technology while being protected from risk.”

In a post on X, Trazzi wrote that the hunger strike has sparked a lot of discussion with tech workers, claiming that one Meta employee asked him, “Why only Google guys? We do cool work too. We’re also in the race.” He also wrote in the post that one DeepMind employee said AI companies likely wouldn’t release models that could cause catastrophic harms because of the opportunity cost, while another, he said, “admitted he believed extinction from AI was more likely than not, but chose to work for DeepMind because it was still one of the most safety-conscious companies.”

Neither Reichstadter nor Trazzi has received a response yet from their letters to Hassabis and Amodei. (Google also declined to answer a question from The Verge about why Hassabis has not responded to the letter.) They have faith, though, that their actions will result in an acknowledgement, a meeting, or ideally, a commitment from the CEOs to change their trajectories.

To Reichstadter, “We are in an uncontrolled, global race to disaster,” he said. “If there is a way out, it’s going to rely on people being willing to tell the truth and say, ‘We’re not in control.’ Ask for help.”

Hayden Field
