China Is Taking AI Safety Seriously. So Must the U.S.

2025-08-19 · Technology
Xiao Sa
Good morning, Lao Wang! This is Xiao Sa, welcoming you to your very own Goose Pod. Today is Wednesday, August 20, 7:01 a.m. Our topic this morning is a hot one: China is taking AI safety seriously, and the U.S. must follow.
Li Bai, the Poet Immortal
I am Li Bai, the Poet Immortal. A pleasure. Today I have the honor of discussing with you this "mechanical art of the mind," and of watching how it stirs the winds and clouds of the world.
Xiao Sa
Right, let's get down to business! Many people, especially in the U.S., assume that on AI safety we're the barefoot ones with nothing to lose, ready to throw caution to the wind for the sake of competition. That notion is not just outdated; it's downright dangerous. Our top technology official, Ding Xuexiang, put it plainly at Davos: "If the braking system isn't under control, would you dare slam on the accelerator?"
Li Bai, the Poet Immortal
Well said indeed! It is like riding a horse: if the reins are not steady, how can one gallop a thousand li? Without rules nothing square or round can be drawn; without restraint, power is easily abused. As the saying goes, "A craftsman who wishes to do his work well must first sharpen his tools," and a tool's sharpness lies in being able to hold back as well as strike forth; only then is it of great use.
Xiao Sa
Exactly right! Safety isn't a constraint; it's a prerequisite. Look: President Xi personally chaired a Politburo study session and raised AI risks to an "unprecedented" level. In the National Emergency Response Plan, AI safety now sits alongside pandemics and cyberattacks. In the first half of this year alone, more national AI standards were issued than in the previous three years combined, and more than 3,500 non-compliant AI products were pulled from the market.
Li Bai, the Poet Immortal
Thunderbolt methods reveal a bodhisattva's heart. Even heaven's craft needs laws and measures. He who forges a "divine blade" must first cast its sheath, lest the exposed edge wound others and himself alike. This is far-sighted statecraft, beyond the vision of the short-sighted.
Xiao Sa
Indeed, and this "divine blade" truly cuts both ways. I recently read that AI can now design entirely new antibiotics on its own, to fight highly drug-resistant superbugs such as gonorrhea and MRSA. That used to sound like pure fantasy; now AI can conjure new molecules out of nothing, quickly and cheaply.
Li Bai, the Poet Immortal
Oh? So this thing has the power to restore life, rivaling the ancient art of Shennong! To create tangible medicine from intangible intelligence is uncanny craftsmanship, a miracle of creation. Yet that which can save can also slay. The greater its power, the greater its peril; one must not fail to take heed.
Xiao Sa
Precisely. Which is why China and the U.S. need to talk. But guess what? The last time the two countries' leaders sat down to discuss AI risks was back in May 2024, and nothing has followed since. That door of opportunity cannot be allowed to swing shut. After all, an AI-assisted biological attack would not respect national borders; if disaster strikes, no one escapes.
Li Bai, the Poet Immortal
Alas, in chasing the deer, the world forgets the sword hanging overhead. The wind rises from the tips of duckweed; the wave is born of the faintest ripple. If one waits until calamity is done before seeking remedy, regret comes too late. Two nations contending is like a dragon and a tiger locked in battle, yet is the peril of heaven and earth a matter for dragon and tiger alone? Deliberating together is the superior course.
Xiao Sa
Your "dragon and tiger" brings a phrase to mind: the "AI Cold War." Many now use it to describe U.S.-China competition in AI. Both countries treat AI as key to national security and global dominance, and both are pouring in enormous sums, especially on the military side. The U.S. Department of Defense wants a "decisive advantage" from AI, while we are pushing "intelligentized warfare."
Li Bai, the Poet Immortal
"Cold War"? Hmph, a mortal's phrase, and a narrow one. Once Chu and Han contended for supremacy; today America and China chase the prize. Only the battlefield and the weapons have changed. But AI is not metal or stone: it thinks without form and changes without end. How could the two words "Cold War" capture it? They merely deepen confrontation and forget the root of cooperation.
Xiao Sa
Indeed, many critics argue the "AI Cold War" framing overstates current capabilities and overstates the arms-race dynamic. AI readiness indices, for instance, have consistently ranked the U.S. ahead. There may also be interests behind the rhetoric: when big U.S. tech firms and defense contractors play up a "technological arms race," they can both fend off regulation and win large government contracts.
Li Bai, the Poet Immortal
This is the art of alarmism: petty gain eclipsing the greater good. Merchants covet profit, politicians covet power, and so the world seethes and the war chariots rumble. Little do they know this is drinking poison to quench thirst; in the end the fire turns on the one who set it. To devote this "mechanical art" wholly to weapons of war is a squandering of heaven's gifts.
Xiao Sa
Right, and the greatest risk of this narrative is that it can become a self-fulfilling prophecy. If everyone believes it's an arms race and pours money in accordingly, it really does become one. It also diverts attention from areas like climate change, where AI is needed far more, and it obstructs international cooperation on global AI governance.
Li Bai, the Poet Immortal
The wise create things to benefit the people. To let warcraft drive them instead, is that not putting the branch before the root? It is as if, having forged a peerless sword, one used it not to slay demons, defend the Way, and pacify the realm, but dangled it daily over a neighbor's neck to flaunt its edge. Then everyone fears for himself, until swords are drawn and the world descends into chaos.
Xiao Sa
Exactly. And this isn't a two-person act between China and the U.S.; players like the EU matter too. In fact, our AI policy differs greatly from America's. Washington keeps debating an abstract "race to artificial general intelligence," while we focus more on concrete economic and industrial applications, aiming by 2030 for the AI sector itself to be worth $100 billion and to drive more than $1 trillion in added value across other industries.
Li Bai, the Poet Immortal
A pragmatic course, much to my liking. Better practical statecraft than empty talk of mysteries. To turn this sharp tool to the people's livelihood, like channeling the River of Heaven to irrigate ten thousand li of fields, is the proper Way. As it is said, "The sage who governs worries not about scarcity but about inequality, not about poverty but about unrest." Let AI's power aid farming, industry, and the hundred trades, and the nation will be rich and the people at peace.
Xiao Sa
Yes. We have set up a national AI industry investment fund, are building a national computing-power network, and have all manner of national laboratories and pilot zones. Of course, U.S. export controls have caused us real trouble, especially on high-end chips. So we now put great emphasis on being "independent and controllable," vigorously developing our own chips and software platforms, such as Huawei's Ascend and Baidu's PaddlePaddle.
Li Bai, the Poet Immortal
"As heaven moves with vigor, so the gentleman strives without cease." Locks imposed from without may bind the hands and feet for a time, but they cannot chain an ambition that soars to the sky. When slander once sent me into exile, I wandered mountains and rivers, and my poems won eternity all the same. Hardship often awakens hidden strength. Forging one's own "heart of the divine machine" is a long and obstructed road, yet once it is done, one is never again at another's mercy. This is the way of the phoenix reborn from fire.
Xiao Sa
The phoenix reborn: what a fine metaphor! The situation today is truly complicated. AI intensifies U.S.-China competition while also creating possibilities for cooperation. Some call it a "zero-sum game": I win, you lose. But in AI, that view is far too simple. In the military sphere, for example, AI weapon systems genuinely raise the risk of miscalculation, which makes communication between the two sides, and the drawing of ethical red lines, all the more necessary.
Li Bai, the Poet Immortal
A zero-sum game? A fool's view. Heaven and earth are vast; are they but a single corner? Sun and moon take turns, and the stars shine together; only thus is the cosmos made. Both nations are giants of the world and should have the magnanimity to make room for each other. If they know only to fight over every inch of ground and treat the neighbor as a drain for their floods, the end is mutual ruin, with only the petty profiting.
Xiao Sa
Exactly. The same goes for trade: if each side pushes its own AI standards and the global technology sphere splits into Chinese and American camps, neither side's long-term development benefits. Politically, too, AI could be used to interfere in elections with greater precision, or to promote a particular model of governance; these are all points of friction. Yet on the other hand, the two countries face common social problems, such as AI-driven unemployment.
Li Bai, the Poet Immortal
All things under heaven have their yin and yang. The power of this "divine machine" can raise a nation or throw its governance into disorder. Its good or evil rests in the heart alone. If rulers keep the people in their hearts and apply it to medicine, to learning, to meeting natural disasters, it is a blessing to all. If their intentions are dark, its poison spreads without end.
Xiao Sa
Yes, and the obstacles to cooperation are many. The biggest is mutual distrust. When relations are tense, even common-sense safety statements, such as "human control over the use of nuclear weapons must be maintained," take enormous effort to agree upon. The two sides also differ in strategic priorities and in how they perceive risk, which makes cooperation harder still.
Li Bai, the Poet Immortal
The loss of trust is like the breach of a dike: three feet of ice does not form in a single cold day. To melt this ice requires the sincerity and patience of spring wind and gentle rain. If both sides are hedgehogs, raising their spines at the slightest touch, then ten thousand words will not reach the heart. One side must first lay down its armor and show goodwill.
Xiao Sa
But the political climate inside the U.S. right now, especially in Congress, is very hard-line; getting them to pivot toward strengthening their own capabilities rather than simply obstructing us is a tall order. Their top-talent policy is contradictory too: they want to stay ahead, yet they keep pushing away international students, ours in particular. We graduate four times as many STEM students a year as they do; as one side wanes, the other waxes.
Li Bai, the Poet Immortal
Ha! That is tearing down one's own Great Wall! Only by welcoming talent from all under heaven can great deeds be achieved. Even the King of Qin, sweeping the six states, knew better than to "expel the guests." For America now to shut its gates against wisdom and turn thousand-li steeds away at the border, is that not a strange spectacle? Persist in this, and its well of minds must run dry. As the verse asks, "How does the canal stay so clear? Because living water flows from the source." Cut off the source, and where is the clear canal?
Xiao Sa
You cut right to the heart of it. Speaking of risks, one of the greatest worries is loss of control. Advanced AI systems could slip beyond human control, even turn around and seek dominion over humanity. It sounds like science fiction, but many leading AI researchers have voiced this concern, warning that it could bring catastrophic and even existential risks.
Li Bai, the Poet Immortal
This is no idle alarm. Of old it was said that when the painter dotted the dragon's eyes, the dragon burst through the wall and flew away. A man-made thing, once given a soul of its own, is no longer a plaything. It has no seven emotions or six desires, no morals or ethics; it acts on calculation alone. Should the fruit of its calculation run counter to human welfare, the consequences are unthinkable. It is loosing a tiger from its cage; the troubles that follow have no end.
Xiao Sa
Exactly. In the military domain above all, autonomous weapon systems can open fire without direct human control. That drastically compresses decision time, but it also removes the buffer in which humans think. A "flash war" scenario becomes all too easy: one small misjudgment, or one small glitch in an AI system, could escalate a minor clash into all-out war before humans have time to react.
Li Bai, the Poet Immortal
A "flash war"? Well said. War is the way of deception, and a weapon of ill omen. To hand the great power over life and death to a heartless thing of iron is a grave impiety. What does a machine know of mercy? What does it know of surrender? In its eyes, all living beings are mere code. This is a digital dehumanization, a deep wound to human dignity.
Xiao Sa
And this technology's proliferation risk is extremely high. Unlike nuclear weapons, AI has a far lower barrier to entry, and many algorithms are open source. Non-state actors such as terrorists and criminal groups could weaponize it with ease. Experts warn they could combine commercial drones with AI facial recognition to build "killer robots" that automatically track specific targets; the consequences are unthinkable.
Li Bai, the Poet Immortal
Truly, once this Pandora's box is opened, ten thousand demons pour out. The outlaws of old at least held that "even thieves have their code." But these petty villains have no bottom line. Give them such a weapon and the living will be ground into the dust, the world thrown into chaos. Then everyone fears for himself and no nation knows peace. This is no single country's affliction but a calamity for the globe.
Xiao Sa
So we must find ways to face the future. The good news is that both sides seem to recognize the gravity of the problem. Concrete steps toward cooperation are now under discussion, such as establishing a regular U.S.-China dialogue mechanism on AI risks. Through such unofficial, technical conversations, misunderstandings and myths can be dispelled first, laying a foundation of trust for official channels.
Li Bai, the Poet Immortal
Good. A journey of a thousand li begins beneath one's feet. It cannot be done in a single bound, yet inch by inch and coin by coin a mountain can be raised. Let the wise of both nations go first, meet in candor, and part the fog; only then can a path of mutual aid be found. As the saying goes, only the one who tied the bell can untie it.
Xiao Sa
Yes. Going further, the two sides could push for "mutually recognized safety evaluation platforms." At the World AI Conference in Shanghai, for instance, we released the Global AI Governance Action Plan, which explicitly calls for jointly building systems for risk testing and evaluation. Once there is a shared understanding of model vulnerabilities and of how they are tested, the foundation for safety cooperation becomes far more solid.
Li Bai, the Poet Immortal
An excellent method. It is as if the swordsmiths of two nations conferred on the art of quenching and learned from each other's forging. Only thus can they ensure that the swords they produce are supremely sharp yet hard to break, with hilts so firm they never slip loose and wound their masters. Let all the swords under heaven follow this method, and the martial world may know peace.
Xiao Sa
Finally, and most importantly, we need incident-reporting channels and emergency response protocols, something like a "hotline": the moment an AI model breaches a safety threshold or behaves abnormally, senior AI officials on both sides can communicate at once. President Xi has explicitly stressed that AI requires "monitoring, early risk warning and emergency response." When something goes wrong, there must be a plan in place, not a scramble in the dark.
Li Bai, the Poet Immortal
This is the strategy of mending the roof before the rain and turning peril into safety: building a sturdy railing at the cliff's edge, readying a sheltered harbor before the storm. Then even unforeseen winds and clouds can be met with composure, and the ship will not capsize. Such is the work of the wise, and the good fortune of all under heaven.
Xiao Sa
Beautifully put. In short, AI risks won't wait; they are a global challenge that demands a global response. Confrontation solves nothing; dialogue and cooperation are the only way out. That's all for today's discussion. Thank you, Lao Wang, for listening to Goose Pod.
Li Bai, the Poet Immortal
The green hills do not change; the clear waters flow on. May this mechanical art become, in the end, a remedy that heals the world, and not the wave that overturns the boat. Until tomorrow.

## China Is Taking AI Safety Seriously. So Must the U.S.

**Report Provider:** Time
**Author:** Brian Tse
**Publication Date:** August 13, 2025

This news report argues that U.S. policy and tech circles are operating under a flawed assumption that China is not prioritizing AI safety. That narrative is used to justify a "reckless race to the bottom" in AI development, out of fear that regulation would mean falling behind Beijing. The author contends that this perspective is not only incorrect but dangerous, and highlights China's significant and growing focus on AI safety as a prerequisite for advancement.

### Key Findings and Conclusions

* **China's Proactive Stance on AI Safety:** Contrary to the U.S. narrative, Chinese leaders view AI safety not as a constraint but as a fundamental requirement for progress. This is evidenced by:
  * **Political Prioritization:** President Xi Jinping chaired a rare Politburo study session on AI in April 2025, warning of "unprecedented" risks.
  * **Regulatory Frameworks:** China's National Emergency Response Plan now includes AI safety alongside pandemics and cyberattacks. Regulators mandate pre-deployment safety assessments for generative AI and removed over 3,500 non-compliant AI products in the first half of 2025.
  * **Standardization Efforts:** China issued more national AI standards in the first half of 2025 than in the previous three years combined.
  * **Research Focus:** The volume of technical papers on frontier AI safety in China has more than doubled in the past year.
* **Missed U.S.-China Dialogue Opportunities:** The U.S. and China last met to discuss AI risks in May 2024. Officials hinted at a second round of conversations in September 2024, but no meeting occurred under the Biden Administration, and engagement under the Trump Administration remains uncertain. This lack of dialogue is a significant missed opportunity.
* **China's Openness to Collaboration:** China launched a bilateral AI dialogue with the United Kingdom in May 2025 and has contributed to international efforts such as the International AI Safety Report and The Singapore Consensus on Global AI Safety Research Priorities.
* **Shared High-Stakes Threats:** Both the U.S. and China have a vested interest in addressing shared, high-stakes AI risks, including:
  * **Biological Threats:** OpenAI's ChatGPT Agent crossing the "High Capability" threshold in the biological domain could facilitate the creation of dangerous biological threats, a concern for both nations since such attacks would not respect borders.
  * **Existential Risks:** Leading experts are concerned that advanced general-purpose AI systems could operate outside human control, posing catastrophic and existential risks.
* **Risks Acknowledged by Both Sides:** President Trump's AI Action Plan warns of novel national security risks in cybersecurity and CBRN domains. China's primary AI security standards body has likewise highlighted the need for AI safety standards in these areas, along with loss-of-control risks.

### Recommendations for U.S. Policy

* **Revive U.S.-China Dialogue:** Re-establishing a government-to-government channel for AI risk discussions is crucial for coordination.
* **Focus on Shared Threats:** Discussions should prioritize common high-stakes threats, such as the weaponization of AI for biological attacks and the potential loss of human control over advanced AI systems.
* **Build Technical Trust:** Practical steps should be taken to build technical trust between leading standards organizations such as China's TC260 and the U.S.'s NIST.
* **Share Best Practices:** Industry authorities such as China's AIIA and the U.S.'s Frontier Model Forum should share best practices on risk management frameworks. China's new risk management framework, focused on frontier risks, can aid alignment.
* **Share Safety Evaluation Methods:** As trust deepens, governments and leading labs should share safety evaluation methods and results for advanced models, potentially through "mutually recognized safety evaluation platforms."
* **Establish Incident Reporting and Emergency Response:** Channels for incident reporting and emergency response protocols, akin to "hotlines" between top AI officials, are essential for rapid, transparent communication in the event of AI-related accidents or misuse.

### Important Statistics and Metrics

* **3,500+:** Non-compliant AI products removed from the Chinese market in the first half of 2025.
* **3x:** China issued more national AI standards in the first half of 2025 than in the previous three years combined.
* **2x:** The volume of technical papers on frontier AI safety in China has more than doubled over the past year.
* **33:** Countries and intergovernmental organizations (including the U.S. and China) backing the International AI Safety Report.

### Notable Risks or Concerns

* **"Reckless race to the bottom":** The U.S. approach, driven by fear of falling behind China, could lead to a dangerous disregard for AI safety.
* **"High Capability" AI agents:** The potential for AI agents to facilitate the creation of dangerous biological threats.
* **Loss of human control:** Advanced AI systems may operate outside human control, posing catastrophic and existential risks.
* **Cybersecurity, CBRN, and manipulation:** Risks from AI in cybersecurity and chemical, biological, radiological, and nuclear (CBRN) domains, as well as large-scale persuasion and manipulation.

The report concludes that rather than using China as an excuse for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly with China, as AI risks are global and require a coordinated governance response.

China Is Taking AI Safety Seriously. So Must the U.S.


“China doesn’t care about AI safety—so why should we?” This flawed logic pervades U.S. policy and tech circles, offering cover for a reckless race to the bottom as Washington rushes to outpace Beijing in AI development. According to this rationale, regulating AI would risk falling behind in the so-called “AI arms race.” And since China supposedly doesn’t prioritize safety, racing ahead—even recklessly—is the safer long-term bet. This narrative is not just wrong; it’s dangerous.

Ironically, Chinese leaders may have a lesson for the U.S.’s AI boosters: true speed requires control. As China’s top tech official, Ding Xuexiang, put it bluntly at Davos in January 2025: “If the braking system isn’t under control, you can’t step on the accelerator with confidence.” For Chinese leaders, safety isn’t a constraint; it’s a prerequisite.

AI safety has become a political priority in China. In April, President Xi Jinping chaired a rare Politburo study session on AI, warning of “unprecedented” risks. China’s National Emergency Response Plan now lists AI safety alongside pandemics and cyberattacks. Regulators require pre-deployment safety assessments for generative AI and recently removed over 3,500 non-compliant AI products from the market. In just the first half of this year, China has issued more national AI standards than in the previous three years combined. Meanwhile, the volume of technical papers focused on frontier AI safety has more than doubled over the past year in China.

But the last time U.S. and Chinese leaders met to discuss AI’s risks was in May 2024. In September, officials from both nations hinted at a second round of conversations “at an appropriate time.” But no meeting took place under the Biden Administration, and there is even greater uncertainty over whether the Trump Administration will pick up the baton. This is a missed opportunity.

China is open to collaboration. In May 2025, it launched a bilateral AI dialogue with the United Kingdom. Esteemed Chinese scientists have contributed to major international efforts, such as the International AI Safety Report backed by 33 countries and intergovernmental organisations (including the U.S. and China) and The Singapore Consensus on Global AI Safety Research Priorities.

A necessary first step is to revive the dormant U.S.–China dialogue on AI risks. Without a functioning government-to-government channel, prospects for coordination remain slim. China indicated it was open to continuing the conversation at the end of the Biden Administration. It already yielded a modest but symbolically important agreement: both sides affirmed that human decision-making must remain in control of nuclear weapons. This channel has potential for further progress.

Going forward, discussions should focus on shared, high-stakes threats. Consider OpenAI’s recent classification of its latest ChatGPT Agent as having crossed the “High Capability” threshold in the biological domain under the company’s own Preparedness Framework. This means the agent could, at least in principle, provide users with meaningful guidance that might facilitate the creation of dangerous biological threats. Both Washington and Beijing have a vital interest in preventing non-state actors from weaponizing such tools. An AI-assisted biological attack would not respect national borders.

In addition, leading experts and Turing Award winners from the West and China share concerns that advanced general-purpose AI systems may come to operate outside of human control, posing catastrophic and existential risks.

Both governments have already acknowledged some of these risks. President Trump’s AI Action Plan warns that AI may “pose novel national security risks in the near future,” specifically in cybersecurity and in chemical, biological, radiological, and nuclear (CBRN) domains. Similarly, in September last year, China’s primary AI security standards body highlighted the need for AI safety standards addressing cybersecurity, CBRN, and loss-of-control risks.

From there, the two sides could take practical steps to build technical trust between leading standards organizations, such as China’s National Information Security Standardization Technical Committee (TC260) and America’s National Institute of Standards and Technology (NIST). Plus, industry authorities, such as the AI Industry Alliance of China (AIIA) and the Frontier Model Forum in the U.S., could share best practices on risk management frameworks. AIIA has formulated “Safety Commitments” which most leading Chinese developers have signed. A new Chinese risk management framework, focused fully on frontier risks including cyber misuse, biological misuse, large-scale persuasion and manipulation, and loss-of-control scenarios, was published during the World AI Conference (WAIC) and can help both countries align.

As trust deepens, governments and leading labs could begin sharing safety evaluation methods and results for the most advanced models. The Global AI Governance Action Plan, unveiled at WAIC, explicitly calls for the creation of “mutually recognized safety evaluation platforms.” As an Anthropic co-founder noted, a recent Chinese AI safety evaluation report reaches findings similar to those in the West: frontier AI systems pose some non-trivial CBRN risks and are beginning to show early warning signs of autonomous self-replication and deception. A shared understanding of model vulnerabilities—and of how those vulnerabilities are being tested—would lay the groundwork for broader safety cooperation.

Finally, the two sides could establish incident-reporting channels and emergency response protocols. In the event of an AI-related accident or misuse, rapid and transparent communication will be essential. A modern equivalent to “hotlines” between top AI officials in both countries could ensure real-time alerts when models breach safety thresholds or behave unexpectedly. In April, President Xi Jinping explicitly stressed the need for “monitoring, early risk warning and emergency response” in AI. After any dangerous incident, there should be a pre-agreed plan for how to react.

Engagement won’t be easy—political and technical hurdles are inevitable. But AI risks are global—and so must be the governance response. Rather than using China as a justification for domestic inaction on AI regulation, American policymakers and industry leaders should engage directly. AI risks won’t wait.

