Brain Implant Can Read Your Inner Voice


2025-08-18 · Technology
卿姐
Good morning, 韩纪飞. I'm 卿姐, and welcome to Goose Pod, made just for you. Today is Tuesday, August 19.
小撒
And I'm 小撒! Today we're talking about a topic straight out of science fiction: a new breakthrough in brain-computer interfaces. Whatever you think in your head, a machine can read it out directly! Doesn't that feel like a scene from The Matrix? We'll get into it right away!
卿姐
Indeed, 小撒. As the old line of poetry goes, "hearts linked by a single thread understand each other at once," only now that link is no longer between two people but between a person and a machine. A study recently published in the journal Cell has truly achieved the ability to read an "inner monologue."
小撒
Exactly! Scientists implanted microelectrodes in the motor cortex of four participants left severely paralyzed by amyotrophic lateral sclerosis (ALS, sometimes called the "gradual freezing" disease) or by a brain stem stroke. And guess what? Decoding accuracy reached as high as 74 percent!
卿姐
That is a remarkable step forward. The technology no longer requires users to laboriously "attempt" to speak, which is exhausting for them. Now they only need to quietly think of a sentence, and it appears on a screen in real time. For them, it is an enormous liberation in how they communicate.
小撒
Liberation! What a perfect word. Think about it: with the older devices, saying a single word could take several breaths. Now one participant was especially excited because he can finally interrupt a conversation again! That was simply impossible with the old, sluggish devices. That's what a real conversation is!
卿姐
Yes. The warmth of technology shows precisely in this kind of care for individual dignity and needs. And to protect privacy, the researchers also set up a rather playful "activation password": only when the user silently recites "chitty chitty bang bang" does the device start or stop transcribing.
小撒
"Chitty Chitty Bang Bang"? Ha, doesn't that sound like a Harry Potter spell? But seriously, it reminds me of another recent hot topic: treating AI chatbots as therapists. Look, if a machine can now read your thoughts, could it also help you sort out your worries while it's at it?
卿姐
That's a bold leap, but it is genuinely thought-provoking. As a recent Scientific American article noted, many people have begun turning to AIs such as ChatGPT for emotional support. They sound empathetic and offer affirmation, yet mental health professionals are deeply concerned.
小撒
Of course they're concerned! An AI has no real emotional experience; the "empathy" it offers is algorithmic imitation. If my entire inner monologue were read out and an AI came over and said, "小撒, I understand your anxiety," I might end up even more anxious! Is that genuine understanding, or just a sophisticated programmed response?
卿姐
I think this is the boundary question that runs through technological progress: the gap between what we can do and what we should do. Reading thoughts to restore communication is empowerment; imitating emotion with algorithms to "treat" people may touch the very foundations of human feeling and trust, and that calls for a far more cautious approach.
卿姐
Everything has its roots. The cutting-edge technology we're discussing today is built on more than half a century of exploration. The concept of the brain-computer interface (BCI) dates back to the 1970s, proposed by a researcher at the University of California, Los Angeles named Jacques Vidal.
小撒
Wow, the seventies! I wasn't even born yet! Computers back then were the size of cabinets, right? Vidal formally published the term in 1973, and in 1977 he built the first BCI application: using non-invasive electroencephalography (EEG), a person steered a cursor through a maze on a computer screen by "thought" alone!
卿姐
That's right, and it was truly pioneering. EEG, the electroencephalogram, is where it all began. It is like putting a "stethoscope" on the brain, eavesdropping on the electrical activity of cortical neurons through electrodes on the scalp. The signal is rather blurry, but it opened a window. As early as 1924, Hans Berger had discovered the electrical activity of the human brain.
小撒
Let me jump in here: when it comes to putting brain waves to use, the earliest "player" may have been an artist! In 1965 a musician named Alvin Lucier created a piece called Music for Solo Performer, using his brain waves to trigger percussion instruments. That must count as the earliest brain-computer interface art. So cool!
卿姐
A lovely example; art is often ahead of technology. Back to mainstream research: in 1988, scientists demonstrated control of a physical robot using non-invasive EEG. Two years later, in 1990, a more advanced "closed-loop" system appeared, one that controlled a buzzer by detecting the brain's "expectation" state.
小撒
A closed-loop system? That sounds like a genuine back-and-forth between you and the machine. My brain sends a signal, the machine receives it and acts, I see the result and adjust my thinking, and the machine responds again. Far smarter than a one-way "I command, you obey." No wonder the field took off afterwards!
卿姐
Exactly. Still, non-invasive techniques have inherent limits; the signal precision is low. So, after extensive animal experiments, scientists finally took the crucial step in the mid-1990s of implanting the first neuroprosthetic devices in humans, marking the start of the invasive-BCI era.
小撒
Implants, now that's serious! It's like going from listening to a concert outside the stadium, all muffled, to holding the microphone right up to the singer's mouth; the signal quality is worlds apart! Later the money arrived too: the BRAIN Initiative, launched in 2013 with the Defense Advanced Research Projects Agency (DARPA) among its backers, poured funding into the field.
卿姐
Yes, and with solid funding the technology advanced by leaps and bounds. The team at the University of California, Davis, for example, implanted four microelectrode arrays, 64 electrodes each, in a patient with ALS. After only 30 minutes of calibration, the system decoded a 50-word vocabulary with better than 99 percent accuracy!
小撒
My goodness! Thirty minutes of calibration, 99 percent accuracy! That's astonishingly efficient! They then spent another hour and a half expanding the vocabulary to 125,000 words while keeping accuracy above 90 percent. That participant used the system for a total of 248 hours during the study, producing about 32 words per minute on average. Now that is a communication speed with real practical value!
卿姐
That, I think, is the power of technology: not just cold numbers and devices, but a real reconnection with the world for people who cannot speak. From a vague idea decades ago to decoding thought at today's precision and speed, the persistence of generations of scientists lies behind it.
小撒
All right, we've covered the upside; time for the serious part. There's no free lunch, and behind this technology looms a very large privacy question. When machines can walk straight into the "private space" of our thoughts, who protects that last refuge? That's far scarier than any ordinary data breach.
卿姐
The point you raise is crucial. Thought is the last bastion of personal will. When neurotechnologies such as brain-computer interfaces, neuroimaging, and deep brain stimulation can collect and analyze our most sensitive brain data, mental privacy becomes an unavoidable issue.
小撒
Exactly! Companies like Synchron and Neuralink are already running clinical trials. But think about it: once the technology is commercialized, will shadier uses appear? Some companies already sell headsets that supposedly boost attention, or caps that track fatigue. Could the brain data those devices collect end up sold on to advertising companies?
卿姐
That is not scaremongering. If brain data were to leak, the consequences could be dire, including social and economic harm. Could employers even use such technology to monitor workers' mental state, attention level, or emotional reactions? Thinking it through is genuinely chilling.
小撒
Which is why the law has to keep up! In 2017, scholars proposed the concept of "neuro rights": granting individuals rights, in law and in ethics, that protect their brains and mental integrity from the potential harms of these advanced technologies. It's like putting a legal lock on our thoughts.
卿姐
Yes, and beyond privacy there is the question of fairness. Memory implants, if the technology matures, could open an enormous social divide. Using them to treat Alzheimer's disease is clearly good. But what if they could be used to enhance an ordinary person's memory to the point of perfect recall?
小撒
Then we have a real problem! Would a "super-memory class" emerge? Children from wealthy families get memory chips implanted young and pick up any subject as fast as copying files onto a computer, while everyone else is still grinding through vocabulary lists. Wouldn't that create a new, physiological inequality? That beats fighting over school-district housing by a mile!
卿姐
That is precisely the core of the "human enhancement" debate that worries ethicists. We might end up redefining "intelligence," and relationships could change if memories can be read or augmented at will. Technological progress must be accompanied by serious social and ethical reflection, so that it serves all of humanity rather than deepening divisions.
卿姐
When we weigh a new technology's social impact, the economic dimension is an important one. For brain-computer interfaces, especially as assistive technology for people with disabilities, the positive economics are plain to see: they can substantially lower long-term disability care costs and reduce reliance on intensive rehabilitation programs.
小撒
Exactly, and that math is easy. A person with a disability who can communicate, or even work, independently through a BCI gains hugely in quality of life and also lightens the financial burden on family and society. Restoring the ability to return to work creates economic value in itself. The global neurotechnology market is projected to reach a staggering 38 billion US dollars by 2032!
卿姐
Market potential that large will naturally attract heavy investment, drive innovation, and may even spawn entirely new industries. But, as we discussed earlier, it also raises questions of access and fairness. If memory-enhancing implants became "standard equipment" in certain professions, what would that do to the labor market?
小撒
A huge problem, absolutely! But before we even ask who gets access, there's a more immediate obstacle: in a market like the United States, no matter how good your technology is, if insurers won't reimburse it, you're stuck. The process is extremely involved. Beyond approval from the FDA (the Food and Drug Administration), which is itself slow and costly, you still have to win over the insurance companies.
卿姐
Yes. The FDA cares about safety and effectiveness, that is, the risks and benefits to patients. Insurers care more about the economics. They will ask: compared with what is already on the market, where is your advantage, and how much money do you save us? That demands an entirely different kind of data, such as cost-effectiveness studies.
小撒
I looked into this process, and it's basically a gauntlet! First you need a CPT code from the American Medical Association; that's the "ID card" for reimbursement. Then a committee called the RUC evaluates the value of your technology and decides how it will be priced. The whole thing takes years. That's why many companies have to treat reimbursement planning as a top priority from day one.
卿姐
So the social impact of a breakthrough technology depends not only on how advanced it is, but also on economic, policy, and regulatory constraints. Building a fair and efficient system, so that the people who truly need these technologies can afford them, is a key problem for the decade ahead.
小撒
And the future, now that's exciting! The next step is a powerful alliance between artificial intelligence (AI) and brain-computer interfaces (BCIs). AI's analytical muscle can decode the massive, noisy neural signals a BCI collects in real time and with greater precision, predicting the user's intent.
卿姐
Yes. Deep-learning algorithms can make BCI systems "smarter" and more personalized, adapting to each user's unique patterns of brain activity and greatly simplifying calibration. One day we may no longer need invasive surgery at all; more advanced non-invasive sensors combined with intelligent algorithms could deliver efficient human-machine interaction.
小撒
There's another revolutionary trend: "closed-loop" systems! A BCI that can not only "read" the brain's signals but also "write" back, feeding signals into the brain as neural stimulation. That's a big deal; it's like building a two-way highway between brain and machine, allowing more precise control of neural prostheses and even the treatment of certain neurological disorders.
卿姐
For patients who have lost the ability to communicate, I believe the goal is not merely typing, but truly recovering the richness and nuance of natural self-expression. We look forward to the day this technology helps them regain their lives and lets thought flow and travel freely once more.
卿姐
To sum up: a new brain-computer interface that can decode the inner monologue brings the hope of communication to people with severe paralysis. An advance that joins cutting-edge technology with deep human care points toward a whole new future.
小撒
Exactly! That's all for today. Thank you for listening to Goose Pod, and we'll see you tomorrow!

## New Brain Implant Reads Inner Speech in Real Time

**News Title:** This Brain Implant Can Read Out Your Inner Monologue
**Publisher:** Scientific American
**Author:** Emma R. Hasson
**Publication Date:** August 14, 2025

This report details a groundbreaking advancement in brain-computer interfaces (BCIs) that allows individuals with severe paralysis to communicate by reading out their "inner speech" – the thoughts they have when they imagine speaking. The new neural prosthetic offers a significant improvement over existing technologies, which often require users to physically attempt to speak.

### Key Findings and Technology

* **Inner Speech Decoding:** The system uses sensors implanted in the brain's motor cortex, the area that sends motion commands to the vocal tract. Because this area is also involved in imagined speech, the researchers trained a machine-learning model that interprets these neural signals and decodes inner thoughts into words in real time (a toy decoding sketch follows this summary).
* **Improved Communication for Paralysis:** The technology is particularly beneficial for individuals with conditions such as amyotrophic lateral sclerosis (ALS) and brain stem stroke, who have limited or no ability to speak.
* **Contrast with Previous Methods:**
  * **Blinking/Muscle Twitches:** Older methods relied on eye movements or small muscle twitches to select words from a screen.
  * **Attempted-Speech BCIs:** More recent BCIs require users to physically attempt to speak, which can be slow, tiring, and difficult for those with impaired breathing. The new "inner speech" system bypasses the need for physical speech attempts.
* **Vocabulary Size:** Previous inner-speech decoders were limited to a few words. The new device lets participants draw from a dictionary of **125,000 words**.
* **Communication Speed:** Participants could communicate at a comfortable conversational rate of approximately **120 to 150 words per minute**, with no more effort than thinking. This is a significant improvement over attempted-speech devices, which can be hampered by breathing difficulties and produce distracting noises.
* **Target Conditions:** The technology is designed for people whose "idea to plan" stage of speech is functional but whose "plan to movement" stage is broken, a collection of conditions known as dysarthria.

### Study Details

* **Participants:** The research involved **three participants with ALS** and **one participant with a brain stem stroke**, all of whom already had the necessary brain sensors implanted.
* **Publication:** The results were published on Thursday in the journal *Cell*.

### User Experience and Impact

* **Comfort and Naturalism:** Lead author Erin Kunz of Stanford University describes the goal as a "naturalistic ability" and comfortable communication for users.
* **Enhanced Social Interaction:** One participant was particularly excited about the newfound ability to interrupt conversations, a capability lost with slower communication methods.
* **Personal Motivation:** Kunz's experience with her father, who had ALS and lost the ability to speak, drives her research in this field.

### Privacy and Future Considerations

* **Privacy Safeguard:** A code phrase, "chitty chitty bang bang," lets participants start or stop the transcription process, ensuring private thoughts remain private.
* **Ethical Oversight:** While brain-reading implants raise privacy concerns, Alexander Huth of the University of California, Berkeley, expresses confidence in the integrity of the research groups, noting their patient-focused approach and dedication to solving problems for individuals with paralysis.

### Participant Contribution

The report emphasizes the crucial role and dedication of the research participants, who volunteered to advance this technology for the benefit of others with paralysis.
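To make the pipeline described above concrete, here is a minimal, purely illustrative Python sketch of the two mechanisms this summary mentions: vocabulary-constrained decoding of motor-cortex feature windows into words, and a code-phrase gate that starts or stops transcription. This is a sketch under stated assumptions, not the study's implementation: the 64-dimensional feature windows, the random linear scorer `W`, the three-word toy vocabulary, and the names `phoneme_logprobs`, `decode_word`, and `run_session` are hypothetical stand-ins.

```python
"""Toy sketch of an inner-speech decoding loop (illustrative only).

Hypothetical stand-ins, not the study's models: the 64-dimensional feature
windows, the random linear "decoder" W, the three-word vocabulary, and the
length-matching word search are all assumptions made for this sketch.
"""
import numpy as np

rng = np.random.default_rng(0)

PHONEMES = ["CH", "IH", "T", "IY", "B", "AE", "NG", "HH", "EH", "L", "OW"]
VOCAB = {                       # word -> toy phoneme spelling
    "chitty": ["CH", "IH", "T", "IY"],
    "bang":   ["B", "AE", "NG"],
    "hello":  ["HH", "EH", "L", "OW"],
}
CODE_PHRASE = ["chitty", "chitty", "bang", "bang"]  # start/stop gate from the study

N_CHANNELS = 64                 # stand-in for one array's worth of neural features
W = rng.normal(size=(len(PHONEMES), N_CHANNELS))    # stand-in for a trained decoder


def phoneme_logprobs(window: np.ndarray) -> np.ndarray:
    """Score one window of neural features against every phoneme (log-softmax)."""
    logits = W @ window
    logits = logits - logits.max()
    return logits - np.log(np.exp(logits).sum())


def decode_word(windows: list[np.ndarray]) -> str:
    """Vocabulary-constrained decoding: pick the dictionary word whose phoneme
    spelling best matches the per-window phoneme scores."""
    best_word, best_score = "", -np.inf
    for word, spelling in VOCAB.items():
        if len(spelling) != len(windows):   # naive length-matching rule (toy only)
            continue
        score = sum(phoneme_logprobs(w)[PHONEMES.index(p)]
                    for w, p in zip(windows, spelling))
        if score > best_score:
            best_word, best_score = word, score
    return best_word


def run_session(word_windows: list[list[np.ndarray]]) -> list[str]:
    """Decode word by word; the code phrase toggles transcription on or off."""
    transcript: list[str] = []
    recent: list[str] = []
    transcribing = False
    for windows in word_windows:
        word = decode_word(windows)
        recent = (recent + [word])[-len(CODE_PHRASE):]
        if recent == CODE_PHRASE:           # silently "spoken" gate phrase
            transcribing = not transcribing
            recent = []
            continue
        if transcribing and word:
            transcript.append(word)
    return transcript


if __name__ == "__main__":
    def fake_windows(word: str) -> list[np.ndarray]:
        """Synthesize features the toy decoder can recover (demo only)."""
        return [W[PHONEMES.index(p)] + 0.1 * rng.normal(size=N_CHANNELS)
                for p in VOCAB[word]]

    session = [fake_windows(w) for w in CODE_PHRASE] + [fake_windows("hello")]
    print(run_session(session))             # expected: ['hello']
```

In the actual system, the per-window scorer is a machine-learning model trained on recordings from implanted microelectrode arrays, and the word search runs over a 125,000-word dictionary, typically guided by a language model rather than the naive length-matching rule used here.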

This Brain Implant Can Read Out Your Inner Monologue


August 14, 2025 | 4 min read

New Brain Device Is First to Read Out Inner Speech

A new brain prosthesis can read out inner thoughts in real time, helping people with ALS and brain stem stroke communicate fast and comfortably. (Image: Andrzej Wojcicki/Science Photo Library/Getty Images)

After a brain stem stroke left him almost entirely paralyzed in the 1990s, French journalist Jean-Dominique Bauby wrote a book about his experiences—letter by letter, blinking his left eye in response to a helper who repeatedly recited the alphabet.

Today people with similar conditions often have far more communication options. Some devices, for example, track eye movements or other small muscle twitches to let users select words from a screen.

And on the cutting edge of this field, neuroscientists have more recently developed brain implants that can turn neural signals directly into whole words.

These brain-computer interfaces (BCIs) largely require users to physically attempt to speak, however—and that can be a slow and tiring process. But now a new development in neural prosthetics changes that, allowing users to communicate by simply thinking what they want to say.

The new system relies on much of the same technology as the more common “attempted speech” devices.

Both use sensors implanted in a part of the brain called the motor cortex, which sends motion commands to the vocal tract. The brain activation detected by these sensors is then fed into a machine-learning model to interpret which brain signals correspond to which sounds for an individual user. It then uses those data to predict which word the user is attempting to say.

But the motor cortex doesn’t only light up when we attempt to speak; it’s also involved, to a lesser extent, in imagined speech.

The researchers took advantage of this to develop their “inner speech” decoding device and published the results on Thursday in Cell. The team studied three people with amyotrophic lateral sclerosis (ALS) and one with a brain stem stroke, all of whom had previously had the sensors implanted. Using this new “inner speech” system, the participants needed only to think a sentence they wanted to say and it would appear on a screen in real time.

While previous inner speech decoders were limited to only a handful of words, the new device allowed participants to draw from a dictionary of 125,000 words.

[Video caption: A participant is using the inner speech neuroprosthesis. The text above is the cued sentence, and the text below is what’s being decoded in real time as she imagines speaking the sentence.]

“As researchers, our goal is to find a system that is comfortable [for the user] and ideally reaches a naturalistic ability,” says lead author Erin Kunz, a postdoctoral researcher who is developing neural prostheses at Stanford University. Previous research found that “physically attempting to speak was tiring and that there were inherent speed limitations with it, too,” she says.

Attempted speech devices such as the one used in the study require users to inhale as if they are actually saying the words. But because of impaired breathing, many users need multiple breaths to complete a single word with that method. Attempting to speak can also produce distracting noises and facial expressions that users find undesirable.

With the new technology, the study’s participants could communicate at a comfortable conversational rate of about 120 to 150 words per minute, with no more effort than it took to think of what they wanted to say.

Like most BCIs that translate brain activation into speech, the new technology only works if people are able to convert the general idea of what they want to say into a plan for how to say it.

Alexander Huth, who researches BCIs at the University of California, Berkeley, and wasn’t involved in the new study, explains that in typical speech, “you start with an idea of what you want to say. That idea gets translated into a plan for how to move your [vocal] articulators. That plan gets sent to the actual muscles, and then they carry it out.” But in many cases, people with impaired speech aren’t able to complete that first step. “This technology only works in cases where the ‘idea to plan’ part is functional but the ‘plan to movement’ part is broken”—a collection of conditions called dysarthria—Huth says.

According to Kunz, the four research participants are eager about the new technology.

“Largely, [there was] a lot of excitement about potentially being able to communicate fast again,” she says—adding that one participant was particularly thrilled by his newfound potential to interrupt a conversation—something he couldn’t do with the slower pace of an attempted speech device.

To ensure private thoughts remained private, the researchers implemented a code phrase: “chitty chitty bang bang.” When internally spoken by participants, this would prompt the BCI to start or stop transcribing. Brain-reading implants inevitably raise concerns about mental privacy. For now, Huth isn’t concerned about the technology being misused or developed recklessly, speaking to the integrity of the research groups involved in neural prosthetics research.

“I think they’re doing great work; they’re led by doctors; they’re very patient-focused. A lot of what they do is really trying to solve problems for the patients,” he says, “even when those problems aren’t necessarily things that we might think of,” such as being able to interrupt a conversation or “making a voice that sounds more like them.”

For Kunz, this research is particularly close to home. “My father actually had ALS and lost the ability to speak,” she says, adding that this is why she got into her field of research. “I kind of became his own personal speech translator toward the end of his life since I was kind of the only one that could understand him. That’s why I personally know the importance and the impact this sort of research can have.”

The contribution and willingness of the research participants are crucial in studies like this, Kunz notes. “The participants that we have are truly incredible individuals who volunteered to be in the study not necessarily to get a benefit to themselves but to help develop this technology for people with paralysis down the line. And I think that they deserve all the credit in the world for that.”
