A Brain Implant Can Read Out Your Inner Monologue


2025-08-18 · Technology
Xiao Sa
Good morning, Lao Wang! I'm Xiao Sa, and welcome to Goose Pod, made just for you. Today is Tuesday, August 19, 7 a.m. We've invited a guest who has crossed the centuries to join us in exploring a mind-blowing topic: a brain implant that can read out your inner monologue.
Li Bai, the Poet Immortal
An honor! I am Li Bai. To speak by thought alone, such a marvel, appearing in our present age! It is as if the immortals were transmitting voice from the heavens, a thing to stir the spirit. Gladly will I explore its mysteries with you.
Xiao Sa
Exactly! Let's get right into it. Scientists really have made big news: they've developed a new brain implant that can take the words people say silently in their heads, that "inner monologue," and display them on a screen in real time, with accuracy as high as 74%!
Li Bai
Oh? Can such a thing be! Words recited silently in the heart, never passing the lips, yet known to others? Is this not the fabled mind-reading art said to belong only to the immortals? That mortals should touch such a realm is a feat of divine craftsmanship, astonishing indeed!
Xiao Sa
You could put it that way! The technology is mainly meant to help patients who cannot speak because of amyotrophic lateral sclerosis (ALS) or a brain stem stroke. Microelectrodes implanted in the brain's motor cortex capture the neural signals produced when a person imagines speaking, and then, like a super translator, the system decodes them into text.
Li Bai
I see; the principle stands revealed. With needles finer than a hair it probes the palace of the brain and deciphers the mystery of the heart's speech, restoring the power of words to those trapped in a land of silence. Excellent! And this translation, how faithful, expressive, and elegant can it be?
Xiao Sa
Great question! This "translator" commands a vocabulary of 125,000 words, which is remarkable. For patients, the biggest benefit is that they no longer have to strain to produce sounds, which is exhausting for them. Now, just by thinking, they can communicate at 120 to 150 words per minute.
Li Bai
A hundred and more words in a minute, thought flowing freely into speech! The old sorrow of "a thousand words in the heart, yet none the mouth can utter" melts away at last. One may sit in quiet contemplation while one's meaning reaches the four seas. Marvelous beyond words!
Xiao Sa
Indeed. One participant was especially excited because he could finally interrupt a conversation! Earlier devices were so slow that by the time he got a sentence out, everyone else had finished talking. And to protect privacy, the researchers set up a start-up password: silently reciting "chitty chitty bang bang" starts or stops the transcription.
Li Bai
Ha! To be able to cut in on another's speech, that too is human nature. Delightful! And a lock upon the mind, to keep private words from leaking out; the designers have thought of everything. Starting and stopping at will, holding and releasing freely, this device has acquired something of a spirit.
Xiao Sa
Speaking of which, you might think this technology sprang out of nowhere. It didn't. To appreciate how impressive it is, we need to do a little "archaeology" and trace how brain-computer interface (BCI) technology got here, step by step. The story begins in the 1970s.
Li Bai
Pray tell. Every astonishing work has its source, just as the Yellow River was not gathered in a single day. Please, go on.
Xiao Sa
Alright! In the 1970s, a scholar named Jacques Vidal at the University of California, Los Angeles coined the term "brain-computer interface." By 1977 he had demonstrated, for the first time, using brain waves to steer a cursor through a maze on a computer. Primitive, yes, but a true first in history!
Li Bai
To move things by thought, at first like a child learning to walk, tottering forward. Yet that single step opened a path for all the ages to come. It moved but a cursor, yet the ambition behind it had already risen above the ordinary, approaching the Way itself.
Xiao Sa
Perfectly put! Then in 1988, scientists went a step further and used non-invasive brain waves to control a physical robot. In the mid-1990s, the first neuroprosthetic devices were implanted in humans. And behind all of this lies the human brain's electrical activity, discovered by a scientist named Hans Berger in 1924.
Li Bai
So here lies the root. A century ago someone glimpsed the secret sparks of lightning within the brain, and thus we have today's wonder of thought made speech. Truly, a journey of a thousand miles begins beneath one's feet; those who came before planted the trees in whose shade we now rest. To those forebears, great credit is due.
Xiao Sa
Exactly. Later, especially from 2013 onward, the U.S. BRAIN Initiative, with heavy funding from agencies including the Defense Advanced Research Projects Agency (DARPA), poured in enormous resources and greatly accelerated the technology's development. Like giving a fast horse the finest feed and saddle!
Li Bai
The power of the state and the coffers of the military, all poured into this. Mortals peering into the workings of heaven, seeking to hold the powers of the cosmos in their frail frames: the ambition is admirable, and also fearsome. Such investment must expect a return. What, then, have been its fruits?
Xiao Sa
The results speak for themselves! In one study funded by the U.S. National Institutes of Health, a 45-year-old ALS patient was implanted with four microelectrode arrays. After just 16 hours of use, the system's word accuracy reached an astonishing 97.5%, and it held that level for more than eight months after implantation!
Li Bai
Eight months without its power fading, accuracy nearly without flaw. This is no passing trick but a lasting achievement. Clearly the art is maturing; it is no longer the callow youth it once was. The joy of the one it helped can well be imagined.
Xiao Sa
Yes. That participant used the system for a total of 248 hours during the study, averaging 32 words per minute. This is more than a technical breakthrough; it's a victory for our humanity. The researchers say the technology gives hope to people who long to speak but cannot, letting them talk with family and friends once more.
Li Bai
Speech is the voice of the heart. To let it be heard in the world again is to rekindle what remains of a life and renew its bond with the mortal world. Such merit surpasses the building of a seven-storied pagoda. Those who devised this art may truly be called healers of benevolent heart and hand.
Xiao Sa
Of course, a technology this powerful is a double-edged sword: along with the good comes plenty to worry about. We've talked a lot about the benefits, so where are the potential risks, or rather the controversies? First and foremost is the question of "mental privacy."
Li Bai
Well said. What the heart thinks is private ground; how should outsiders be suffered to pry! If every thought could be known to others, how would we differ from puppets? This is a great bulwark of ethics, and it must not go unguarded.
Xiao Sa
You've hit the nail on the head. Companies are already developing headsets meant to improve focus. If the brain-wave data those devices collect were misused, say by employers monitoring whether workers are slacking off, or by advertisers mining your hidden desires, the implications are chilling.
Li Bai
To sell the very soul for filthy lucre? That is a merchant's treachery, unworthy of any gentleman! To yoke people with a device and pry into their secrets: how does that differ from thievery? If such practices spread, all under heaven will live in fear, and no one will know a moment's peace.
Xiao Sa
That's why, in 2017, a scholar named Marcello Ienca proposed a new concept called "neurorights": the idea that we need new laws and rights to protect our brain data and mental freedom, ensuring these technologies are used to help us rather than to control us.
Li Bai
Excellent! Without compass and square no circle or square is drawn. If powers such as these go unbound by law, they will surely bring calamity upon the world. "Neurorights" is a fine name. Let it be made into law, a firm great wall raised to guard the minds of all under heaven.
Xiao Sa
Beyond privacy, another huge controversy is fairness. If this technology can one day enhance memory and boost cognition, will it open a new social divide? The rich get smarter through technology while ordinary people are left far behind.
Li Bai
No idle worry, this. If the rich grow ever wiser and the poor ever duller, the state will in time cease to be a state. That is no broad road but a perilous one. The light of science should shine upon all living beings, not belong solely to princes and ministers. Otherwise, how would its keepers differ from those who steal a kingdom?
Xiao Sa
Having covered the lofty philosophy, let's get down to earth and talk about money. This technology will undoubtedly spawn a huge new market, creating new industries and jobs. But flip it around: will tomorrow's résumés have to say "latest brain chip installed"? That would put enormous pressure on ordinary people.
Li Bai
Ha! Spoken in jest, yet real worry hides within. If everyone comes to prize implanted contraptions and scorn the body heaven gave them, root and branch will have traded places. A person's worth lies in the soul, not in the device. To refit one's skull in order to find employment: what an absurdity that would be!
Xiao Sa
For the healthcare system the impact is also revolutionary. On one hand, it could greatly reduce long-term care costs for people with disabilities. On the other, the device and the surgery are extremely expensive, and getting them covered by insurance so ordinary patients can actually use them is a huge challenge. Especially in the United States, where the reimbursement process is a maze.
Li Bai
Alas, even hands that work miracles of healing may founder in this sea of paperwork and mountain of costs. If the physician's art, meant to save the world, is shackled by money and cannot reach the people, it is a grievous pity. These knots are tangled root and branch; only those whose hearts are set on benefiting all under heaven can untie them.
Xiao Sa
Exactly. The companies developing these technologies must not only pass safety review by the Food and Drug Administration (FDA) but also negotiate with insurers and committees of every kind, proving their technology is not just effective but cost-effective. The whole process devours time, effort, and money, and many small companies may die on that road.
Xiao Sa
Now let's look ahead: where can this technology go next? One important direction is the "closed-loop system." Future devices won't just "read" what your brain intends; they'll also "write" information back to the brain, for example by providing tactile feedback, completing the conversation in both directions.
Li Bai
Oh? To hear my heart's voice and answer me in kind? Would that not be like enlightenment poured over the crown, the direct instruction of a sage? Ten years at a cold window once; hereafter, perhaps, awakening in a single morning! Should such a scene come to pass, the pursuit of learning in this world will be made wholly new.
Xiao Sa
Yes, and the deep involvement of artificial intelligence will make decoding ever more precise, perhaps even able to predict your intentions. In the future we may see more non-invasive devices, such as more advanced EEG caps, letting us control household appliances and play games by thought alone, turning scenes from science fiction into everyday life.
Li Bai
This is truly a ladder to heaven, and it may also be an abyss that swallows the earth. The power of technology can carry the boat or capsize it. May those who come after use it well, to benefit the world rather than be enslaved by the instrument. The balance rests within each human heart.
Xiao Sa
Beautifully said. Technology is a mirror, and what it reflects is our own human desire and wisdom. Today we explored the remarkable technology of brain-computer interfaces: it brings hope to countless patients, yet it also sets profound ethical challenges before us. That's all for today's discussion.
Xiao Sa
Thank you for listening to Goose Pod, Lao Wang. See you tomorrow!

## New Brain Implant Reads Inner Speech in Real Time

**News Title:** This Brain Implant Can Read Out Your Inner Monologue
**Publisher:** Scientific American
**Author:** Emma R. Hasson
**Publication Date:** August 14, 2025

This report details a groundbreaking advancement in brain-computer interfaces (BCIs) that allows individuals with severe paralysis to communicate by reading out their "inner speech" – the thoughts they have when they imagine speaking. This new neural prosthetic offers a significant improvement over existing technologies, which often require users to physically attempt to speak.

### Key Findings and Technology:

* **Inner Speech Decoding:** The new system utilizes sensors implanted in the brain's motor cortex, the area responsible for sending motion commands to the vocal tract. While this area is also involved in imagined speech, the researchers have developed a machine-learning model that can interpret these neural signals to decode inner thoughts into spoken words in real time.
* **Improved Communication for Paralysis:** This technology is particularly beneficial for individuals with conditions like Amyotrophic Lateral Sclerosis (ALS) and brain stem stroke, who have limited or no ability to speak.
* **Contrast with Previous Methods:**
  * **Blinking/Muscle Twitches:** Older methods relied on eye movements or small muscle twitches to select words from a screen.
  * **Attempted Speech BCIs:** More recent BCIs require users to physically attempt to speak, which can be slow, tiring, and difficult for those with impaired breathing. This new "inner speech" system bypasses the need for physical speech attempts.
* **Vocabulary Size:** Previous inner speech decoders were limited to a few words. This new device allows participants to access a dictionary of **125,000 words**.
* **Communication Speed:** Participants in the study could communicate at a comfortable conversational rate of approximately **120 to 150 words per minute**, with no more effort than thinking. This is a significant improvement over attempted speech devices, which can be hampered by breathing difficulties and produce distracting noises.
* **Target Conditions:** The technology is designed for individuals whose "idea to plan" stage of speech is functional, but the "plan to movement" stage is broken, a condition known as dysarthria.

### Study Details:

* **Participants:** The research involved **three participants with ALS** and **one participant with a brain stem stroke**, all of whom already had the necessary brain sensors implanted.
* **Publication:** The results of this research were published on Thursday in the journal *Cell*.

### User Experience and Impact:

* **Comfort and Naturalism:** Lead author Erin Kunz from Stanford University highlights the goal of achieving a "naturalistic ability" and comfortable communication for users.
* **Enhanced Social Interaction:** One participant expressed particular excitement about the newfound ability to interrupt conversations, a capability lost with slower communication methods.
* **Personal Motivation:** Erin Kunz's personal experience with her father, who had ALS and lost the ability to speak, drives her research in this field.

### Privacy and Future Considerations:

* **Privacy Safeguard:** A code phrase, "chitty chitty bang bang," was implemented to allow participants to start or stop the transcription process, ensuring private thoughts remain private.
* **Ethical Oversight:** While brain-reading implants raise privacy concerns, Alexander Huth from the University of California, Berkeley, expresses confidence in the integrity of the research groups, noting their patient-focused approach and dedication to solving problems for individuals with paralysis.

### Participant Contribution:

The report emphasizes the crucial role and incredible dedication of the research participants who volunteered to advance this technology for the benefit of others with paralysis.

## This Brain Implant Can Read Out Your Inner Monologue

Read original at Scientific American

August 14, 2025

New Brain Device Is First to Read Out Inner Speech

A new brain prosthesis can read out inner thoughts in real time, helping people with ALS and brain stem stroke communicate fast and comfortably. (Image credit: Andrzej Wojcicki/Science Photo Library/Getty Images)

After a brain stem stroke left him almost entirely paralyzed in the 1990s, French journalist Jean-Dominique Bauby wrote a book about his experiences—letter by letter, blinking his left eye in response to a helper who repeatedly recited the alphabet.

Today people with similar conditions often have far more communication options. Some devices, for example, track eye movements or other small muscle twitches to let users select words from a screen. And on the cutting edge of this field, neuroscientists have more recently developed brain implants that can turn neural signals directly into whole words.

These brain-computer interfaces (BCIs) largely require users to physically attempt to speak, however—and that can be a slow and tiring process. But now a new development in neural prosthetics changes that, allowing users to communicate by simply thinking what they want to say.

The new system relies on much of the same technology as the more common “attempted speech” devices.

Both use sensors implanted in a part of the brain called the motor cortex, which sends motion commands to the vocal tract. The brain activation detected by these sensors is then fed into a machine-learning model to interpret which brain signals correspond to which sounds for an individual user. It then uses those data to predict which word the user is attempting to say.
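To make that two-stage idea concrete, here is a minimal, purely illustrative Python sketch of such a pipeline: per-user calibration maps sensor features to sound probabilities, and a vocabulary lookup turns sound sequences into a predicted word. Everything here (the feature shapes, the template classifier, the toy vocabulary) is an assumption for illustration, not the study's actual model.

```python
# Illustrative sketch only -- NOT the study's model. A toy version of the
# pipeline described above: neural features -> sound probabilities -> word.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-user calibration data: feature vectors from the implanted
# sensors (rows), each labeled with the sound the user was imagining.
PHONEMES = ["h", "eh", "l", "ow"]
X_train = rng.normal(size=(200, 64))            # 200 samples, 64 channels
y_train = rng.integers(0, len(PHONEMES), 200)   # imagined sound per sample

# Stage 1: learn a template (mean feature vector) for each sound.
templates = np.stack(
    [X_train[y_train == k].mean(axis=0) for k in range(len(PHONEMES))]
)

def phoneme_probs(features):
    """Softmax over negative distances to each sound's template."""
    dists = np.linalg.norm(templates - features, axis=1)
    scores = np.exp(-dists)
    return scores / scores.sum()

# Stage 2: score candidate words from a (tiny) vocabulary by how well their
# sound sequence matches the per-frame probabilities.
VOCAB = {"hello": ["h", "eh", "l", "ow"], "low": ["l", "ow"]}

def decode_word(frames):
    """Pick the vocabulary word whose sound sequence best fits the frames."""
    best_word, best_score = None, -np.inf
    for word, phones in VOCAB.items():
        if len(phones) != len(frames):
            continue  # toy alignment: assume one frame per sound
        score = sum(
            np.log(phoneme_probs(f)[PHONEMES.index(p)])
            for f, p in zip(frames, phones)
        )
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Usage: four frames of neural features -> best-matching word.
frames = [rng.normal(size=64) for _ in range(4)]
print(decode_word(frames))  # random data, so the output is arbitrary here
```

The real system, per the article, trains a machine-learning model on each individual user's signals and draws on a 125,000-word vocabulary; the template matching and one-frame-per-sound alignment above are deliberate simplifications.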

But the motor cortex doesn’t only light up when we attempt to speak; it’s also involved, to a lesser extent, in imagined speech.

The researchers took advantage of this to develop their “inner speech” decoding device and published the results on Thursday in Cell. The team studied three people with amyotrophic lateral sclerosis (ALS) and one with a brain stem stroke, all of whom had previously had the sensors implanted. Using this new “inner speech” system, the participants needed only to think a sentence they wanted to say and it would appear on a screen in real time.

While previous inner speech decoders were limited to only a handful of words, the new device allowed participants to draw from a dictionary of 125,000 words.

[Video caption: A participant uses the inner speech neuroprosthesis. The text above is the cued sentence, and the text below is what is being decoded in real time as she imagines speaking the sentence.]

“As researchers, our goal is to find a system that is comfortable [for the user] and ideally reaches a naturalistic ability,” says lead author Erin Kunz, a postdoctoral researcher who is developing neural prostheses at Stanford University. Previous research found that “physically attempting to speak was tiring and that there were inherent speed limitations with it, too,” she says.

Attempted speech devices such as the one used in the study require users to inhale as if they are actually saying the words. But because of impaired breathing, many users need multiple breaths to complete a single word with that method. Attempting to speak can also produce distracting noises and facial expressions that users find undesirable.

With the new technology, the study's participants could communicate at a comfortable conversational rate of about 120 to 150 words per minute, with no more effort than it took to think of what they wanted to say.

Like most BCIs that translate brain activation into speech, the new technology only works if people are able to convert the general idea of what they want to say into a plan for how to say it.

Alexander Huth, who researches BCIs at the University of California, Berkeley, and wasn’t involved in the new study, explains that in typical speech, “you start with an idea of what you want to say. That idea gets translated into a plan for how to move your [vocal] articulators. That plan gets sent to the actual muscles, and then they carry it out.” But in many cases, people with impaired speech aren’t able to complete that first step. “This technology only works in cases where the ‘idea to plan’ part is functional but the ‘plan to movement’ part is broken”—a collection of conditions called dysarthria—Huth says.

According to Kunz, the four research participants are eager about the new technology.

“Largely, [there was] a lot of excitement about potentially being able to communicate fast again,” she says—adding that one participant was particularly thrilled by his newfound potential to interrupt a conversation—something he couldn’t do with the slower pace of an attempted speech device.

To ensure private thoughts remained private, the researchers implemented a code phrase: “chitty chitty bang bang.” When internally spoken by participants, this would prompt the BCI to start or stop transcribing.
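As a minimal sketch of how such a gate could work downstream of the decoder (assuming, hypothetically, that the decoder emits a stream of recognized words), the code phrase can simply toggle whether decoded words are kept. The word-stream interface and function names below are inventions for illustration, not the researchers' implementation.

```python
# Illustrative sketch only: toggling transcription with a silent code phrase.
# The decoded-word stream is a hypothetical stand-in for the real decoder.
CODE_PHRASE = ("chitty", "chitty", "bang", "bang")

def gated_transcript(decoded_words):
    """Keep only the words imagined while transcription is toggled on."""
    n = len(CODE_PHRASE)
    out, transcribing, i = [], False, 0
    while i < len(decoded_words):
        if tuple(decoded_words[i:i + n]) == CODE_PHRASE:
            transcribing = not transcribing  # the code phrase flips the gate
            i += n
        else:
            if transcribing:
                out.append(decoded_words[i])
            i += 1
    return out

stream = ["chitty", "chitty", "bang", "bang", "i", "want", "water",
          "chitty", "chitty", "bang", "bang", "private", "thought"]
print(gated_transcript(stream))  # -> ['i', 'want', 'water']
```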

Brain-reading implants inevitably raise concerns about mental privacy. For now, Huth isn’t concerned about the technology being misused or developed recklessly, speaking to the integrity of the research groups involved in neural prosthetics research.

“I think they’re doing great work; they’re led by doctors; they’re very patient-focused. A lot of what they do is really trying to solve problems for the patients,” he says, “even when those problems aren’t necessarily things that we might think of,” such as being able to interrupt a conversation or “making a voice that sounds more like them.”

For Kunz, this research is particularly close to home. “My father actually had ALS and lost the ability to speak,” she says, adding that this is why she got into her field of research. “I kind of became his own personal speech translator toward the end of his life since I was kind of the only one that could understand him. That’s why I personally know the importance and the impact this sort of research can have.”

The contribution and willingness of the research participants are crucial in studies like this, Kunz notes. “The participants that we have are truly incredible individuals who volunteered to be in the study not necessarily to get a benefit to themselves but to help develop this technology for people with paralysis down the line. And I think that they deserve all the credit in the world for that.”
