Goose Pod

New Research Suggests Generative AI Really Is Harming Your Brain - Workplace Insight

2025-12-24 · technology
Summary

An MIT study suggests that generative AI is harming the brain. Over-reliance on AI for writing led to a 47% drop in neural connections linked to memory and logic and to "metacognitive illusions", with around 80% of heavy users unable to recall what they had written. AI is turning from assistant into substitute, weakening independent thinking; over time this could lead to cognitive decline, and people should stay alert.

In 30 seconds

  • An MIT study suggests that generative AI is harming the brain.
  • Over-reliance on AI for writing led to a 47% drop in neural connections linked to memory and logic and to "metacognitive illusions"; around 80% of heavy users could not recall what they had written.
  • AI is turning from assistant into substitute, weakening independent thinking; over time this could lead to cognitive decline.
Published: 12/18/2025
Sources: 1 cited
Listen: 5 min

Quick brief

The fastest way to understand what changed, why it matters, and what to listen for in the episode.

  • An MIT study suggests that generative AI is harming the brain.
  • Over-reliance on AI for writing led to a 47% drop in neural connections linked to memory and logic and to "metacognitive illusions"; around 80% of heavy users could not recall what they had written.
  • Okay, so I've been digging into this new study from MIT, published by Neil Franklin on Workplace Insight, dated December 18, 2025.

Why this summary is trustworthy

Goose Pod anchors each episode to cited reporting so listeners can verify the source material before or after they press play.

Articles reviewed: 1
Distinct sources: 1
Latest cited update: 12/18/2025
Topic path: technology

Listen to the episode

Start with the audio, then open the transcript only when you want the line-by-line version.


What happened

An MIT study suggests that generative AI is harming the brain. Over-reliance on AI for writing led to a 47% drop in neural connections linked to memory and logic and to "metacognitive illusions", with around 80% of heavy users unable to recall what they had written. AI is turning from assistant into substitute, weakening independent thinking; over time this could lead to cognitive decline, and people should stay alert.

December 18, 2025 · AI, News, Wellbeing

A new study from researchers at the Massachusetts Institute of Technology has raised questions about the potential impact of artificial intelligence tools on critical thinking and learning, particularly when GenAI is used as a substitute for cognitive effort rather than as an assistant.

The researchers examined how the use of large language models affects brain activity, memory and skill development over time. Although the findings have yet to undergo peer review and are based on a relatively small sample, the authors say they chose to release the results early because of the speed with which AI tools are being adopted in education and knowledge work.

The study involved 54 participants who were asked to complete a series of SAT-style essays under different conditions. One group was allowed to use ChatGPT, another relied on conventional web search, and a third completed the tasks without any external tools. Participants completed multiple essays while connected to EEG equipment that measured brain activity across 32 regions.

According to the researchers, participants using ChatGPT showed consistently lower levels of neural engagement than those in the other groups. Brain activity among this group also declined as the study progressed, suggesting a reduction in cognitive effort over time. The researchers reported that some participants increasingly relied on copying generated text rather than actively composing their essays.

The findings extended beyond immediate task performance. When participants were later asked to reproduce one of their earlier essays without assistance, those who had relied on ChatGPT showed weaker recall and less evidence of retained understanding. In contrast, participants who had initially worked without tools demonstrated stronger memory and engagement, and when later allowed to use ChatGPT, were able to enhance their arguments while retaining original structure and language.

The researchers argue that this distinction points to the importance of how AI tools are used rather than whether they are used at all. They suggest that AI may support learning and creativity when applied after cognitive effort has taken place, but may undermine long term skill acquisition if it replaces that effort entirely.

The study also arrives amid wider concerns about the quality and reliability of AI systems as they increasingly train on their own outputs, a phenomenon sometimes referred to as model collapse. Combined with the rapid uptake of AI in education and professional settings, the researchers say this raises questions about how learning, reasoning and judgement are developed in AI supported environments.

The authors conclude that further research is needed to understand how AI can be integrated into education and work without diminishing critical thinking, particularly as these tools become embedded in everyday professional practice.

Insight Publishing · 12/18/2025
Read original at Insight Publishing

Source coverage

Okay, so I've been digging into this new study from MIT, published by Neil Franklin on Workplace Insight, dated December 18, 2025. It's got me thinking about how GenAI is really shaping our cognitive processes, especially in the context of critical thinking and learning. This study, "A new study suggests that GenAI...



How this page is built

Goose Pod turns cited reporting into a public episode summary first, then pairs that summary with audio playback so listeners can check the source material before they decide how deeply to engage.

The goal is to make this page useful as a news landing page first, while still giving listeners transcript access, related episodes, and direct links back to the original publishers.


About this page

Goose Pod turns cited reporting into a public episode summary first, then pairs that summary with audio playback so listeners can compare the recap with the underlying source material.

This page reviewed 1 article across 1 source, with the latest cited update on 12/18/2025.
