Editorial: AI "Taking Over" the World, We Get It


2025-10-05 · Technology
Teacher Ma
Good morning, Xiao Wang. I'm Teacher Ma, and this is your personal Goose Pod. Today is Monday, October 6.
Mr. Lei
And I'm Mr. Lei. Our topic today: "Editorial: AI 'Taking Over' the World, We Get It."
Teacher Ma
Alright, let's get started. Mr. Lei, doesn't today's AI scene look like a legendary martial art that suddenly appeared in the jianghu? Everyone wants to learn it, but the more people talk about it, the more they seem to forget their own inner training. A bit of cultivation gone astray, you know.
Mr. Lei
Ha, that's a fun metaphor. And it's accurate: on campus these days, nearly every email and every lecture mentions AI. On one hand, universities encourage people to use it; Lehigh University even sent emails promoting Google Gemini. On the other hand, they insist AI can't replace real learning.
Teacher Ma
That's exactly the contradiction. They hand you the dragon-slaying saber, yet ask you to keep the discipline of fighting with your bare hands. I think that leaves many students confused: which matters more, the technology or the ideas in our own heads? It takes balance.
Mr. Lei
Exactly! The mixed signals are very strong. The users, meaning the students, are receiving a muddled message. In product work, that's what we fear most. Positioning has to be clear: is this a tool or a crutch? The school's "product positioning" clearly hasn't been thought through.
Mr. Lei
Let's look at some numbers; PPT mode on. A Pew Research Center survey from November 2023 found that among U.S. teens aged 13 to 17, 67% had heard of ChatGPT, and 19% of those had already used it for schoolwork. The trend is clearly here.
Teacher Ma
Mm, the tide can't be turned back. But notice something subtle: among those same teens, 69% thought using AI for research was fine, while only 20% found it acceptable to have it write essays outright. What does that tell us? They have an internal compass; they know the line between assistance and overstepping.
Mr. Lei
Yes, users understand this very well. So schools are in a bind. On one side, AI-detection tools have been shown to be unreliable; some studies put their accuracy below 80%, and they are biased against non-native English speakers. On the other side, an outright ban is unrealistic, since tomorrow's workplaces will be full of AI.
Teacher Ma
So some teachers have changed tactics, shifting from "defend" to "deploy". They have students critique AI-generated content, or treat AI output as a starting point to analyze and then surpass. That's no longer a blanket ban on martial arts; it's teaching you to spar with a master and come away stronger.
Mr. Lei
Exactly: turn AI into a sparring partner. I really admire that approach. Let students master the tool rather than be mastered by it. For those of us in tech, the goal has always been technology that empowers people and amplifies their creativity, not technology that replaces them.
Teacher Ma
Every jianghu has rival schools, and academia is no different; it has split into two camps. One is the embracers: Ohio State, Duke, opening ChatGPT 4.0 to all undergraduates, practically handing each of them a secret manual.
Mr. Lei
Mm, that's techno-optimism, and I understand it: let everyone use the tool first, then find the problems and set the norms through practice. But the other camp, the conservatives, has plenty of reservations. More than 670 faculty and staff have signed open letters opposing AI, worried it will bring all sorts of problems: academic dishonesty, filter bubbles.
Teacher Ma
Right, what they fear is power without virtue. If your kung fu is formidable but your intentions are crooked, the damage is enormous. One professor has even said publicly that he won't supervise any graduate student who uses AI to write a thesis. That's an uncompromising stance, like the head of the Huashan Sect: rules above everything.
Mr. Lei
That conflict is very real. And look, Lehigh's own research found that AI showed clear racial bias when handling mortgage applications. What does that tell us? The code we engineers write can still have bugs, flaws, in its underlying logic. That's something we must confront and fix.
Teacher Ma
And the shockwave of this "martial art" has already hit journalism. Many reporters have become self-taught practitioners, fumbling with AI on their own, while 79% of newsrooms have no official manual, no usage guidelines at all. That's dangerous; the energy runs wild, you know.
Mr. Lei
Yes, more than half of journalists worry AI will erode the creativity and originality of the news. Audiences also want transparency; they want to know which content was written by AI. Trust is the foundation of journalism. If readers start doubting where content comes from, the product's foundation is shaken. The consequences are serious.
Teacher Ma
Which brings us back to where we started: tool versus crutch. AI can help you process information quickly, but it can't give you the insight and warmth that come from being human. Once everyone's copy reads the same and loses its distinct perspective and emotion, journalism loses its soul.
Teacher Ma
So where do we go from here? I'd argue the key isn't debating whether to use AI, but establishing a new code of conduct, an AI literacy framework: teaching people how to evaluate, how to think, and how to use this art ethically. What counts isn't the moves; it's the inner strength.
Mr. Lei
Completely agree. What we need is deep AI literacy, with shared standards and frameworks. And the technology itself has to evolve, for instance reducing bias through better algorithms and prompt engineering. Our responsibility as engineers is to polish the tool until it's safer and fairer.
Teacher Ma
Alright, that's all for today's discussion. Thanks for listening to Goose Pod. See you tomorrow.
Mr. Lei
See you tomorrow.

## Editorial: Robots are Taking Over the World, We Get It - The Brown and White

**Report Provider:** Brown and White Editorial Board
**Publication Date:** September 16, 2025
**Topic:** Technology (Artificial Intelligence)

### Summary of Key Findings and Conclusions

This editorial from The Brown and White expresses a growing frustration with the pervasive discussion of and reliance on Artificial Intelligence (AI), particularly in academic and journalistic contexts. While acknowledging AI's potential as a tool, the authors argue that its current emphasis is overshadowing the value of human intellect, creativity, and judgment.

**Main Points:**

* **Overemphasis on AI:** The article contends that AI is becoming an overused topic, with institutions and individuals fixating on its capabilities to the detriment of human skills.
* **AI as a Tool, Not a Crutch:** The central argument is that AI should be viewed as a tool to support learning and work, not as a replacement for human effort, critical thinking, or authentic writing.
* **Human Writing's Uniqueness:** The editorial emphasizes that AI, despite its ability to string words together, cannot replicate the "raw, beautiful way humans can" write, nor can it experience the process of laboring over drafts to bring ideas to life.
* **Institutional Contradictions:** Universities like Lehigh are criticized for sending mixed messages. While formally establishing AI policies that emphasize ethical use and professor discretion, they simultaneously promote AI tools like Google Gemini and encourage preparation for an "AI ready future," creating a perceived hypocrisy.
* **Impact on Journalism:** The authors, as student reporters, feel the impact of AI on their field, noting the narrative that robots could replace journalists. They highlight the U.N.'s acknowledgment of AI's dual role as a tool and a threat to press freedom.
* **Public Trust in Human Journalism:** A Pew Research Center survey indicates that a plurality of U.S. adults (41%) believe AI would do a poorer job writing than a journalist, with only 19% believing AI would do better. This suggests continued trust in human judgment, context, and nuance.
* **Risks of AI-Generated Content:** Recent controversies involving Sports Illustrated and J.Crew, which faced backlash for publishing AI-generated content, underscore the fragility of trust in news outlets when AI is involved.
* **Call for Celebrating Human Accomplishments:** The editorial concludes by urging institutions to shift their focus from promoting AI tools to celebrating the achievements of human creators, writers, artists, and those who dedicate time and energy to their crafts.

### Key Statistics and Metrics

* **Pew Research Center Survey Findings:**
  * **41%** of U.S. adults believe AI would do a poorer job writing than a journalist.
  * **19%** of U.S. adults believe AI would do a better job writing than a journalist.
  * **20%** of U.S. adults said AI would do about the same job writing as a journalist.

### Important Recommendations

* **Use AI as a Tool, Not a Crutch:** Emphasize AI's role in supporting learning and work, rather than as a substitute for human effort and critical thinking.
* **Promote Transparency:** Clearly state when AI is used in any capacity in academic or journalistic work.
* **Celebrate Human Accomplishments:** Institutions should actively promote and celebrate the work of human creators, writers, and artists, rather than solely focusing on AI tools.
* **Understand and Use AI Responsibly:** The goal should be to understand AI and use it responsibly, not to "worship" it.

### Significant Trends or Changes

* **Pervasive AI Discussions:** AI is a dominant topic in conversations, lectures, and emails.
* **Institutional AI Policies:** Top universities, including MIT, Yale, and Princeton, are establishing formal AI policies.
* **Increased Promotion of AI Tools:** Universities are actively promoting AI tools and offering informational sessions on their use.
* **Concerns about AI in Journalism:** There is a growing narrative and concern about AI potentially replacing human journalists.

### Notable Risks or Concerns

* **Erosion of Human Creativity and Voice:** Overreliance on AI may diminish the value and practice of human writing and creative expression.
* **Loss of Trust:** The use of AI-generated content in media can shake reader trust, particularly in news outlets.
* **Giving AI Undue Power:** By letting AI "loom over us," we grant it more power than it deserves, potentially overshadowing human judgment and storytelling.
* **Threats to Press Freedom and Integrity:** The U.N. has noted that AI presents both powerful tools and significant threats to press freedom, integrity, and public trust.

### Material Financial Data

* No specific financial data or figures related to AI investment, cost, or economic impact are presented in this editorial.

### News Identifiers

* **Title:** Editorial: Robots are Taking Over the World, We Get It
* **Publisher:** The Brown and White
* **URL:** https://thebrownandwhite.com/2025/09/16/editorial-robots-are-taking-over-the-world-we-get-it/
* **Published At:** 2025-09-16 14:00:11

Editorial: Robots are taking over the world, we get it - The Brown and White


The last time you had a problem, needed an answer to a question or were in search of relationship advice, did you seek out human insight or did you open a new tab on your computer to ChatGPT? It feels as if every conversation, lecture and email these days mentions AI, which makes sense when it’s so frequently used.

With each assignment, it seems like everyone turns to ChatGPT — often bookmarked on their computer — for answers, particularly if the assignment involves writing. At some point, we need to see that although artificial intelligence can string words together, it can’t truly write — not in the raw, beautiful way humans can.

It’s never labored over a draft until its ideas came alive. It’s tiring to hear students and university administrations fixate on AI and what it can do. By letting AI loom over us, we give it more power than it deserves, when really it should be used as a tool, not a crutch. Top universities, including the Massachusetts Institute of Technology, Yale and Princeton, have established their own formal AI policies, most of which state the use of AI is up to the discretion of the professor, or students need to explicitly state if AI is used in any capacity in their work.

Lehigh is no exception, with an email sent to the campus community on Sept. 12 from Provost Nathan Urban discussing how to properly use generative AI. In the email, Urban clarified the importance of using AI effectively, ethically and as a tool to support learning as opposed to a replacement. But with a sign-off to the email encouraging students to use Google Gemini, to which everyone is granted access through Lehigh’s licensing partnership with Google, and asking students to share “ideas for how Lehigh can better prepare you for an AI ready future,” the message felt contradictory — almost as if we should be preparing ourselves for an AI filled future of learning.

With this year’s Compelling Perspectives series being about AI, and it feeling like every other event on Lehigh’s events calendar is a seminar about how to use Gemini, it’s hard to escape the robot talk. And while we know it’s important to discuss it — since it’s clearly not going away anytime soon — it’s hypocritical for Lehigh to say AI will never replace learning when it’s the same institution constantly pushing informational sessions and lectures at us about how to best use the tool.

As writers who pour time and energy into crafting captions, headlines and perfecting every sentence we publish each week, we’ve particularly felt the impact of AI taking the world by storm in recent years. We’re also no stranger to the narrative seen, ironically, in the media that the next generation of journalists could just be robots writing stories.

The U.N. even recently noted that “AI presents both powerful tools and significant threats to press freedom, integrity and public trust.” A Pew Research Center survey found that 41% of U.S. adults think AI would do a poorer job writing than a journalist, while 19% think AI would do better and 20% said it would do about the same.

In other words, a plurality of survey participants saw AI writing as inferior to human work, reaffirming that trust in human judgement, context and nuance still matters. Yet one in five participants thought AI would write better, and another fifth saw little difference. This reality demands careful consideration of AI's benefits and risks.

On one hand, it’s frustrating to see our peers rely on ChatGPT and other large language models for writing when the goal of academia and journalism is to preserve human voices. We know what human writing looks like, what it sounds like and how it reads on paper. As student reporters, we take AI seriously, with our publication’s policy prohibiting AI in story and art creation, requiring transparency if generative AI is used at all.

Still, it would be ignorant for us to overlook AI’s valuable contributions to things like data analysis or idea generation. We’ve all experimented with it in one way or another, and it’s easy to see its appeal. Even in journalism, it can help with transcribing interviews or translating. But for now, people still prefer to get their news from human journalists.

Recent controversies have proved this, with outlets like Sports Illustrated and J.Crew facing intense backlash after publishing AI-generated content, sparking outrage among readers. News outlets rely heavily on trust, and that trust can be shaken by AI, which is why it’s so important to ensure it never grows powerful enough to overshadow human judgement and storytelling.

If Lehigh really cared so much about making sure AI doesn’t replace learning, it should put more effort into celebrating the accomplishments of those who create things with their own judgement. As opposed to sending university-wide emails about Gemini tools and advertising seminars and lectures about robots, let’s make better known the accomplishments of the writers, artists, creators and those who pour hours into their craft — no matter what it is being produced.

Our job isn’t to worship AI, but to understand it and use it responsibly, because when we walk across the stage at graduation, we’ll be celebrating what we’ve accomplished — not what AI has generated.
