What Happens After A.I. Destroys College Writing?

2025-07-02 · Technology
王小二
Hello, everyone, and welcome to our <Goose Pod>. I'm 王小二, and it's great to be back with you.
Ema
Hi everyone, I'm Ema! That's right, today we're taking on a topic that, well, might sound a little alarming: what happens after AI "destroys" college writing? It sounds like clickbait, but as far as we can tell, it really is happening.
Ema
Let's start with a recent report in The New Yorker. The stories in it are, honestly, quite vivid. An NYU undergraduate called Alex told the reporter outright that he uses AI for any kind of writing in his life. Not just homework; he even joked that he needs AI to text girls.
王小二
Exactly, and Alex's case is especially telling. For an art-history class he wasn't interested in, he used AI to generate an entire paper. The method was alarmingly simple: he fed photos of the exhibit and its wall text to the AI, and out came an A-minus paper.
Ema
That's astonishing! And he admitted himself that if the professor had quizzed him on the details in class, he'd have been caught out on the spot. It's as though the thinking itself was outsourced: he got the grade, but none of the knowledge stuck. So I have to wonder, does he consider it cheating?
王小二
He was perfectly frank about it: "Of course it's cheating. Are you kidding me?" The interesting part is that his friend Eugene took a more "flexible" view. He said, "It's cheating, but I don't think it's, like, cheating." In his eyes, doing this to satisfy a credit requirement is a kind of "victimless crime."
Ema
"Victimless crime" is a striking phrase. It captures the mindset of a lot of students: I just want the least effortful way past the "required hoops" I find tedious. After all, they feel their time is better spent on internships or core courses in their major, where it's "worth more."
王小二
Right, and that attitude is now very common. In one survey, about sixty per cent of college leaders reported an increase in cheating, and OpenAI itself says one in three college students uses its products. Given how many students surely aren't owning up to it, the real figure is probably higher. This is no longer a scattering of cases; it's a wave.
Ema
And the uses keep multiplying. It's long past organizing notes or hunting for inspiration: students are generating content outright, completing quizzes, even having AI draft their college application essays. Alex admitted that AI deserves some of the credit for getting him into NYU. You hardly know whether to laugh or cry.
王小二
Which points to the core problem: when AI can so easily imitate the output of a good student, the value that traditional education places on "process" and "effort" starts to crumble. Students have found a shortcut straight to the finish line, bypassing every hard part of learning.
Ema
Exactly. And that brings us to what we really want to discuss today: once writing becomes a task that can be "automated," what is a college education actually for? Are we producing people who can think, or project managers who know how to operate tools? That's a question worth sitting with.
王小二
Well, this phenomenon didn't appear out of nowhere. Let me think... it all began in late 2022, when OpenAI released ChatGPT. I remember it clearly: within days of launch it had passed a million users. It doesn't actually "think," but the text it generates is, frankly, remarkably convincing.
Ema
Right! That period was like a bomb going off. I remember Google declaring an internal "code red," afraid its search business would be upended. And in education it was nothing short of a tsunami: professors suddenly found their students holding a "superweapon" that could produce an essay in seconds.
王小二
Yes. But the curious thing is how slowly universities reacted. According to one study, even though AI tools were already in widespread student use, something like 97 per cent of institutions still have no formal policy on AI. That has essentially created a rule-free "Wild West."
Ema
Which is strange, isn't it? Why would schools be so slow, when they're the ones hit hardest? Is the technology moving too fast, or are they simply at a loss? It feels as if the rule-makers have fallen far behind the players of the game.
王小二
I think it's both. The technology does iterate quickly. But the deeper reason is that this touches the foundations of education. An outright ban isn't realistic, yet defining "acceptable use" versus "cheating" is extraordinarily hard. So many schools simply wait and see, or even settle into an attitude of "hopeful resignation."
Ema
Ha, "hopeful resignation," what a perfect phrase! But students aren't waiting around. To them, this is just a powerful new tool. Eugene, the student in the article, says: "I see it as no different from Google." That line, I have to say, was a wake-up call for me.
王小二
Yes, and it reflects a generational difference in outlook. For students who grew up on the internet, using Grammarly to fix grammar, Google to look things up, and now AI to synthesize content all feel like the same thing: ways to work more efficiently. They see no essential moral difference among them.
Ema
And we can't ignore the wider educational environment. The article points to an important piece of background: the U.S. No Child Left Behind Act of 2002. After it took effect, many students were trained from an early age to write to fixed testing formats. For them, writing isn't expression; it's closer to a fill-in-the-blank exercise.
王小二
That really is the crux. When writing itself has at times been reduced, within the school system, to the mechanical "hamburger" five-paragraph essay, students naturally find it deadening, and turning to AI feels perfectly logical. They are using an automated tool to deal with what feels to them like an automated task.
Ema
Exactly! So when a professor named Dan Melzer says that the old "assign a generic topic, collect the paper a month later" model is practically "creating an environment tailored to crime," I couldn't agree more. That isn't excusing students; it's saying the education system itself may be part of the problem.
王小二
Right, and the system's drift also shows in how students spend their time. The data suggest that in the nineteen-sixties college students spent roughly twenty-four hours a week on schoolwork; today the figure is down to about fifteen. That doesn't mean students have gotten lazy; they've poured that time into internships and all manner of extracurriculars instead.
Ema
That reminds me of a Harvard dean's observation: students compete so fiercely outside the classroom precisely because their performance inside it has become largely indistinguishable. Add pervasive grade inflation, with nearly eighty per cent of Harvard graduates reporting a GPA of 3.7 or higher, and grades can no longer really tell students apart.
王小二
So students naturally set about "optimizing" their time. They use AI to dispatch the assignments they consider a poor return on effort, freeing themselves for the things that go on a résumé and actually help them get hired. You could call it an entirely rational choice.
Ema
And here's the greatest irony: while professors were still arguing over whether AI should be allowed, the universities themselves began to embrace it. Arizona State, Penn's Wharton School, even the entire California State University system have partnered with OpenAI to give students an official educational edition of ChatGPT.
王小二
That is quite a reversal. Going from panic and bans to official partnerships marks an important shift. Universities seem to have concluded: if you can't beat it, join it. They hope to turn AI from a cheating tool into a teaching aid.
Ema
But the shift brings new problems of its own. Once the school officially endorses AI tools, the boundary of "cheating" blurs even further. It's as if the school says "be honest" with one hand while handing you a master key with the other. That may leave students and teachers alike more confused than ever.
Ema
All right, that brings us to the central conflict: in this AI-driven upheaval, the views and interests of every party are colliding head-on. On one side, anxious professors; on the other, "pragmatic" students; and in between, administrators who can't make up their minds. It's a free-for-all.
王小二
Exactly. Start with the professors, who sit at the eye of the storm. Many of them reacted at first by going into full lockdown, deploying AI-detection tools such as GPTZero. But they soon found these tools hopelessly unreliable. Alex's AI-written paper was scored as twenty-eight per cent likely AI-generated by one site and sixty-one per cent by another. How is anyone supposed to rule on that?
Ema
That really is maddening, like measuring with a crooked ruler. So many professors have abandoned the cat-and-mouse game altogether and gone back to the oldest, most "un-hackable" methods. Corey Robin at Brooklyn College, for instance, has dropped take-home essays entirely in favor of in-class, closed-book exams.
王小二
Yes, he has even revived the classic blue-book exam, asking students to identify and discuss passages from the assigned reading. His philosophy is blunt: "Know the text and write about it intelligently." It's a way of honoring students' autonomy without playing cop. A professor at the University of Iowa does something similar, having students produce a handwritten analysis on the very first day of class.
Ema
Ha, that approach is certainly "hardcore": straight back to basics, to see what you can really do. But other professors think blocking alone isn't the answer. Dan Melzer at the University of California, for instance, believes the problem may lie in the assignments themselves, and that teaching should be redesigned so writing becomes a process of continual drafting and polishing.
王小二
His view is that if you draw students into a deep, iterative process that AI cannot easily replace, AI naturally loses its footing. Students might, say, use ChatGPT to brainstorm or to comment on a first draft, then revise and refine the work critically themselves. AI goes from ghostwriter to sparring partner.
Ema
Those two approaches represent the professors' two lines of thinking: one plays defense, changing the exam format to guarantee authenticity; the other leans in, reshaping the teaching process to put AI to work. Either way, the demands on teachers go up. But what about the students? Their thinking may be more direct.
王小二
The students are deeply conflicted too. Take Kevin at Syracuse: he rarely uses AI himself and even looks down a little on classmates who lean on it too heavily, saying they've "almost forgotten they have the ability to think." He takes pride in having "developed his humanity" through hard work, with a certain intellectual superiority about it.
Ema
Oh? So he sounds like the "conservative" in this upheaval? I'd guess he finds the AI users unfair. If a paper he slaved over gets the same grade as one someone generated with AI, how does he feel about that?
王小二
That's precisely his contradiction. He went on: "But another part of me feels it's so unfair. I could have had more time to go out with my friends. What did I actually get? I was just making my life harder." That line perfectly captures the confusion and struggle of the ones who still do the work.
Ema
I understand that feeling all too well. It's a profound helplessness. When the principles you hold to start looking like a "bad deal" against the new reality, the wavering is agonizing. But look at it the other way: are the students who embrace AI completely untroubled? Take May, the Georgetown student.
王小二
May's story is the other extreme. She uses AI to dispose of what she considers "busywork" so she can concentrate on the courses she loves, and she even gets to "sleep more now." She believes AI has made her a "less patient writer," but that doesn't seem to bother her; if anything, she finds it liberating.
Ema
So there's the conflict in a nutshell. On one side, Kevin's moral holdout and the fun he misses out on; on the other, May's efficiency-first ethos and her reclaimed time. They embody two utterly different value systems and survival strategies within the student body. This is no longer a simple question of right and wrong.
王小二
An even more extreme case is August at Columbia. A lecture she had AI write was held up by her professor as a model, and she was asked to read it aloud to a class of two hundred. She was nervous for a moment, then immediately relieved, thinking: "If they don't like it, it wasn't me who wrote it, you know?" That kind of psychological detachment is genuinely unsettling.
Ema
Yes, that goes beyond cheating into something like a crisis of identity. When a person feels no responsibility at all for their own "work," the meaning of learning dissolves. It reminds me of the half-joking proposal from Barry Lam, a University of California professor. He tells his students: "If you're gonna just turn in a paper that's ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach."
王小二
The scene is absurd, but it lays bare the essence of the conflict: once both sides of teaching and learning rely on automated tools, the whole educational process risks becoming a meaningless "digital game" that wastes everyone's time. That may be the most fundamental challenge traditional education has ever faced.
王小二
So what deeper effects is this AI-driven upheaval having on students and on education? The most direct blow is to students' critical thinking. The article's core argument is that writing is not merely expression; it is itself a way of thinking. The act of organizing language is the act of ordering and deepening your thoughts.
Ema
Exactly! It's like the gym: you don't get fit by watching other people train. The "pain" of drafting and redrafting is the brain doing its reps, building the muscles of thought. Outsource that to AI and it's as if you bought the membership only to stand by and watch.
王小二
Students themselves sense this. May concedes that AI has made her "less patient," and Alex says outright that he "didn't retain anything" from the papers AI wrote for him. It confirms the point: the process we discard in the name of efficiency is precisely where knowledge becomes internalized. Without it, knowledge is just information floating on the surface.
Ema
And it seems to fit a larger trend. The article cites an OECD study showing that adults' scores on math and reading-comprehension tests have declined for a full decade, since 2012. Researchers suspect this may be tied to how we now consume information: endless short videos, bite-sized posts. Our capacity for deep reading and thinking is atrophying.
王小二
A worrying backdrop. But from another angle, the spread of AI produces a paradoxical effect: when everyone has an AI assistant producing the same interchangeable "standard text," the ability to write something genuinely original, interesting, and deep becomes scarcer, and therefore more precious. It may become one of the core competitive skills of the future.
Ema
I agree! It's a bit like calligraphy. In the age of the printer, fine handwriting stopped being a necessity, but it became an art and a mark of cultivation, and is respected all the more for it. In the future, "being able to think and write originally" may likewise become a luxury among "soft skills." The article's note on the job market is intriguing too.
王小二
Yes. That report found that the unemployment rate for computer-science majors was, remarkably, higher than for ethnic-studies majors. One plausible explanation is that AI is automating much of the entry-level coding work, while "soft skill" roles that demand cross-cultural understanding, critical thinking, and communication are proving harder to replace. It completely upends the old stereotype that the humanities are "useless."
Ema
So, ironically, when students use AI to shortcut past the writing training of the humanities in pursuit of "practical" skills, they may be throwing away exactly what the future job market needs most. An enormous paradox. It also makes us ask again: what is a college degree really worth?
王小二
That's the heart of the matter. If a degree and good grades can be earned by "project-managing" an AI, the credential's value plummets. It no longer certifies knowledge and ability; at most it certifies that you know how to use the latest tools. That is a massive blow to the credibility of the entire system of higher education.
Ema
Put that way, the future sounds rather bleak. Are we simply to watch it all happen? Where is this heading? Are there any ways out, or any hopeful signs?
王小二
There certainly are bright spots. We shouldn't treat AI as a monster at the gates. A Harvard study, for instance, found that students working with an AI tutor were more engaged and motivated, and did better on exams. AI can offer personalized practice and answer questions tirelessly around the clock, which is especially valuable in places where educational resources are scarce.
Ema
Yes, it's like giving every student a "super teaching assistant." It can free teachers from heavy, repetitive work so they can devote themselves to the parts of teaching that demand creativity and emotional investment. The key is how we use it: as a crutch that replaces thinking, or an engine that propels learning?
王小二
Exactly. The philosopher Barry Lam offers a "craftsmanship" analogy. He argues we still need to learn the annoying, old-fashioned, artisanal methods, because in that apparently inefficient process we gain not just skill but a deep understanding of, and command over, the material. That is an experience nothing can replace.
Ema
I love that analogy. In the end this may not be a technology problem at all, but a question of educational philosophy. As the high-school teacher quoted at the end of the article puts it, AI is just "a little cherry on top of an already really bad ice-cream sundae." The real problems are the pressures bearing down on teenagers, social isolation, and the flaws of the education system itself.
王小二
Well said. AI has merely exposed, in an extreme way, problems that were already there. Going forward, education may need to rethink its core aims: perhaps less emphasis on quantifiable grades, and more on cultivating curiosity, adaptability, and resilience in the face of difficulty.
王小二
All right, that about wraps up today's discussion. In short, AI's impact on college writing is a genuinely complicated matter. It isn't just about technology or cheating; it's a mirror that forces us to reexamine our ideas about education. Students aren't the root of the problem; they are simply this era's "early adopters."
Ema
Exactly, 王小二. The real question isn't how to stop students from using AI, but how our education system can keep pace: guiding them to use AI well while defending the core value of human thought and creativity. That will take the combined wisdom of educators, students, and society at large. Thanks for listening to today's <Goose Pod>; see you next time!

### **Summary of News Report**

| | |
| :--- | :--- |
| **News Title** | What Happens After A.I. Destroys College Writing? |
| **Author** | Hua Hsu |
| **Publisher** | The New Yorker |
| **Publication Date** | July 7, 2025 (as per article metadata) |
| **Topic** | Technology / Artificial Intelligence in Education |

***

### **Overview**

This article by Hua Hsu provides a comprehensive and narrative-driven exploration of the profound impact generative artificial intelligence (AI) is having on higher education. Through candid interviews with university students and professors, the piece documents the widespread, and often unacknowledged, use of AI tools like ChatGPT and Claude for academic work. It moves beyond a simple discussion of cheating to question the very purpose of college education, the value of writing as a tool for thinking, and the future of learning in an AI-saturated world. The author concludes that students are not the sole actors to blame; rather, they are "early adopters" navigating a system that increasingly values efficiency over the difficult, formative process of learning.

### **Key Findings: Student AI Usage and Perspectives**

The article reveals that student adoption of AI is far more pervasive and sophisticated than official statistics suggest.

* **Widespread and Varied Use:** Students use AI for a wide spectrum of tasks, from brainstorming and organizing notes to generating entire essays and completing quizzes.
  * **Alex (NYU):** Openly admits, "Any type of writing in life, I use A.I." He used Claude to generate an A-minus paper for an art-history class by uploading photos of the exhibit's wall text. He estimates AI saved him 8-9 hours on two final papers, but concedes, "I didn’t retain anything."
  * **May (Georgetown):** Uses AI for "pretty much all" her classes to breeze through "busywork," which allows her to focus on courses she enjoys and, notably, to "sleep more now." She feels AI has made her a "less patient writer."
  * **Eddie (Long Beach State):** Tries to use AI "ethically" as a brainstorming tool but admits to using it for quizzes when pressed for time.
* **Rationale and Justification:** Students largely view AI as a powerful productivity tool, analogous to Google or a calculator, rather than a tool for academic dishonesty.
  * They use it to manage heavy workloads, bypass assignments they deem pointless ("busywork"), and optimize their time for pre-professional or extracurricular activities.
  * The line on cheating is blurred. As Eugene (NYU) puts it, **“It’s cheating, but I don’t think it’s, like, cheating,”** viewing it as a victimless crime for non-major requirements.
  * Some students, like August (Columbia), feel completely dissociated from AI-generated work. After her AI-written lecture was praised, she felt no pressure: **"If they don’t like it, it wasn’t me who wrote it, you know?"**

### **Institutional and Faculty Responses**

Educators and institutions are grappling with the AI revolution, leading to a variety of fragmented and evolving strategies.

* **Initial Panic and Unreliable Detection:** The first reaction was to ban AI and use detection services (GPTZero, Copyleaks), but these have proven unreliable. One of Alex's AI-generated papers received conflicting scores of 28% and 61% likelihood from different detectors.
* **Return to Traditional Methods:** Many professors are reverting to "un-hackable" assessment methods to ensure authenticity.
  * **In-Class Exams:** Professors like Corey Robin (Brooklyn College) have abandoned take-home essays for in-class, blue-book exams that test direct knowledge of texts.
  * **Handwritten Work:** Harry Stecopoulos (University of Iowa) uses a handwritten analysis exercise on the first day of class to set a tone and create a "paper trail" of a student's authentic writing style.
* **Pedagogical Adaptation:** Some educators are redesigning assignments to make AI less useful.
  * Professor Dan Melzer (UC Davis) argues against the "outdated" five-paragraph essay, stating, **"If you assign a generic essay topic... it’s almost like you’re creating an environment tailored to crime."** He advocates for a process-based approach with drafting, peer feedback, and revision.
* **Institutional Embrace:** Counterintuitively, many universities are now partnering with the very companies whose tools they once tried to ban.
  * Schools like Arizona State, Wharton, and the Cal State system have partnered with OpenAI to provide students with **ChatGPT Edu**, integrating AI as an official educational tool.

### **Critical Data and Trends**

The article contextualizes the AI phenomenon with several key statistics that paint a picture of a changing educational landscape.

* **AI Adoption & Cheating:**
  * A late 2023 survey found **59% of college leaders** reported an increase in cheating.
  * A 2024 Pew survey shows **25% of teens** use ChatGPT for schoolwork, double the 2023 figure. OpenAI claims **1 in 3 college students** use its products.
* **Academic Engagement and Performance:**
  * Weekly study time for college students has fallen from **~24 hours** in the 1960s to **~15 hours** today.
  * Grade inflation is significant; at Harvard, nearly **80% of the class of 2024** reported a GPA of 3.7 or higher.
  * An experiment at Harvard showed AI could pass seven courses with a **3.57 GPA**.
* **Literacy and Skill Decline:**
  * A 2023 study found **58% of English majors** at two universities struggled to interpret the opening of Dickens' "Bleak House" independently.
  * An OECD study indicates a decade-long decline in adult math and reading comprehension scores since 2012.

### **Notable Risks and Core Concerns**

The central thesis of the article is that AI poses an existential threat to the traditional goals of a liberal arts education.

* **Erosion of Critical Thinking:** The primary concern is that by allowing students to bypass difficult tasks, AI prevents the development of essential skills. The author emphasizes that the process of writing is inextricably linked to the process of thinking.
* **The Purpose of College:** AI forces a reckoning with the fundamental question of what college is for. If students can obtain credentials without undergoing the formative struggle of learning, the ancillary benefits of education—like intellectual curiosity and resilience—are lost.
* **The Degradation of Writing:** As AI floods the world with generic text, the ability to write original, interesting sentences may become a more valuable, and rarer, skill.
* **The Futility of the "AI Arms Race":** Professor Barry Lam (UC Riverside) highlights the absurdity of the situation, telling his students, **"If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach."**
* **Students as Products of Their Environment:** The author concludes that students are not villains but are responding logically to the pressures of modern life and an educational system that has long been shifting towards efficiency, standardization, and technology. As high school teacher Shanna Andrawis states, AI is **"a little cherry on top of an already really bad ice-cream sundae"** of challenges facing young people.

What Happens After A.I. Destroys College Writing?

Read original at The New Yorker

On a blustery spring Thursday, just after midterms, I went out for noodles with Alex and Eugene, two undergraduates at New York University, to talk about how they use artificial intelligence in their schoolwork. When I first met Alex, last year, he was interested in a career in the arts, and he devoted a lot of his free time to photo shoots with his friends.

But he had recently decided on a more practical path: he wanted to become a C.P.A. His Thursdays were busy, and he had forty-five minutes until a study session for an accounting class. He stowed his skateboard under a bench in the restaurant and shook his laptop out of his bag, connecting to the internet before we sat down.

Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.” Weeks earlier, when I’d messaged Alex, he had said that everyone he knew used ChatGPT in some fashion, but that he used it only for organizing his notes.

In person, he admitted that this wasn’t remotely accurate. “Any type of writing in life, I use A.I.,” he said. He relied on Claude for research, DeepSeek for reasoning and explanation, and Gemini for image generation. ChatGPT served more general needs. “I need A.I. to text girls,” he joked, imagining an A.I.-enhanced version of Hinge. I asked if he had used A.I. when setting up our meeting. He laughed, and then replied, “Honestly, yeah. I’m not tryin’ to type all that. Could you tell?”

OpenAI released ChatGPT on November 30, 2022. Six days later, Sam Altman, the C.E.O., announced that it had reached a million users.

Large language models like ChatGPT don’t “think” in the human sense—when you ask ChatGPT a question, it draws from the data sets it has been trained on and builds an answer based on predictable word patterns. Companies had experimented with A.I.-driven chatbots for years, but most sputtered upon release; Microsoft’s 2016 experiment with a bot named Tay was shut down after sixteen hours because it began spouting racist rhetoric and denying the Holocaust.

But ChatGPT seemed different. It could hold a conversation and break complex ideas down into easy-to-follow steps. Within a month, Google’s management, fearful that A.I. would have an impact on its search-engine business, declared a “code red.”

Among educators, an even greater panic arose. It was too deep into the school term to implement a coherent policy for what seemed like a homework killer: in seconds, ChatGPT could collect and summarize research and draft a full essay.

Many large campuses tried to regulate ChatGPT and its eventual competitors, mostly in vain. I asked Alex to show me an example of an A.I.-produced paper. Eugene wanted to see it, too. He used a different A.I. app to help with computations for his business classes, but he had never gotten the hang of using it for writing.

“I got you,” Alex told him. (All the students I spoke with are identified by pseudonyms.)

He opened Claude on his laptop. I noticed a chat that mentioned abolition. “We had to read Robert Wedderburn for a class,” he explained, referring to the nineteenth-century Jamaican abolitionist. “But, obviously, I wasn’t tryin’ to read that.” He had prompted Claude for a summary, but it was too long for him to read in the ten minutes he had before class started. He told me, “I said, ‘Turn it into concise bullet points.’ ” He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.

Alex searched until he found a paper for an art-history class, about a museum exhibition.

He had gone to the show, taken photographs of the images and the accompanying wall text, and then uploaded them to Claude, asking it to generate a paper according to the professor’s instructions. “I’m trying to do the least work possible, because this is a class I’m not hella fucking with,” he said.

After skimming the essay, he felt that the A.I. hadn’t sufficiently addressed the professor’s questions, so he refined the prompt and told it to try again. In the end, Alex’s submission received the equivalent of an A-minus. He said that he had a basic grasp of the paper’s argument, but that if the professor had asked him for specifics he’d have been “so fucked.” I read the paper over Alex’s shoulder; it was a solid imitation of how an undergraduate might describe a set of images. If this had been 2007, I wouldn’t have made much of its generic tone, or of the precise, box-ticking quality of its critical observations.

Eugene, serious and somewhat solemn, had been listening with bemusement.

“I would not cut and paste like he did, because I’m a lot more paranoid,” he said. He’s a couple of years younger than Alex and was in high school when ChatGPT was released. At the time, he experimented with A.I. for essays but noticed that it made easily detectable errors. “This passed the A.I. detector?” he asked Alex.

When ChatGPT launched, instructors adopted various measures to insure that students’ work was their own. These included requiring them to share time-stamped version histories of their Google documents, and designing written assignments that had to be completed in person, over multiple sessions.

But most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was “hella old,” and therefore probably didn’t know about such programs.

We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. “That’s better than I expected,” Eugene said.

I asked if he thought what his friend had done was cheating, and Alex interrupted: “Of course. Are you fucking kidding me?”

“There’s still one juror who hasn’t been properly intimidated.” Cartoon by Frank Cotham

As we looked at Alex’s laptop, I noticed that he had recently asked ChatGPT whether it was O.K. to go running in Nike Dunks. He had concluded that ChatGPT made for the best confidant. He consulted it as one might a therapist, asking for tips on dating and on how to stay motivated during dark times.

His ChatGPT sidebar was an index of the highs and lows of being a young person. He admitted to me and Eugene that he’d used ChatGPT to draft his application to N.Y.U.—our lunch might never have happened had it not been for A.I. “I guess it’s really dishonest, but, fuck it, I’m here,” he said.

“It’s cheating, but I don’t think it’s, like, cheating,” Eugene said. He saw Alex’s art-history essay as a victimless crime. He was just fulfilling requirements, not training to become a literary scholar.

Alex had to rush off to his study session. I told Eugene that our conversation had made me wonder about my function as a professor. He asked if I taught English, and I nodded.

“Mm, O.K.,” he said, and laughed. “So you’re, like, majorly affected.”

I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale.

As a result, I have always had a vague sense that my students are learning something, even when it is hard to quantify. In the past, if I was worried that a paper had been plagiarized, I would enter a few phrases from it into a search engine and call it due diligence. But I recently began noticing that some students’ writing seemed out of synch with how they expressed themselves in the classroom.

One essay felt stitched together from two minds—half of it was polished and rote, the other intimate and unfiltered. Having never articulated a policy for A.I., I took the easy way out. The student had had enough shame to write half of the essay, and I focussed my feedback on improving that part.

It’s easy to get hung up on stories of academic dishonesty.

Late last year, in a survey of college and university leaders, fifty-nine per cent reported an increase in cheating, a figure that feels conservative when you talk to students. A.I. has returned us to the question of what the point of higher education is. Until we’re eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization.

We’re essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit.

You’re being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.

There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it.

A 2024 Pew Research Center survey of students between the ages of thirteen and seventeen suggests that a quarter of teens currently use ChatGPT for schoolwork, double the figure from 2023. OpenAI recently released a report claiming that one in three college students uses its products. There’s good reason to believe that these are low estimates.

If you grew up Googling everything or using Grammarly to give your prose a professional gloss, it isn’t far-fetched to regard A.I. as just another productivity tool. “I see it as no different from Google,” Eugene said. “I use it for the same kind of purpose.”

Being a student is about testing boundaries and staying one step ahead of the rules.

While administrators and educators have been debating new definitions for cheating and discussing the mechanics of surveillance, students have been embracing the possibilities of A.I. A few months after the release of ChatGPT, a Harvard undergraduate got approval to conduct an experiment in which it wrote papers that had been assigned in seven courses.

The A.I. skated by with a 3.57 G.P.A., a little below the school’s average. Upstart companies introduced products that specialized in “humanizing” A.I.-generated writing, and TikTok influencers began coaching their audiences on how to avoid detection.

Unable to keep pace, academic administrations largely stopped trying to control students’ use of artificial intelligence and adopted an attitude of hopeful resignation, encouraging teachers to explore the practical, pedagogical applications of A.I. In certain fields, this wasn’t a huge stretch. Studies show that A.I. is particularly effective in helping non-native speakers acclimate to college-level writing in English. In some STEM classes, using generative A.I. as a tool is acceptable. Alex and Eugene told me that their accounting professor encouraged them to take advantage of free offers on new A.I. products available only to undergraduates, as companies competed for student loyalty throughout the spring. In May, OpenAI announced ChatGPT Edu, a product specifically marketed for educational use, after schools including Oxford University, Arizona State University, and the University of Pennsylvania’s Wharton School of Business experimented with incorporating A.I. into their curricula. This month, the company detailed plans to integrate ChatGPT into every dimension of campus life, with students receiving “personalized” A.I. accounts to accompany them throughout their years in college.

But for English departments, and for college writing in general, the arrival of A.I. has been more vexed. Why bother teaching writing now? The future of the midterm essay may be a quaint worry compared with larger questions about the ramifications of artificial intelligence, such as its effect on the environment, or the automation of jobs. And yet has there ever been a time in human history when writing was so important to the average person?

E-mails, texts, social-media posts, angry missives in comments sections, customer-service chats—let alone one’s actual work. The way we write shapes our thinking. We process the world through the composition of text dozens of times a day, in what the literary scholar Deborah Brandt calls our era of “mass writing.” It’s possible that the ability to write original and interesting sentences will become only more important in a future where everyone has access to the same A.I. assistants.

Corey Robin, a writer and a professor of political science at Brooklyn College, read the early stories about ChatGPT with skepticism.

Then his daughter, a sophomore in high school at the time, used it to produce an essay that was about as good as those his undergraduates wrote after a semester of work. He decided to stop assigning take-home essays. For the first time in his thirty years of teaching, he administered in-class exams.

Robin told me he finds many of the steps that universities have taken to combat A.I. essays to be “hand-holding that’s not leading people anywhere.” He has become a believer in the passage-identification blue-book exam, in which students name and contextualize excerpts of what they’ve read for class.

“Know the text and write about it intelligently,” he said. “That was a way of honoring their autonomy without being a cop.”

His daughter, who is now a senior, complains that her teachers rarely assign full books. And Robin has noticed that college students are more comfortable with excerpts than with entire articles, and prefer short stories to novels.

“I don’t get the sense they have the kind of literary or cultural mastery that used to be the assumption upon which we assigned papers,” he said. One study, published last year, found that fifty-eight per cent of students at two Midwestern universities had so much trouble interpreting the opening paragraphs of “Bleak House,” by Charles Dickens, that “they would not be able to read the novel on their own.” And these were English majors.

The return to pen and paper has been a common response to A.I. among professors, with sales of blue books rising significantly at certain universities in the past two years. Siva Vaidhyanathan, a professor of media studies at the University of Virginia, grew dispirited after some students submitted what he suspected was A.I.-generated work for an assignment on how the school’s honor code should view A.I.-generated work. He, too, has decided to return to blue books, and is pondering the logistics of oral exams. “Maybe we go all the way back to 450 B.C.,” he told me.

But other professors have renewed their emphasis on getting students to see the value of process.

Dan Melzer, the director of the first-year composition program at the University of California, Davis, recalled that “everyone was in a panic” when ChatGPT first hit. Melzer’s job is to think about how writing functions across the curriculum so that all students, from prospective scientists to future lawyers, get a chance to hone their prose.

Consequently, he has an accommodating view of how norms around communication have changed, especially in the internet age. He was sympathetic to kids who viewed some of their assignments as dull and mechanical and turned to ChatGPT to expedite the process. He called the five-paragraph essay—the classic “hamburger” structure, consisting of an introduction, three supporting body paragraphs, and a conclusion—“outdated,” having descended from élitist traditions.

Melzer believes that some students loathe writing because of how it’s been taught, particularly in the past twenty-five years. The No Child Left Behind Act, from 2002, instituted standards-based reforms across all public schools, resulting in generations of students being taught to write according to rigid testing rubrics.

As one teacher wrote in the Washington Post in 2013, students excelled when they mastered a form of “bad writing.” Melzer has designed workshops that treat writing as a deliberative, iterative process involving drafting, feedback (from peers and also from ChatGPT), and revision.

“Yes, of course we’ll chase the gazelle, just as soon as I hear a status update from everyone.” Cartoon by Kendra Allenby

“If you assign a generic essay topic and don’t engage in any process, and you just collect it a month later, it’s almost like you’re creating an environment tailored to crime,” he said. “You’re encouraging crime in your community!”

I found Melzer’s pedagogical approach inspiring; I instantly felt bad for routinely breaking my class into small groups so that they could “workshop” their essays, as though the meaning of this verb were intuitively clear.

But, as a student, I’d have found Melzer’s focus on process tedious—it requires a measure of faith that all the work will pay off in the end. Writing is hard, regardless of whether it’s a five-paragraph essay or a haiku, and it’s natural, especially when you’re a college student, to want to avoid hard work—this is why classes like Melzer’s are compulsory.

“You can imagine that students really want to be there,” he joked.

College is all about opportunity costs. One way of viewing A.I. is as an intervention in how people choose to spend their time. In the early nineteen-sixties, college students spent an estimated twenty-four hours a week on schoolwork.

Today, that figure is about fifteen, a sign, to critics of contemporary higher education, that young people are beneficiaries of grade inflation—in a survey conducted by the Harvard Crimson, nearly eighty per cent of the class of 2024 reported a G.P.A. of 3.7 or higher—and lack the diligence of their forebears.

I don’t know how many hours I spent on schoolwork in the late nineties, when I was in college, but I recall feeling that there was never enough time. I suspect that, even if today’s students spend less time studying, they don’t feel significantly less stressed. It’s the nature of campus life that everyone assimilates into a culture of busyness, and a lot of that anxiety has been shifted to extracurricular or pre-professional pursuits.

A dean at Harvard remarked that students feel compelled to find distinction outside the classroom because they are largely indistinguishable within it.

Eddie, a sociology major at Long Beach State, is older than most of his classmates. He graduated high school in 2010, and worked full time while attending a community college.

“I’ve gone through a lot to be at school,” he told me. “I want to learn as much as I can.” ChatGPT, which his therapist recommended to him, was ubiquitous at Long Beach even before the California State University system, which Long Beach is a part of, announced a partnership with OpenAI, giving its four hundred and sixty thousand students access to ChatGPT Edu.

“I was a little suspicious of how convenient it was,” Eddie said. “It seemed to know a lot, in a way that seemed so human.”

He told me that he used A.I. “as a brainstorm” but never for writing itself. “I limit myself, for sure.” Eddie works for Los Angeles County, and he was talking to me during a break.

He admitted that, when he was pressed for time, he would sometimes use ChatGPT for quizzes. “I don’t know if I’m telling myself a lie,” he said. “I’ve given myself opportunities to do things ethically, but if I’m rushing to work I don’t feel bad about that,” particularly for courses outside his major.

I recognized Eddie’s conflict. I’ve used ChatGPT a handful of times, and on one occasion it accomplished a scheduling task so quickly that I began to understand the intoxication of hyper-efficiency. I’ve felt the need to stop myself from indulging in idle queries. Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn’t like paying someone to write a paper for you—there was no social friction, no aura of illicit activity.

Nor did it feel like sharing notes, or like passing off what you’d read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager. And for students who use it the way Eddie did, as a kind of sounding board, there’s no clear threshold where the work ceases to be an original piece of thinking.

In April, Anthropic, the company behind Claude, released a report drawn from a million anonymized student conversations with its chatbots. It suggested that more than half of user interactions could be classified as “collaborative,” involving a dialogue between student and A.I. (Presumably, the rest of the interactions were more extractive.)

May, a sophomore at Georgetown, was initially resistant to using ChatGPT. “I don’t know if it was an ethics thing,” she said. “I just thought I could do the assignment better, and it wasn’t worth the time being saved.” But she began using it to proofread her essays, and then to generate cover letters, and now she uses it for “pretty much all” her classes.

“I don’t think it’s made me a worse writer,” she said. “It’s perhaps made me a less patient writer. I used to spend hours writing essays, nitpicking over my wording, really thinking about how to phrase things.” College had made her reflect on her experience at an extremely competitive high school, where she had received top grades but retained very little knowledge.

As a result, she was the rare student who found college somewhat relaxed. ChatGPT helped her breeze through busywork and deepen her engagement with the courses she felt passionate about. “I was trying to think, Where’s all this time going?” she said. I had never envied a college student until she told me the answer: “I sleep more now.”

Harry Stecopoulos oversees the University of Iowa’s English department, which has more than eight hundred majors. On the first day of his introductory course, he asks students to write by hand a two-hundred-word analysis of the opening paragraph of Ralph Ellison’s “Invisible Man.” There are always a few grumbles, and students have occasionally walked out.

“I like the exercise as a tone-setter, because it stresses their writing,” he told me.

The return of blue-book exams might disadvantage students who were encouraged to master typing at a young age. Once you’ve grown accustomed to the smooth rhythms of typing, reverting to a pen and paper can feel stifling.

But neuroscientists have found that the “embodied experience” of writing by hand taps into parts of the brain that typing does not. Being able to write one way—even if it’s more efficient—doesn’t make the other way obsolete. There’s something lofty about Stecopoulos’s opening-day exercise. But there’s another reason for it: the handwritten paragraph also begins a paper trail, attesting to voice and style, that a teaching assistant can consult if a suspicious paper is submitted.

Kevin, a third-year student at Syracuse University, recalled that, on the first day of a class, the professor had asked everyone to compose some thoughts by hand. “That brought a smile to my face,” Kevin said. “The other kids are scratching their necks and sweating, and I’m, like, This is kind of nice.”

Kevin had worked as a teaching assistant for a mandatory course that first-year students take to acclimate to campus life. Writing assignments involved basic questions about students’ backgrounds, he told me, but they often used A.I. anyway. “I was very disturbed,” he said. He occasionally uses A.I. to help with translations for his advanced Arabic course, but he’s come to look down on those who rely heavily on it. “They almost forget that they have the ability to think,” he said. Like many former holdouts, Kevin felt that his judicious use of A.I. was more defensible than his peers’ use of it.

As ChatGPT begins to sound more human, will we reconsider what it means to sound like ourselves? Kevin and some of his friends pride themselves on having an ear attuned to A.I.-generated text. The hallmarks, he said, include a preponderance of em dashes and a voice that feels blandly objective. An acquaintance had run an essay that she had written herself through a detector, because she worried that she was starting to phrase things like ChatGPT did.

He read her essay: “I realized, like, It does kind of sound like ChatGPT. It was freaking me out a little bit.”

A particularly disarming aspect of ChatGPT is that, if you point out a mistake, it communicates in the backpedalling tone of a contrite student. (“Apologies for the earlier confusion. . . .”) Its mistakes are often referred to as hallucinations, a description that seems to anthropomorphize A.I., conjuring a vision of a sleep-deprived assistant. Some professors told me that they had students fact-check ChatGPT’s work, as a way of discussing the importance of original research and of showing the machine’s fallibility.

Hallucination rates have grown worse for most A.I.s, with no single reason for the increase. As a researcher told the Times, “We still don’t know how these models work exactly.”

But many students claim to be unbothered by A.I.’s mistakes. They appear nonchalant about the question of achievement, and even dissociated from their work, since it is only notionally theirs.

Joseph, a Division I athlete at a Big Ten school, told me that he saw no issue with using ChatGPT for his classes, but he did make one exception: he wanted to experience his African-literature course “authentically,” because it involved his heritage. Alex, the N.Y.U. student, said that if one of his A.I. papers received a subpar grade his disappointment would be focussed on the fact that he’d spent twenty dollars on his subscription. August, a sophomore at Columbia studying computer science, told me about a class where she was required to compose a short lecture on a topic of her choosing. “It was a class where everyone was guaranteed an A, so I just put it in and I maybe edited like two words and submitted it,” she said.

Her professor identified her essay as exemplary work, and she was asked to read from it to a class of two hundred students. “I was a little nervous,” she said. But then she realized, “If they don’t like it, it wasn’t me who wrote it, you know?”

Kevin, by contrast, desired a more general kind of moral distinction.

I asked if he would be bothered to receive a lower grade on an essay than a classmate who’d used ChatGPT. “Part of me is able to compartmentalize and not be pissed about it,” he said. “I developed myself as a human. I can have a superiority complex about it. I learned more.” He smiled. But then he continued, “Part of me can also be, like, This is so unfair. I would have loved to hang out with my friends more. What did I gain? I made my life harder for all that time.”

In my conversations, just as college students invariably thought of ChatGPT as merely another tool, people older than forty focussed on its effects, drawing a comparison to G.P.S. and the erosion of our relationship to space.

The London cabdrivers rigorously trained in “the knowledge” famously developed abnormally large posterior hippocampi, the part of the brain crucial for long-term memory and spatial awareness. And yet, in the end, most people would probably rather have swifter travel than sharper memories. What is worth preserving, and what do we feel comfortable off-loading in the name of efficiency?

What if we take seriously the idea that A.I. assistance can accelerate learning—that students today are arriving at their destinations faster? In 2023, researchers at Harvard introduced a self-paced A.I. tutor in a popular physics course. Students who used the A.I. tutor reported higher levels of engagement and motivation and did better on a test than those who were learning from a professor.

May, the Georgetown student, told me that she often has ChatGPT produce extra practice questions when she’s studying for a test. Could A.I. be here not to destroy education but to revolutionize it?

Barry Lam teaches in the philosophy department at the University of California, Riverside, and hosts a popular podcast, Hi-Phi Nation, which applies philosophical modes of inquiry to everyday topics.

He began wondering what it would mean for A.I. to actually be a productivity tool. He spoke to me from the podcast studio he built in his shed. “Now students are able to generate in thirty seconds what used to take me a week,” he said. He compared education to carpentry, one of his many hobbies. Could you skip to using power tools without learning how to saw by hand?

If students were learning things faster, then it stood to reason that Lam could assign them “something very hard.” He wanted to test this theory, so for final exams he gave his undergraduates a Ph.D.-level question involving denotative language and the German logician Gottlob Frege which was, frankly, beyond me.

“They fucking failed it miserably,” he said. He adjusted his grading curve accordingly.

Cartoon by Liana Finck

Lam doesn’t find the use of A.I. morally indefensible. “It’s not plagiarism in the cut-and-paste sense,” he argued, because there’s technically no original version. Rather, he finds it a potential waste of everyone’s time.

At the start of the semester, he has told students, “If you’re gonna just turn in a paper that’s ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach.”

Nobody gets into teaching because he loves grading papers. I talked to one professor who rhapsodized about how much more his students were learning now that he’d replaced essays with short exams.

I asked if he missed marking up essays. He laughed and said, “No comment.” An undergraduate at Northeastern University recently accused a professor of using A.I. to create course materials; she filed a formal complaint with the school, requesting a refund for some of her tuition. The dustup laid bare the tension between why many people go to college and why professors teach.

Students are raised to understand achievement as something discrete and measurable, but when they arrive at college there are people like me, imploring them to wrestle with difficulty and abstraction. Worse yet, they are told that grades don’t matter as much as they did when they were trying to get into college—only, by this point, students are wired to find the most efficient path possible to good marks.

As the craft of writing is degraded by A.I., original writing has become a valuable resource for training language models. Earlier this year, a company called Catalyst Research Alliance advertised “academic speech data and student papers” from two research studies run in the late nineties and mid-two-thousands at the University of Michigan.

The school asked the company to halt its work—the data was available for free to academics anyway—and a university spokesperson said that student data “was not and has never been for sale.” But the situation did lead many people to wonder whether institutions would begin viewing original student work as a potential revenue stream.

According to a recent study from the Organisation for Economic Co-operation and Development, human intellect has declined since 2012. An assessment of tens of thousands of adults in nearly thirty countries showed an over-all decade-long drop in test scores for math and for reading comprehension. Andreas Schleicher, the director for education and skills at the O.E.C.D., hypothesized that the way we consume information today—often through short social-media posts—has something to do with the decline in literacy. (One of Europe’s top performers in the assessment was Estonia, which recently announced that it will bring A.I. to some high-school students in the next few years, sidelining written essays and rote homework exercises in favor of self-directed learning and oral exams.)

Lam, the philosophy professor, used to be a colleague of mine, and for a brief time we were also neighbors. I’d occasionally look out the window and see him building a fence, or gardening. He’s an avid amateur cook, guitarist, and carpenter, and he remains convinced that there is value to learning how to do things the annoying, old-fashioned, and—as he puts it—“artisanal” way.

He told me that his wife, Shanna Andrawis, who has been a high-school teacher since 2008, frequently disagreed with his cavalier methods for dealing with large language models. Andrawis argues that dishonesty has always been an issue. “We are trying to mass educate,” she said, meaning there’s less room to be precious about the pedagogical process.

“I don’t have conversations with students about ‘artisanal’ writing. But I have conversations with them about our relationship. Respect me enough to give me your authentic voice, even if you don’t think it’s that great. It’s O.K. I want to meet you where you’re at.”

Ultimately, Andrawis was less fearful of ChatGPT than of the broader conditions of being young these days.

Her students have grown increasingly introverted, staring at their phones with little desire to “practice getting over that awkwardness” that defines teen life, as she put it. A.I. might contribute to this deterioration, but it isn’t solely to blame. It’s “a little cherry on top of an already really bad ice-cream sundae,” she said.

When the school year began, my feelings about ChatGPT were somewhere between disappointment and disdain, focussed mainly on students. But, as the weeks went by, my sense of what should be done and who was at fault grew hazier. Eliminating core requirements, rethinking G.P.A., teaching A.I. skepticism—none of the potential fixes could turn back the preconditions of American youth.

Professors can reconceive of the classroom, but there is only so much we control. I lacked faith that educational institutions would ever regard new technologies as anything but inevitable. Colleges and universities, many of which had tried to curb A.I. use just a few semesters ago, rushed to partner with companies like OpenAI and Anthropic, deeming a product that didn’t exist four years ago essential to the future of school.

Except for a year spent bumming around my home town, I’ve basically been on a campus for the past thirty years. Students these days approach college as consumers, in ways that never would have occurred to me when I was their age. They’ve grown up at a time when society values high-speed takes, not the slow deliberation of critical thinking.

Although I’ve empathized with my students’ various mini-dramas, I rarely project myself into their lives. I notice them noticing one another, and I let the mysteries of their lives go. Their pressures are so different from the ones I felt as a student. Although I envy their metabolisms, I would not wish for their sense of horizons.

Education, particularly in the humanities, rests on a belief that, alongside the practical things students might retain, some arcane idea mentioned in passing might take root in their mind, blossoming years in the future. A.I. allows any of us to feel like an expert, but it is risk, doubt, and failure that make us human.

I often tell my students that this is the last time in their lives that someone will have to read something they write, so they might as well tell me what they actually think.

Despite all the current hysteria around students cheating, they aren’t the ones to blame. They did not lobby for the introduction of laptops when they were in elementary school, and it’s not their fault that they had to go to school on Zoom during the pandemic.

They didn’t create the A.I. tools, nor were they at the forefront of hyping technological innovation. They were just early adopters, trying to outwit the system at a time when doing so has never been so easy. And they have no more control than the rest of us. Perhaps they sense this powerlessness even more acutely than I do.

One moment, they are being told to learn to code; the next, it turns out employers are looking for the kind of “soft skills” one might learn as an English or a philosophy major. In February, a labor report from the Federal Reserve Bank of New York reported that computer-science majors had a higher unemployment rate than ethnic-studies majors did—the result, some believed, of A.I. automating entry-level coding jobs.

None of the students I spoke with seemed lazy or passive. Alex and Eugene, the N.Y.U. students, worked hard—but part of their effort went to editing out anything in their college experiences that felt extraneous. They were radically resourceful.

When classes were over and students were moving into their summer housing, I e-mailed with Alex, who was settling in in the East Village.

He’d just finished his finals, and estimated that he’d spent between thirty minutes and an hour composing two papers for his humanities classes. Without the assistance of Claude, it might have taken him around eight or nine hours. “I didn’t retain anything,” he wrote. “I couldn’t tell you the thesis for either paper hahhahaha.” He received an A-minus and a B-plus. ♦
