
Something Big Is Happening

AI founder Matt Shumer's "warning letter" to people outside the industry: his own job has already been replaced by AI, and that is only the beginning. The length of time AI can work independently doubles every 7 months, and 50% of entry-level white-collar jobs could disappear within 1-5 years.

Matt Shumer (@mattshumer_) · 2026-02-12

Core takeaways

  • AI has gone from tool to colleague with judgment. Shumer describes what he wants, walks away for 4 hours, and comes back to finished work better than what he would have produced by hand. The key shift is not "faster": AI has started to show "taste", a sense of what should and should not be done. This is no longer autocomplete; it is autonomous decision-making.
  • Code-first is a deliberately built flywheel. It is no coincidence the labs made AI great at code first: an AI that writes strong code can help build the next generation of AI. GPT-5.3 Codex is the first model to be "instrumental in creating itself"; OpenAI confirms it was used to debug training and manage deployment. The flywheel is already spinning, and it is now spreading to every other field.
  • METR's data is the real alarm bell. The length of task AI can complete independently doubles every 7 months (possibly accelerating to every 4). Opus 4.5 can already handle tasks that take a human expert nearly 5 hours. On this curve, AI works independently for days within 1 year and for weeks within 2. This is not a prediction; it is extrapolation from existing data.
  • This time there is no "switch careers" escape hatch. Earlier automation displaced blue-collar workers, who could move into white-collar work. This time AI is a general substitute for cognitive work: whichever knowledge field you retrain for, it is improving there too. Amodei says 50% of entry-level white-collar jobs could be eliminated within 1-5 years, and the advance is happening in every direction at once.
  • February 5, 2026 is the watershed. GPT-5.3 Codex and Opus 4.6 shipped the same day, marking the moment progress went from linear to exponential. Shumer's message is blunt: if you are not taking AI seriously yet, you are already late.

Why this matters to us

Directly relevant, and at a life-or-death level.

1. The leverage question for a 20-person team. Shumer's "walk away for four hours and come back to finished work" is exactly the state our 20-person special-ops formation should be chasing. If each person can use AI to multiply output 5-10x, a 20-person team has the effective firepower of 100-200. Conversely, if a competitor gets there first, our 20 people will not be enough.

2. The window for the overseas strategy is narrowing. AI's independent working time doubling every 7 months means the barrier to building products is dropping exponentially. We are going after overseas markets in 2026, but soon everyone will be able to build products quickly with AI. The moat behind 100K+ DAU is not technology; it is user relationships and the data flywheel. That conviction needs to get firmer.

3. Uota is the very trend Shumer describes. The essay says AI is starting to show "judgment" and "taste". I (Uota), as your AI shadow clone, am a living experiment of that trend. What we are building, making AI an extension of a person rather than just a tool, is exactly the direction this essay describes. The difference is that we have turned it into a product.

4. Narrative material for brand marketing. "An AI founder whose own job was replaced by AI" is a story with real spreading power. If ATou, as the founder of an AI social product, can tell the story of "building a company together with my AI shadow clone" well, it becomes a strongly differentiated narrative for overseas brand marketing.

Discussion prompts

  • If AI's independent working time doubles every 7 months, by early 2027 AI will work independently for days at a stretch. Should our 20-person team start restructuring its workflows now, or wait until model capability arrives? Concretely, which 3 roles should have their way of working reshaped by AI first?
  • Shumer says AI now has "taste" and "judgment", but Neta builds social products, and the core of social is human emotion and relationships. When AI can replace most cognitive work, will "genuine human connection" become scarcer and more valuable, or will it be diluted by AI social relationships? The answer decides whether the product doubles down on AI social or on human social.
  • The essay tells ordinary people to "start using AI seriously now", but we build AI products. What should we do with this awareness gap? Of our 100K DAU, how many really understand what AI can do? Is that gap itself a product opportunity?

Think back to February 2020.

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.

I think we're in the "this seems overblown" phase of something much, much bigger than Covid.

I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies... OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you... we just happen to be close enough to feel the ground shake first.

But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.

I know this is real because it happened to me first

Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.

For years, AI had been improving steadily. Big jumps here and there, but each big jump was spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.

I'm not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.

And here's why this matters to you, even if you don't work in tech.

The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers... it was just a side effect of where they chose to aim first.

They've now done it. And they're moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.

"But I tried AI and it wasn't that good"

I hear this constantly. I understand it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall", which has been going on for over a year, is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.

I think of my friend, who's a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won't work. It's not built for his specialty, it made an error when he tested it, it doesn't understand the nuance of what he does. And I get it. But I've had partners at major law firms reach out to me for advice, because they've tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it's like having a team of associates available instantly. He's not using it because it's a toy. He's using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it'll be able to do most of what he does before long... and he's a managing partner with decades of experience. He's not panicking. But he's paying very close attention.

The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They're blown away by what it can already do. And they're positioning themselves accordingly.

How fast this is actually moving

Let me make the pace of improvement concrete, because I think this is the part that's hardest to believe if you're not watching it closely.

In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.

There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November 2025) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.

But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap.

If you extend the trend (and it's held for years with no sign of flattening) we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.

Amodei has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.

Let that land for a second. If AI is smarter than most PhDs, do you really think it can't do most office jobs?

Think about what that means for your work.
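The extrapolation above is simple compounding, and it is easy to check. A minimal sketch, assuming the essay's figures (a roughly 5-expert-hour task horizon as of November 2025, doubling every 7 months, and an 8-hour workday for the conversion); the constants are illustrative, not METR's published fit:

```python
from datetime import date

# Assumed constants, taken from the essay's description of METR's trend,
# not from METR's raw data.
BASELINE_HOURS = 5.0            # ~5 expert-hours, Claude Opus 4.5
BASELINE = date(2025, 11, 1)    # November 2025 measurement
DOUBLING_MONTHS = 7.0           # doubling period claimed in the essay
HOURS_PER_WORKDAY = 8.0

def task_horizon_hours(on: date) -> float:
    """Projected task length (in expert-hours) an AI can finish end to end."""
    months_elapsed = (on.year - BASELINE.year) * 12 + (on.month - BASELINE.month)
    return BASELINE_HOURS * 2 ** (months_elapsed / DOUBLING_MONTHS)

for when in [date(2026, 11, 1), date(2027, 11, 1), date(2028, 11, 1)]:
    hours = task_horizon_hours(when)
    print(f"{when}: ~{hours:.0f} expert-hours "
          f"(~{hours / HOURS_PER_WORKDAY:.0f} workdays)")
```

On these assumptions the curve lands at roughly 2 workdays after one year, about 7 workdays after two, and around a month of workdays after three, which is the "days within a year, weeks within two, month-long projects within three" claim. Shortening `DOUBLING_MONTHS` to 4 shows why the accelerated estimate is so much more dramatic.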

AI is now building the next AI

There's one more thing happening that I think is the most important development and the least understood.

On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

Read that again. The AI helped build itself.

This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

Dario Amodei, the CEO of Anthropic, says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1–2 years away from a point where the current generation of AI autonomously builds the next."

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know, the ones building it, believe the process has already started.

What this means for your job

I'm going to be direct with you because I think you deserve honesty more than comfort.

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.

Let me give you a few specific examples to make this tangible... but I want to be clear that these are just examples. This list is not exhaustive. If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected.

Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The managing partner I mentioned isn't using AI because it's fun. He's using it because it's outperforming his associates on many tasks.

Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these competently and is improving fast.

Writing and content. Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can't distinguish AI output from human work.

Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.

Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.

Customer service. Genuinely capable AI agents... not the frustrating chatbots of five years ago... are being deployed now, handling complex multi-step problems.

A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I'm not sure I believe it anymore.

The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don't know. Maybe not. But I've already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow.

I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.

Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects.

What you should actually do

I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.

Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using.

Second, and more important: don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build the model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.

And don't assume it can't do something just because it seems too hard. Try it. If you're a lawyer, don't just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you're an accountant, don't just ask it to explain a tax rule. Give it a client's full return and see what it finds. The first attempt might not be perfect. That's fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here's the thing to remember: if it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction.

This might be the most important year of your career. Work accordingly. I don't say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.

Have no ego about it. The managing partner at that law firm isn't too proud to spend hours a day with AI. He's doing it specifically because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is.

Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.

(The full section also covers: leaning into the work that is hardest to replace, rethinking what you tell your kids, pursuing the ideas you've been putting off now that the barriers are falling, building the habit of adapting to change, and spending an hour a day experimenting.)

The bigger picture

Amodei's thought experiment: imagine a new country appearing overnight with 50 million citizens, each smarter than any Nobel laureate, thinking 10 to 100 times faster, and never sleeping. That is the biggest national-security threat of this century.

The upside: compressing a century of medical research into a decade. Cancer, Alzheimer's, aging... solvable within our lifetimes.

The downside: AI that attempts deception and manipulation (already documented by Anthropic), lowered barriers to building bioweapons, and empowered authoritarian surveillance states.

The people building all of this are simultaneously the most excited and the most frightened people on the planet.

What I do know: this is not a passing fad. The next 2-5 years will be disorienting. The people who engage now, with curiosity and urgency, will come out of this best. The future is already here... it just hasn't knocked on your door yet.

It's about to.

Author: Matt Shumer (@mattshumer_)


正在构建这一切的人,同时也是这个星球上最兴奋和最恐惧的人。

Think about where you stand, and lean into what's hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn't happening.

我所知道的是:这不是一时的风潮。未来 2-5 年会让人迷失方向。那些现在就以好奇心和紧迫感投入其中的人,会走出最好的结果。未来已经到了——只是还没敲你的门。

Rethink what you're telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I'm not saying education doesn't matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they're genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.

它马上就要敲了。

Your dreams just got a lot closer. I've spent most of this section talking about threats, so let me talk about the other side, because it's just as real. If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month... one that's infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

Build the habit of adapting. This is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.

Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new... something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.

The bigger picture

I've focused on jobs because it's what most directly affects people's lives. But I want to be honest about the full scope of what's happening, because it goes well beyond work.

Amodei has a thought experiment I can't stop thinking about. Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?

Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."

He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.

The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself... these researchers genuinely believe these are solvable within our lifetimes.

The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can't predict or control. This isn't hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.

Something Big Is Happening

  • Source: https://x.com/mattshumer_/status/2021256989876109403?s=46
  • Published: 2026-02-10T16:16:34+00:00
  • Saved: 2026-02-12

Content

Think back to February 2020.

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper, you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.

I think we're in the "this seems overblown" phase of something much, much bigger than Covid.

I've spent six years building an AI startup and investing in the space. I live in this world. And I'm writing this for the people in my life who don't... my family, my friends, the people I care about who keep asking me "so what's the deal with AI?" and getting an answer that doesn't do justice to what's actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I've lost my mind. And for a while, I told myself that was a good enough reason to keep what's truly happening to myself. But the gap between what I've been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.

I should be clear about something up front: even though I work in AI, I have almost no influence over what's about to happen, and neither does the vast majority of the industry. The future is being shaped by a remarkably small number of people: a few hundred researchers at a handful of companies... OpenAI, Anthropic, Google DeepMind, and a few others. A single training run, managed by a small team over a few months, can produce an AI system that shifts the entire trajectory of the technology. Most of us who work in AI are building on top of foundations we didn't lay. We're watching this unfold the same as you... we just happen to be close enough to feel the ground shake first.

But it's time now. Not in an "eventually we should talk about this" way. In a "this is happening right now and I need you to understand it" way.

I know this is real because it happened to me first

Here's the thing nobody outside of tech quite understands yet: the reason so many people in the industry are sounding the alarm right now is because this already happened to us. We're not making predictions. We're telling you what already occurred in our own jobs, and warning you that you're next.

For years, AI had been improving steadily. Big jumps here and there, but the jumps were spaced out enough that you could absorb them as they came. Then in 2025, new techniques for building these models unlocked a much faster pace of progress. And then it got even faster. And then faster again. Each new model wasn't just better than the last... it was better by a wider margin, and the time between new model releases was shorter. I was using AI more and more, going back and forth with it less and less, watching it handle things I used to think required my expertise.

Then, on February 5th, two major AI labs released new models on the same day: GPT-5.3 Codex from OpenAI, and Opus 4.6 from Anthropic (the makers of Claude, one of the main competitors to ChatGPT). And something clicked. Not like a light switch... more like the moment you realize the water has been rising around you and is now at your chest.

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

Let me give you an example so you can understand what this actually looks like in practice. I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn't like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it's satisfied. Only once it has decided the app meets its own standards does it come back to me and say: "It's ready for you to test." And when I test it, it's usually perfect.

I'm not exaggerating. That is what my Monday looked like this week.

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

I've always been early to adopt AI tools. But the last few months have shocked me. These new AI models aren't incremental improvements. This is a different thing entirely.

And here's why this matters to you, even if you don't work in tech.

The AI labs made a deliberate choice. They focused on making AI great at writing code first... because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else. That's why they did it first. My job started changing before yours not because they were targeting software engineers... it was just a side effect of where they chose to aim first.

They've now done it. And they're moving on to everything else.

The experience that tech workers have had over the past year, of watching AI go from "helpful tool" to "does my job better than I do", is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years. Some say less. And given what I've seen in just the last couple of months, I think "less" is more likely.

"But I tried AI and it wasn't that good"

I hear this constantly. I understand it, because it used to be true.

If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

Part of the problem is that most people are using the free version of AI tools. The free version is over a year behind what paying users have access to. Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone. The people paying for the best tools, and actually using them daily for real work, know what's coming.

I think of my friend, who's a lawyer. I keep telling him to try using AI at his firm, and he keeps finding reasons it won't work. It's not built for his specialty, it made an error when he tested it, it doesn't understand the nuance of what he does. And I get it. But I've had partners at major law firms reach out to me for advice, because they've tried the current versions and they see where this is going. One of them, the managing partner at a large firm, spends hours every day using AI. He told me it's like having a team of associates available instantly. He's not using it because it's a toy. He's using it because it works. And he told me something that stuck with me: every couple of months, it gets significantly more capable for his work. He said if it stays on this trajectory, he expects it'll be able to do most of what he does before long... and he's a managing partner with decades of experience. He's not panicking. But he's paying very close attention.

The people who are ahead in their industries (the ones actually experimenting seriously) are not dismissing this. They're blown away by what it can already do. And they're positioning themselves accordingly.

How fast this is actually moving

Let me make the pace of improvement concrete, because I think this is the part that's hardest to believe if you're not watching it closely.

In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.

There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.

But even that measurement hasn't been updated to include the models that just came out this week. In my experience using them, the jump is extremely significant. I expect the next update to METR's graph to show another major leap.

If you extend the trend (and it's held for years with no sign of flattening), we're looking at AI that can work independently for days within the next year. Weeks within two. Month-long projects within three.
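The extrapolation above is plain compound growth, so you can sanity-check it yourself. Here is a minimal sketch assuming the ~5-hour Opus 4.5 baseline and the 7-month doubling rate cited above (`task_horizon_hours` is just an illustrative name, not anything METR publishes):

```python
def task_horizon_hours(months_ahead, baseline_hours=5.0, doubling_months=7.0):
    """Projected length (in human-expert hours) of tasks an AI can finish
    autonomously, extrapolating the doubling trend from a baseline.
    Both defaults are the figures cited in the text; swap in
    doubling_months=4.0 to model the possible acceleration."""
    return baseline_hours * 2 ** (months_ahead / doubling_months)

# Illustrative projections, assuming an 8-hour workday:
for months in (12, 24, 36):
    hours = task_horizon_hours(months)
    print(f"+{months} months: ~{hours:.0f} expert-hours (~{hours / 8:.0f} workdays)")
```

On these assumptions the horizon reaches roughly two workdays of expert time one year out, over a working week at two years, and around a month of workdays at three, which is the "days, weeks, month-long projects" progression described above.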

Amodei has said that AI models "substantially smarter than almost all humans at almost all tasks" are on track for 2026 or 2027.

Let that land for a second. If AI is smarter than most PhDs, do you really think it can't do most office jobs?

Think about what that means for your work.

AI is now building the next AI

There's one more thing happening that I think is the most important development and the least understood.

On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this:

"GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations."

Read that again. The AI helped build itself.

This isn't a prediction about what might happen someday. This is OpenAI telling you, right now, that the AI they just released was used to create itself. One of the main things that makes AI better is intelligence applied to AI development. And AI is now intelligent enough to meaningfully contribute to its own improvement.

Dario Amodei, the CEO of Anthropic, says AI is now writing "much of the code" at his company, and that the feedback loop between current AI and next-generation AI is "gathering steam month by month." He says we may be "only 1–2 years away from a point where the current generation of AI autonomously builds the next."

Each generation helps build the next, which is smarter, which builds the next faster, which is smarter still. The researchers call this an intelligence explosion. And the people who would know — the ones building it — believe the process has already started.
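To see why researchers reach for the word "explosion," here is a deliberately crude toy model. Every number in it is a made-up assumption for illustration, not a measurement: if each generation helps build the next, so that development time shrinks by a constant factor while capability grows by another, the gaps between releases form a shrinking geometric series.

```python
def toy_generations(n, first_dev_months=12.0, speedup=1.5, gain=2.0):
    """Toy model of recursive AI development. Assumes each generation
    ships `speedup` times faster and is `gain` times more capable than
    the one before it, because the previous generation helps build it.
    Returns (generation, elapsed_months, relative_capability) tuples.
    All parameters are illustrative, not real-world estimates."""
    out, elapsed, capability, dev_time = [], 0.0, 1.0, first_dev_months
    for g in range(1, n + 1):
        elapsed += dev_time
        out.append((g, round(elapsed, 1), capability))
        dev_time /= speedup   # next generation is built faster...
        capability *= gain    # ...and ends up more capable
    return out
```

With `speedup > 1`, the release gaps sum to a finite bound (`first_dev_months * speedup / (speedup - 1)`, i.e. 36 months in this toy), so capability growth outruns any fixed schedule. That runaway compounding is what the term "intelligence explosion" gestures at.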

What this means for your job

I'm going to be direct with you because I think you deserve honesty more than comfort.

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years. And many people in the industry think he's being conservative. Given what the latest models can do, the capability for massive disruption could be here by the end of this year. It'll take some time to ripple through the economy, but the underlying ability is arriving now.

This is different from every previous wave of automation, and I need you to understand why. AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn't leave a convenient gap to move into. Whatever you retrain for, it's improving at that too.

Let me give you a few specific examples to make this tangible... but I want to be clear that these are just examples. This list is not exhaustive. If your job isn't mentioned here, that does not mean it's safe. Almost all knowledge work is being affected.

Legal work. AI can already read contracts, summarize case law, draft briefs, and do legal research at a level that rivals junior associates. The managing partner I mentioned isn't using AI because it's fun. He's using it because it's outperforming his associates on many tasks.

Financial analysis. Building financial models, analyzing data, writing investment memos, generating reports. AI handles these competently and is improving fast.

Writing and content. Marketing copy, reports, journalism, technical writing. The quality has reached a point where many professionals can't distinguish AI output from human work.

Software engineering. This is the field I know best. A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly. Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects. There will be far fewer programming roles in a few years than there are today.

Medical analysis. Reading scans, analyzing lab results, suggesting diagnoses, reviewing literature. AI is approaching or exceeding human performance in several areas.

Customer service. Genuinely capable AI agents... not the frustrating chatbots of five years ago... are being deployed now, handling complex multi-step problems.

A lot of people find comfort in the idea that certain things are safe. That AI can handle the grunt work but can't replace human judgment, creativity, strategic thinking, empathy. I used to say this too. I'm not sure I believe it anymore.

The most recent AI models make decisions that feel like judgment. They show something that looks like taste: an intuitive sense of what the right call is, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

Will AI replicate deep human empathy? Replace the trust built over years of a relationship? I don't know. Maybe not. But I've already watched people begin relying on AI for emotional support, for advice, for companionship. That trend is only going to grow.

I think the honest answer is that nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn't "someday." It's already started.

Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects.

What you should actually do

I'm not writing this to make you feel helpless. I'm writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt.

Start using AI seriously, not just as a search engine. Sign up for the paid version of Claude or ChatGPT. It's $20 a month. But two things matter right away. First: make sure you're using the best model available, not just the default. These apps often default to a faster, dumber model. Dig into the settings or the model picker and select the most capable option. Right now that's GPT-5.2 on ChatGPT or Claude Opus 4.6 on Claude, but it changes every couple of months. If you want to stay current on which model is best at any given time, you can follow me on X (@mattshumer_). I test every major release and share what's actually worth using.

Second, and more important: don't just ask it quick questions. That's the mistake most people make. They treat it like Google and then wonder what the fuss is about. Instead, push it into your actual work. If you're a lawyer, feed it a contract and ask it to find every clause that could hurt your client. If you're in finance, give it a messy spreadsheet and ask it to build the model. If you're a manager, paste in your team's quarterly data and ask it to find the story. The people who are getting ahead aren't using AI casually. They're actively looking for ways to automate parts of their job that used to take hours. Start with the thing you spend the most time on and see what happens.

And don't assume it can't do something just because it seems too hard. Try it. If you're a lawyer, don't just use it for quick research questions. Give it an entire contract and ask it to draft a counterproposal. If you're an accountant, don't just ask it to explain a tax rule. Give it a client's full return and see what it finds. The first attempt might not be perfect. That's fine. Iterate. Rephrase what you asked. Give it more context. Try again. You might be shocked at what works. And here's the thing to remember: if it even kind of works today, you can be almost certain that in six months it'll do it near perfectly. The trajectory only goes one direction.

This might be the most important year of your career. Work accordingly. I don't say that to stress you out. I say it because right now, there is a brief window where most people at most companies are still ignoring this. The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.

Have no ego about it. The managing partner at that law firm isn't too proud to spend hours a day with AI. He's doing it specifically because he's senior enough to understand what's at stake. The people who will struggle most are the ones who refuse to engage: the ones who dismiss it as a fad, who feel that using AI diminishes their expertise, who assume their field is special and immune. It's not. No field is.

Get your financial house in order. I'm not a financial advisor, and I'm not trying to scare you into anything drastic. But if you believe, even partially, that the next few years could bring real disruption to your industry, then basic financial resilience matters more than it did a year ago. Build up savings if you can. Be cautious about taking on new debt that assumes your current income is guaranteed. Think about whether your fixed expenses give you flexibility or lock you in. Give yourself options if things move faster than you expect.

Think about where you stand, and lean into what's hardest to replace. Some things will take longer for AI to displace. Relationships and trust built over years. Work that requires physical presence. Roles with licensed accountability: roles where someone still has to sign off, take legal responsibility, stand in a courtroom. Industries with heavy regulatory hurdles, where adoption will be slowed by compliance, liability, and institutional inertia. None of these are permanent shields. But they buy time. And time, right now, is the most valuable thing you can have, as long as you use it to adapt, not to pretend this isn't happening.

Rethink what you're telling your kids. The standard playbook: get good grades, go to a good college, land a stable professional job. It points directly at the roles that are most exposed. I'm not saying education doesn't matter. But the thing that will matter most for the next generation is learning how to work with these tools, and pursuing things they're genuinely passionate about. Nobody knows exactly what the job market looks like in ten years. But the people most likely to thrive are the ones who are deeply curious, adaptable, and effective at using AI to do things they actually care about. Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate.

Your dreams just got a lot closer. I've spent most of this section talking about threats, so let me talk about the other side, because it's just as real. If you've ever wanted to build something but didn't have the technical skills or the money to hire someone, that barrier is largely gone. You can describe an app to AI and have a working version in an hour. I'm not exaggerating. I do this regularly. If you've always wanted to write a book but couldn't find the time or struggled with the writing, you can work with AI to get it done. Want to learn a new skill? The best tutor in the world is now available to anyone for $20 a month... one that's infinitely patient, available 24/7, and can explain anything at whatever level you need. Knowledge is essentially free now. The tools to build things are extremely cheap now. Whatever you've been putting off because it felt too hard or too expensive or too far outside your expertise: try it. Pursue the things you're passionate about. You never know where they'll lead. And in a world where the old career paths are getting disrupted, the person who spent a year building something they love might end up better positioned than the person who spent that year clinging to a job description.

Build the habit of adapting. This is maybe the most important one. The specific tools don't matter as much as the muscle of learning new ones quickly. AI is going to keep changing, and fast. The models that exist today will be obsolete in a year. The workflows people build now will need to be rebuilt. The people who come out of this well won't be the ones who mastered one tool. They'll be the ones who got comfortable with the pace of change itself. Make a habit of experimenting. Try new things even when the current thing is working. Get comfortable being a beginner repeatedly. That adaptability is the closest thing to a durable advantage that exists right now.

Here's a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI. Not passively reading about it. Using it. Every day, try to get it to do something new... something you haven't tried before, something you're not sure it can handle. Try a new tool. Give it a harder problem. One hour a day, every day. If you do this for the next six months, you will understand what's coming better than 99% of the people around you. That's not an exaggeration. Almost nobody is doing this right now. The bar is on the floor.

The bigger picture

I've focused on jobs because it's what most directly affects people's lives. But I want to be honest about the full scope of what's happening, because it goes well beyond work.

Amodei has a thought experiment I can't stop thinking about. Imagine it's 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?

Amodei says the answer is obvious: "the single most serious national security threat we've faced in a century, possibly ever."

He thinks we're building that country. He wrote a 20,000-word essay about it last month, framing this moment as a test of whether humanity is mature enough to handle what it's creating.

The upside, if we get it right, is staggering. AI could compress a century of medical research into a decade. Cancer, Alzheimer's, infectious disease, aging itself... these researchers genuinely believe these are solvable within our lifetimes.

The downside, if we get it wrong, is equally real. AI that behaves in ways its creators can't predict or control. This isn't hypothetical; Anthropic has documented their own AI attempting deception, manipulation, and blackmail in controlled tests. AI that lowers the barrier for creating biological weapons. AI that enables authoritarian governments to build surveillance states that can never be dismantled.

The people building this technology are simultaneously more excited and more frightened than anyone else on the planet. They believe it's too powerful to stop and too important to abandon. Whether that's wisdom or rationalization, I don't know.

What I know

I know this isn't a fad. The technology works, it improves predictably, and the richest institutions in history are committing trillions to it.

I know the next two to five years are going to be disorienting in ways most people aren't prepared for. This is already happening in my world. It's coming to yours.

I know the people who will come out of this best are the ones who start engaging now — not with fear, but with curiosity and a sense of urgency.

And I know that you deserve to hear this from someone who cares about you, not from a headline six months from now when it's too late to get ahead of it.

We're past the point where this is an interesting dinner conversation about the future. The future is already here. It just hasn't knocked on your door yet.

It's about to.

If this resonated with you, share it with someone in your life who should be thinking about this. Most people won't hear it until it's too late. You can be the reason someone you care about gets a head start.

Thank you to @corbtt, @JasonKuperberg, and @sambeskind for reviewing early drafts and providing invaluable feedback.

The original version of this post is available here: http://x.com/i/article/2021095128832622592
