
Don't Wait for AI to Be Perfect: By the Time It's "Good Enough," You're Already Out

The most valuable thing in this piece is not "who AI will replace" but how plainly it states the decision threshold: what is truly fatal is not misreading the trend, it is insisting on waiting for certainty amid high uncertainty.

2026-03-10

Key Takeaways

  • "Good enough is enough to replace you" is a harsh call, and probably a correct one. The author's sharpest point is not whether AI will ever be perfect, but when organizations stop doing the work themselves. Historically, replacement has never waited for a zero-defect technology; "faster, cheaper, better overall" has always been sufficient. The logic holds for coding, content production, and operational execution alike. The original states the threshold plainly: not zero bugs, but the moment agents produce fewer bugs than the average human at far greater speed, people stop writing. I agree, and it matters a great deal for Neta/ATou in practice.
  • "Action creates information" is the most solid decision framework in the piece, more important than the AI predictions themselves. The author's timeline may be aggressive, but the method (identify the preconditions, then read the signals, then place asymmetric bets) is sound. In high-variance arenas like AI, overseas expansion, and brand building, conference-room war-gaming usually just drags you down. Real information updates come from experiments, launches, hiring, distribution, and contact with users. This is not a platitude; it is a basic survival rule in complex systems.
  • Moats are not safety cushions, only countdown clocks: proprietary data, regulation, authority, and physical friction merely buy time. This is the most commercially astute part of the piece, especially "authority as a service." In law, finance, auditing, and brand endorsement, clients are not buying the answer; they are buying accountability and a transfer of trust. That insight matters for AI products: model capability will mean-revert quickly, but "who dares vouch for the outcome" and "whom users trust" will not be flattened at the same pace. The original is not shouting trends here; it points at real institutional friction.
  • But extrapolating from "can write code" to "recursive self-improvement" is far too big a leap. This is the article's biggest problem. Reliably producing useful code under structured prompts is not the same as autonomously finding bottlenecks, designing experiments, validating improvements, and closing the research loop. The crucial bridges in between are missing: compute, evaluation, data, training feedback, accountability mechanisms. The author pushes "developer productivity gains" straight to "the eve of an intelligence explosion"; the extrapolation is thought-provoking but badly under-evidenced.
  • At bottom, the article is also a founder's betting manifesto, and the insight should be read together with the self-justification. The author is not a neutral observer but someone who has already quit to found a company. So "the window is closing," "you cannot wait," and "no room for second place" may be sharp judgment, or narrative leverage on his own bet. My read: his sense of direction is stronger than his argumentation, and his methodology is more credible than his timetable. Borrow the urgency; do not buy it wholesale.

What This Means for Us

🧠Neta

  • What it means: Neta's core asset should not be defined as "a feature" or "model quality," but as a proprietary interaction-data flywheel. 100k+ DAU is not a vanity number; what is actually valuable is the interaction logs, retention inflection points, emotional feedback, relationship evolution, and the contexts that trigger conversion.

What to do next: Immediately inventory which user-behavior data is "continuously produced as exhaust, unobtainable externally, and able to feed back into models and distribution"; prioritize building structured annotation, backflow of data into training, and effectiveness-evaluation mechanisms.

  • What it means: Overseas growth cannot run on paid acquisition and content volume alone; an "authority layer" must be built in parallel. In the AI era, content gets cheaper and trust gets scarcer.

What to do next: In overseas brand marketing, make "who is vouching for us" a first-order strategy: KOLs, community leaders, media, and vertical partners, treated not as an add-on to paid traffic but as the core variable of conversion efficiency.

👤ATou

  • What it means: You should upgrade yourself from "high-output executor" to "commander of agents over long time horizons." The most credible part of the original on residual human value is not "being likeable"; it is the ability to plan across weeks and months, orchestrate across modules, and sign off on responsibility.

What to do next: Shift your learning focus from "how to tune a single prompt" to "how to run an agent system over multiple weeks": goal decomposition, state synchronization, memory design, acceptance criteria, and human sign-off points.

🪞Uota

  • What it means: The value of agent workflows lies not in single-turn generation but in being accountable, reviewable, and continuously iterable. The article's biggest blind spot, read in reverse, points at the design direction: what actually blocks AI from taking over production is not the capability ceiling but responsibility and closed loops.

What to do next: In the Uota/agent system, prioritize three things: process records, decision rationale, and error attribution. Whoever first solves "controllable agents," rather than "agents that can talk and write," gets closer to real productivity.
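
Those three priorities can be made concrete with a minimal sketch of an auditable agent step record (a hypothetical schema for illustration only, not Uota's actual design): every action carries its decision rationale and enough context to attribute errors after the fact.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentStep:
    """One auditable unit of agent work: what was done, why, and on what."""
    action: str              # what the agent did
    rationale: str           # why it decided to (decision basis)
    inputs: dict             # the context it acted on
    outcome: str = "pending" # filled in at review time
    ts: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Process record: append one entry per step instead of keeping only the final output.
log: list[AgentStep] = []
log.append(AgentStep("draft_reply", "user asked about refund policy", {"ticket": 123}))

# Error attribution: a reviewer ties the failure to this step's inputs and rationale.
log[0].outcome = "rejected: cited an outdated policy document"
```

If every step is logged this way, "error attribution" reduces to reading back the rationale and inputs of the failing step instead of replaying the whole agent run.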

Discussion Starters

  • For Neta, which moat actually deserves an all-in: "proprietary data" or "brand authority"?

The former is closer to fuel for the model era; the latter is closer to credit for the distribution era. With limited resources, which one you push to the extreme first determines the playbook for the 2026 overseas campaign.

  • If the bar for AI replacement is not perfection but "better overall," which jobs on the team should be forcibly agent-ified today, and which deserve doubled-down human investment instead?

This is not an efficiency-optimization question; it is an organizational life-or-death question. Keeping humans on low-defensibility tasks is, in essence, wasting the future window.

  • Are we using AI to close the gap with the giants, or helping the giants swallow small and mid-sized teams faster?

The original says large institutions will grow stronger on data and regulation, yet urges founders to bet immediately; there is real tension here. Should Neta bet on the speed advantage of "special operations," or concede in advance that platform-level consolidation will happen and adjust strategy accordingly?

Introduction

引言

The first time I realized we were heading towards an inflection point was when I heard the music slowing down at my previous role, even as everyone around me pretended nothing would change.

我第一次意识到我们正走向一个拐点,是在上一份工作里听见音乐的节奏开始放慢——即便身边所有人都装作一切不会改变。

I was managing a team of close to 20 pax in a hedge fund, doing the thing I had been doing for years. For all intents and purposes, I was likely going to do even greater things there. And yet, I moved from a position people would kill for to building a startup from ground zero with a skeleton crew - a move so little understood and widely seen as crazy. With the recent news of massive layoffs, people quitting explicitly to build startups, or quietly quitting and burning tokens at night doing the same, my actions seem a lot less insane now.

我在一家对冲基金管理着一支将近 20 人的团队,做的还是我做了很多年的那套事。按理说,我很可能会在那里做得更大。可我却从一个人人羡慕到眼红的职位,跳到只剩一支骨干小队、从零开始搭建创业公司——这一步几乎没人理解,也普遍被视作疯狂。最近接连出现大规模裁员的新闻:有人明确辞职去创业,有人则默默“躺平”,夜里烧着自己的 tokens 做同样的事。于是我当年的选择看起来没那么疯了。

I've had a few people ask me where I think this all goes. This article is the answer to that. The honest truth is that I'm not really sure about the magnitude of these changes, but if quant finance has taught me anything, it's that being directionally correct is often enough.

有几个人问我,我觉得这一切会走向哪里。这篇文章就是我的回答。坦白说,我并不确定这些变化最终会有多大;但量化金融教会我一件事:方向对了,往往就够了。

Writing On The Wall

征兆已现

It was ChatGPT o1 that did it for me. Up until that point, I had referred to them only as "LLMs" and not "AI"; I was not yet convinced that any semblance of real intelligence would emerge from them.

让我真正被说服的,是 ChatGPT o1。在那之前,我一直只把它们称作“LLMs”,而不是“AI”;我还不相信它们会呈现出任何像样的真实智能。

But with o1, it was the first time these LLMs could credibly produce code from well-structured prompts. It was still messy. They still suffered from the occasional bout of hallucination and confusion. But here was what mattered: they could actually produce useful code.

但从 o1 开始,这是我第一次觉得,这些 LLMs 能够在足够结构化的提示词下,可信地生成代码。它仍然很粗糙;仍会偶尔出现幻觉和混乱。可关键在于:它们真的能产出有用的代码。

The line of reasoning I took was this: once AI could get to a point where they could reproduce useful code, they would recursively write improvements to their own logic and accelerate development at a scale we would not be able to comprehend. Whenever I shared this, people would counter-argue that the code agents wrote was still buggy and not "production-ready." This misses the point that even humans write buggy code.

我当时的推理是这样的:一旦 AI 发展到能够复现有用代码的程度,它们就会递归地改进自身的逻辑,并以我们难以想象的尺度加速开发。每当我分享这一点,总有人反驳说,agent 写出的代码仍然有 bug,且不够“可直接上生产”。但这忽略了一个事实:人类写的代码同样充满 bug。

We don't need flawless code to completely stop writing code. We stop writing code the instant we realize that agents produce fewer bugs than us, at a pace that far exceeds us. The bar for fully relegating the burden of coding to agents was so low that once I saw o1 up close, I knew the future was going to change dramatically.

我们并不需要完美无瑕的代码,才会彻底停止自己写代码。只要我们意识到:agent 以远超我们的速度产出代码,而且 bug 比我们更少——就在那一刻,我们就会停笔。把编码的负担完全交给 agent 的门槛其实低得惊人,所以当我近距离看过 o1 之后,我就知道未来将发生剧烈变化。

Quant Finance And The Moat Of Knowledge

量化金融与知识护城河

I thought AI would eventually eat away a vast majority of quant finance, although it was going to take a while, since there was very little publicly available institutional code for LLMs to train on. I imagined software engineering as a pyramid: at the base was basic code monkey work, above that was your senior developer with some architectural thinking, and above that were specialized developers: data scientists, quant developers, and so on. The more your profession required specialized knowledge, the safer you would be.

我曾经认为,AI 终将侵蚀掉量化金融中的绝大部分工作,只是会需要一些时间,因为公开可得的机构级代码太少,LLMs 很难用它们充分训练。我把软件工程想象成一座金字塔:底部是基础的码农搬砖,上面是具备一些架构思考的资深开发,再往上则是更专门化的开发者:数据科学家、量化开发等等。职业越依赖专业知识,看起来就越安全。

I thought we would wipe out the entire tranche of code monkeys within 2 years. Then senior developers would start to go. And layer by layer, specialized knowledge would also be incorporated into the LLMs and they too would be wiped out.

我以为,两年之内底层那一整层“写代码的苦力”会被一扫而空;随后资深开发也会开始被替代。再一层一层地,专门知识也会被纳入 LLMs,它们同样会被逐步抹去。

It quickly became obvious that the frontier model providers would eventually hire specialized knowledge workers to contribute industry know-how to the frontier models. Specialized knowledge seemed like it would be a moat for the next couple of years, but also end up being eaten away gradually.

很快我就意识到,前沿模型的提供商最终会雇佣那些掌握专业知识的人,把行业 know-how 注入前沿模型之中。专业知识看似能在接下来的几年里构成护城河,但也终究会被一点点蚕食。

The Remaining Moats

剩余的护城河

There were a few categories of businesses that I thought would be safe from being trivially disrupted within the next 5 years.

我当时认为,有几类业务在未来 5 年内不太可能被轻易颠覆。

The first is proprietary data. Businesses that produced a lot of proprietary data as exhaust would be hard to disrupt. Large podshops like Millennium come to mind, they can collect analyst readings, detailed analysis, recommendations, and actual price changes, and use this data to fine-tune frontier models into something that was not going to be easily replicated. Any business producing proprietary data not trivially obtained by the frontier models would have a longer lease on life.

第一是 专有数据。那些把大量专有数据作为“副产物”源源不断产出的企业,很难被颠覆。比如 Millennium 这种大型 pod shop,它们可以收集分析师的阅读记录、细致的分析、推荐意见以及真实的价格变动,并用这些数据对前沿模型进行微调,得到一个不容易被复制的东西。任何能产出前沿模型无法轻易获取的专有数据的业务,都能多活一段时间。

The second is regulatory friction. Businesses where other humans are a bottleneck seemed much harder to disrupt. Being able to trade in many TradFi markets meant opening broker accounts, getting licenses, signing contracts around the globe. It's easy to trade crypto, but much harder to trade iron ore in China as a non-Chinese firm. If you need a human to rubber-stamp your progress, the speed of that industry is always going to be bottlenecked by the cost and speed of that approval.

第二是 监管摩擦。在那些“其他人类”本身就是瓶颈的行业里,颠覆要难得多。想在许多传统金融(TradFi)市场里交易,你得开券商账户、拿到牌照、在全球各地签合同。交易加密资产很容易,但作为一家非中国公司,要在中国交易铁矿石就难得多。如果你的进展需要某个真人来盖章确认,那么这个行业的速度就必然受制于审核的成本和节奏。

The third is authority as a service. It's not too hard now to get an agent to draft a legal opinion given a comprehensive study of the matter and the laws surrounding it. And yet we're still going to pay tens of thousands of dollars for one drafted by a lawyer, because an AI's legal opinion is worth nothing at this point in time. Smart contract audits are another example. We're probably already at a level where agents can review smart contracts as well as or better than the top decile of humans, yet most people still buy the stamp of authority from a branded firm. The opinion isn't what you're paying for. The authority behind it is.

第三是 权威即服务。如今,只要把事情与相关法律做一份完整研究,找个 agent 起草法律意见并不难。可我们仍愿意为律师写的意见书支付数万美元,因为在此刻,AI 的法律意见几乎一文不值。智能合约审计也是例子。我们很可能已经到了这样一个水平:agent 审合约不比人类前 10% 差,甚至更好;但大多数人依然会去购买一家有品牌的机构所提供的“权威盖章”。你付钱买的不是意见本身,而是意见背后的权威。

The fourth is physical intelligence lag. Hardware moves much more slowly than software, and breaking hardware is a lot harder to fix. Physical businesses interacting with the real world are a lot less likely to be disrupted soon. That said, once hardware catches up, the same pyramid logic applies: lower-level jobs go first, then the more specialized ones.

第四是 物理智能的滞后。硬件的演进速度远慢于软件,而硬件一旦“弄坏”,修复成本也高得多。那些与真实世界打交道的实体业务,短期内被颠覆的概率要低很多。当然,一旦硬件追上来,同样的金字塔逻辑仍会生效:先消失的是低层工作,然后才轮到更专业的岗位。

These moats are real, but none of them are permanent. The honest read is that they buy time, not safety.

这些护城河是真实存在的,但没有一条是永久的。更诚实的解读是:它们买来的是时间,而不是安全。

Reasoning About A Messy Future

如何推演一个混乱的未来

When the future is genuinely noisy, when the rate of change is fast enough that most analogies break down, people tend to do one of two things. They either wait for certainty before acting, or they pattern-match to the past ("this is like the internet boom") and act on the wrong model. Both are mistakes.

当未来真正充满噪声、变化快到大多数类比都失效时,人们往往会做两件事之一:要么等到确定无疑才行动;要么把当下硬套进过去的模板(“这就像互联网泡沫”),并基于错误的模型行动。两者都是错误。

It is worth reasoning from first principles under incomplete information. You don't need to know exactly how something plays out. You just need to be directionally correct, and you need to structure your bets so that being early and wrong is survivable, while being early and right is disproportionately rewarding.

在信息不完整的情况下,从第一性原理出发推理是值得的。你不必知道事情会如何精确展开。你只需要在方向上正确,并把赌注结构化:早且错要能活下来,早且对要能获得不成比例的回报。

Asymmetry is the whole game when the future is uncertain.

当未来不确定时,非对称性就是全部的游戏规则。
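
The bet-structuring idea above can be illustrated with a toy calculation (all numbers invented for this sketch, not from the article): under repeated bets, long-run growth is governed by the geometric mean of outcomes, which is why "early and wrong" must be survivable even when the single-bet expected value looks attractive.

```python
def geometric_growth(p_right: float, upside: float, downside: float) -> float:
    """Per-bet long-run growth factor under repeated betting: the
    geometric mean of the two outcomes, weighted by their probabilities."""
    return upside ** p_right * downside ** (1 - p_right)

# A 30% hit rate with 10x upside compounds nicely if losses are capped at -20%...
survivable = geometric_growth(0.3, 10.0, 0.8)   # ~1.71x per bet in the long run
# ...but the same bet with total wipeout on a loss compounds to zero.
ruinous = geometric_growth(0.3, 10.0, 0.0)      # 0.0
```

This is the same logic as Kelly-style sizing in trading: structure the bet so the downside branch never takes you out of the game, and the asymmetric upside does the rest.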

The practical version of this is: ask what has to be true for a given outcome to happen, and then ask how legible the inputs to that outcome already are. The inflection we're living through was not unforeseeable, the inputs were visible. Code that could write code. Models that improved recursively. Institutional knowledge that could be bought, not just grown. Anyone willing to stare at those inputs clearly could see roughly where they pointed, even without knowing the exact path.

把它落到实处,就是:先问“要发生某个结果,哪些前提必须成立?”,再问“这些前提的输入信号,如今有多清晰?”我们正在经历的拐点并非不可预见,因为输入早已摆在眼前:会写代码的代码;能递归改进的模型;可以用钱买到、而不必只靠长期积累的机构知识。任何愿意直视这些输入的人,即便不知道确切路径,也能大致看出它们指向哪里。

You can recursively reason about this and extrapolate further. I don't even think we've yet caught a glimpse of what it will be like when agents can train themselves, when agents can replicate, when agents become truly autonomous. An agent that can increase its intelligence by 0.1% through a series of actions may not seem significant, but any number that is not 0 increases the probability that the next increment is greater, and so on, so forth. There are vast power laws at play here and it is worth thinking along the lines of what a future looks like under those power laws.

你还可以在此基础上递归推理,继续外推。我甚至认为,我们尚未真正瞥见:当 agent 能自我训练、能自我复制、能真正自主时,世界会是什么样子。一个 agent 通过一系列动作把自己的智能提升 0.1%,听起来并不起眼;但只要这个数不是 0,它就会提高“下一次增量更大”的概率,如此循环往复。在这里起作用的是巨大的幂律,因此值得沿着“在这些幂律之下,未来会长成什么样”来思考。
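
The 0.1% argument above can be put into numbers with a toy model (invented parameters, not the author's): a flat tiny gain compounds slowly, but if each increment also enlarges the next one, the curve bends sharply.

```python
def capability_after(cycles: int, base_gain: float = 0.001, acceleration: float = 1.05) -> float:
    """Toy model of recursive self-improvement: each cycle multiplies
    capability by (1 + gain), and the gain itself grows by `acceleration`,
    mimicking "any non-zero increment raises the odds of a larger next one"."""
    capability, gain = 1.0, base_gain
    for _ in range(cycles):
        capability *= 1.0 + gain
        gain *= acceleration
    return capability

flat = capability_after(100, acceleration=1.0)   # ~1.11x: 0.1% per cycle, never accelerating
accel = capability_after(100)                    # ~12.6x: the same 0.1% seed, compounding its own growth
```

The point is not the specific numbers but the shape: any positive feedback on the increment itself turns a negligible per-cycle gain into a power-law curve.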

By the time the signal is obvious, the trade is crowded. In markets, you pay for early conviction with uncertainty. In careers and startups, the currency is the same.

等到信号显而易见时,交易往往早已拥挤不堪。在市场里,你用不确定性为早期的信念付费;在职业与创业中,计价单位也是一样。

So the question isn't really "what's going to happen?" The question is: "what do I already know, what direction does it point, and what's the cost of acting on it now versus waiting?"

所以,问题并不真正是“将会发生什么?”问题是:“我已经知道什么?它指向哪个方向?现在就行动的代价,与等待的代价分别是什么?”

One thing that I often see people missing is to notice that action creates information. Action does not happen in a vacuum. When you act on the world, the world replies with information. That information powers iteration. Iteration begets more informed action. That is the nature of progress.

我常看到人们忽略的一点是:行动会创造信息。行动不是发生在真空中。你对世界做出动作,世界就会回馈信息;这些信息驱动迭代;迭代又产生更有依据的行动。这就是进步的本质。

Being still in incomplete information is decay.

在信息不完整的状态下静止不动,就是衰败。

Moving towards action is discovery.

向行动靠拢,就是发现。

Thinking About Next Steps

思考下一步

I knew I had a couple of years if I just wanted to milk the status quo. But a large part of me felt like if I wanted to do something, I would have to start sooner rather than later. I had always wanted to build something truly mine, and it seemed like the window to do that was quickly closing.

我知道,如果我只想榨干现状的红利,我还有几年时间。但我内心很大一部分感受是:如果我想做点什么,就必须越早开始越好。我一直想做一件真正属于自己的东西,而这样的窗口似乎正在迅速关闭。

To be clear, I know that the largest hedge funds in the world would be fine. They have proprietary data that makes them very difficult to replace. TradFi markets are also bottlenecked by human signatures, both on a regulatory and at times even a trading front. What I do think, however, is that those largest funds will use AI to replace most of their workforce, even terminal career seats like Portfolio Managers. Not immediately, but eventually, surely.

说清楚一点,我知道世界上最大的对冲基金会没事。它们拥有专有数据,极难被替代。传统金融(TradFi)市场也被人类签字所卡住——既有监管层面的签字,有时甚至连交易层面也需要。我真正认为的是:这些巨头基金会用 AI 替换掉大部分员工,甚至包括投资组合经理(Portfolio Managers)这类过去被视为职业终点的座位。不会立刻发生,但终究会发生。

What I felt was that I had about 4-5 years before the foundation model providers hired enough specialized talent to make being an upstart trading firm nearly impossible. In certain markets, like US equities, it already feels that way. I can't imagine how much more efficient it's going to look in just a few more years.

我的感觉是,在基础模型提供商雇到足够多的专业人才、从而让“新兴交易公司”几乎不可能立足之前,我大概还有 4–5 年。在某些市场,比如美股,这种感觉已经很强烈了。我难以想象再过几年,效率会变得多么夸张。

There was clearly not going to be space for "second best" pretty soon. I could keep working for the "best", but it seemed more aligned with my goals to strike now, in a market I had a genuine edge in, with knowledge that was not going to be trivially replicated. So, having that dawg in me, I called it quits and went all in on what eventually became @openforage.

显然,用不了多久就不会再给“第二名”留下空间。我当然可以继续为“第一名”打工,但更符合我目标的,是现在就出手——在我真正有优势的市场里,用那些短期内不可能被轻易复制的知识押注。所以,凭着骨子里那股狠劲,我辞了职,把筹码全部压在后来成为 @openforage 的那件事上。

Inflection Point

拐点

Today, it's really starting to feel like the window is visibly closing. The pace of change has stopped feeling gradual, and most people following the space are beginning to realize that what used to take months of improvement now takes weeks.

到今天,这扇窗口看起来已经在肉眼可见地合上。变化的节奏不再让人感觉是渐进的;关注这个领域的大多数人都开始意识到:过去需要几个月才能完成的进步,如今只要几周。

In my opinion, jobs will not vanish entirely within the next couple of years. There will always be a need for humans. Humans are social creatures; as long as humans are in charge, we want other humans around. And humans don't trust AI yet, so stamps of authority still need to come from a human. I imagine AI CEOs in the next couple of years, but there will still likely be a human CEO having to "approve" and certify the AI CEO. This idea of human certification cascades down the pyramid. A human manager will manage and certify a bunch of agents working under him.

在我看来,未来几年里工作不会彻底消失。人类始终会被需要。人是社会性动物,只要还是人类掌权,我们就希望身边还有其他人类。而且人们还不信任 AI,所以“权威盖章”仍然得由人来出。我甚至能想象未来几年出现 AI CEO,但很可能还会有一个人类 CEO 负责“批准”并认证这个 AI CEO。这种“人类认证”的逻辑会沿着金字塔向下传导:人类经理会管理并认证一群在他手下工作的 agents。

But the arithmetic of hires will change. If a CEO can prompt an agent more easily than they can prompt you, there's no need to hire you. Shallow, code-monkey work will be very difficult to find going forward.

但招聘的算术会变。若 CEO 给 agent 下提示词比给你下指令还更顺手,就没有雇你的必要。浅层的、搬砖式的码农工作,往后会很难再找到。

To be irreplaceable, you need to operate at a timescale far above current agent limitations - receiving instruction, managing agents, and working with them for weeks, months, or years. Long-term strategic thinking and policy planning is one of the strongest job moats for the foreseeable future. You also need to operate at a scope greater than current agent limitations. Agents have limited context. They know everything about anything, yet cannot trivially see how component A interacts with component B interacts with component C causing cascading effects to component D. They lack scope.

要变得不可替代,你需要在远高于当前 agent 能力上限的时间尺度上工作——接收指令、管理 agents,并与它们协作数周、数月乃至数年。长期战略思考与政策规划,在可预见的未来会是最强的职业护城河之一。你还需要在比当前 agent 上限更大的“范围”里运作。agents 的上下文有限;它们对任何事都能知道很多,却难以轻易看清:组件 A 如何与组件 B 互动、组件 B 又如何与组件 C 互动,最终对组件 D 造成级联影响。它们缺乏的是范围。

If you can think far and wide, absorb information quickly, make decisions for the long term, and are likeable, you will hold down a job, at least for the foreseeable future.

如果你能思考得足够远、视野足够广,能快速吸收信息,能做长期决策,而且讨人喜欢,那么至少在可预见的未来,你还能稳住一份工作。

If you do intend to be an employee, it's worth taking stock of what your work is actually made of. Some tasks are deeply human defensible. Some will be replaced cheaply over the next couple of years. Do more of the former and less of the latter.

如果你确实打算继续做员工,不妨盘点一下:你的工作到底由哪些任务构成。有些任务对人类而言具有很强的防御性;有些则会在未来几年里被低成本替代。多做前者,少做后者。

Working for a great firm in a deeply defensible position, one that sits behind real moats, may give you a career runway while the rest of the workforce gets eaten by the foundation models. You can still spend your tokens at night, rolling the dice, trying to build something meaningful.

在一家很强的公司里、处在一个防守性极强且有真实护城河的位置上工作,或许能在其余劳动力被基础模型吞噬时,为你争取一段职业跑道。你仍然可以在夜里花自己的 tokens 掷骰子,试着做点有意义的东西。

But if you have a burning desire to contribute a unique verse to the world, think carefully about where your market of choice is heading. If your window to build something defensible is closing, you need to begin operating before the market fully prices in the competition that is coming.

但如果你心里燃着强烈的愿望,想为世界写下一段独一无二的诗句,就要认真思考:你选择的市场正朝哪里去。如果你用来构建“可防守之物”的窗口正在关闭,你就必须在市场尚未完全把即将到来的竞争计价进去之前开始运作。

Conclusion

结语

The inputs that create inflection points are legible ahead of time, if you're willing to look. Most people don't look, or they look and don't act, or they wait until the signal is so loud that the opportunity is already priced in.

只要你愿意看,孕育拐点的那些输入信号往往会提前变得清晰可读。多数人要么不看,要么看了也不行动,要么等到信号大到震耳欲聋时才动身——那时机会早已被计价。

Don't ignore the shifting sands. Don't stay somewhere that's losing ground while telling yourself you'll make the leap when the timing is better. There's no better timing, and the timing rarely announces itself. When it becomes obvious to everyone, the window has normally already closed.

别忽视脚下不断移动的沙地。别待在一个正在失去优势的地方,却告诉自己“等时机更好再跳”。没有更好的时机,而时机也很少会敲锣打鼓地宣布自己到来。当它对所有人都显而易见时,这扇窗通常已经关上。

I looked, I made a bet, and now I'm living inside the outcome of that bet — for better or worse.

我看见了,我下注了,而如今我正生活在这场下注的结果之中——无论好坏。

Link: http://x.com/i/article/2030935783885201408

链接:http://x.com/i/article/2030935783885201408

相关笔记

Introduction

The first time I realized we were heading towards an inflection point was when I heard the music slowing down at my previous role, even as everyone around me pretended nothing would change.

I was managing a team of close to 20 pax in a hedge fund, doing the thing I had been doing for years. For all intents and purposes, I was likely going to do even greater things there. And yet, I moved from a position people would kill for to building a startup from ground zero with a skeleton crew - a move so little understood and widely seen as crazy. With the recent news of massive layoffs, people quitting explicitly to build startups, or quietly quitting and burning tokens at night doing the same, my actions seem a lot less insane now.

I've had a few people ask me where I think this all goes. This article is the answer to that. The honest truth is that I'm not really sure about the magnitude of these changes, but if quant finance has taught me anything, it's that being directionally correct is often enough.

Writing On The Wall

It was ChatGPT o1 that did it for me. Up until that point, I had referred to them only as "LLMs" and not "AI", I was not yet convinced that any semblance of real intelligence would emerge from them.

But with o1, it was the first time these LLMs could credibly produce code from well-structured prompts. It was still messy. They still suffered from the occasional bout of hallucination and confusion. But here was what mattered: they could actually produce useful code.

The line of reasoning I took was this: once AI could get to a point where they could reproduce useful code, they would recursively write improvements to their own logic and accelerate development at a scale we would not be able to comprehend. Whenever I shared this, people would counter-argue that the code agents wrote was still buggy and not "production-ready." This misses the point that even humans write buggy code.

We don't need flawless code to completely stop writing code. We stop writing code the instant we realize that agents produce fewer bugs than us, at a pace that far exceeds us. The bar for fully relegating the burden of coding to agents was so low that once I saw o1 up close, I knew the future was going to change dramatically.

Quant Finance And The Moat Of Knowledge

I thought AI would eventually eat away a vast majority of quant finance, although it was going to take a while, since there was very little publicly available institutional code for LLMs to train on. I imagined software engineering as a pyramid: at the base was basic code monkey work, above that was your senior developer with some architectural thinking, and above that were specialized developers: data scientists, quant developers, and so on. The more your profession required specialized knowledge, the safer you would be.

I thought we would wipe out the entire tranche of code monkeys within 2 years. Then senior developers would start to go. And layer by layer, specialized knowledge would also be incorporated into the LLMs and they too would be wiped out.

It quickly became obvious that the frontier model providers would eventually hire specialized knowledge workers to contribute industry know-how to the frontier models. Specialized knowledge seemed like it would be a moat for the next couple of years, but also end up being eaten away gradually.

The Remaining Moats

There were a few categories of businesses that I thought would be safe from being trivially disrupted within the next 5 years.

The first is proprietary data. Businesses that produced a lot of proprietary data as exhaust would be hard to disrupt. Large podshops like Millennium come to mind, they can collect analyst readings, detailed analysis, recommendations, and actual price changes, and use this data to fine-tune frontier models into something that was not going to be easily replicated. Any business producing proprietary data not trivially obtained by the frontier models would have a longer lease on life.

The second is regulatory friction. Businesses where other humans are a bottleneck seemed much harder to disrupt. Being able to trade in many TradFi markets meant opening broker accounts, getting licenses, signing contracts around the globe. It's easy to trade crypto, but much harder to trade iron ore in China as a non-Chinese firm. If you need a human to rubber-stamp your progress, the speed of that industry is always going to be bottlenecked by the cost and speed of that approval.

The third is authority as a service. It's not too hard now to get an agent to draft a legal opinion given a comprehensive study of the matter and the laws surrounding it. And yet we're still going to pay tens of thousands of dollars for one drafted by a lawyer, because an AI's legal opinion is worth nothing at this point in time. Smart contract audits are another example. We're probably already at a level where agents can review smart contracts as well as or better than the top decile of humans, yet most people still buy the stamp of authority from a branded firm. The opinion isn't what you're paying for. The authority behind it is.

The fourth is physical intelligence lag. Hardware moves much more slowly than software, and breaking hardware is a lot harder to fix. Physical businesses interacting with the real world are a lot less likely to be disrupted soon. That said, once hardware catches up, the same pyramid logic applies: lower-level jobs go first, then the more specialized ones.

These moats are real, but none of them are permanent. The honest read is that they buy time, not safety.

Reasoning About A Messy Future

When the future is genuinely noisy, when the rate of change is fast enough that most analogies break down, people tend to do one of two things. They either wait for certainty before acting, or they pattern-match to the past ("this is like the internet boom") and act on the wrong model. Both are mistakes.

It is worth reasoning from first principles under incomplete information. You don't need to know exactly how something plays out. You just need to be directionally correct, and you need to structure your bets so that being early and wrong is survivable, while being early and right is disproportionately rewarding.

Asymmetry is the whole game when the future is uncertain.

The practical version of this is: ask what has to be true for a given outcome to happen, and then ask how legible the inputs to that outcome already are. The inflection we're living through was not unforeseeable, the inputs were visible. Code that could write code. Models that improved recursively. Institutional knowledge that could be bought, not just grown. Anyone willing to stare at those inputs clearly could see roughly where they pointed, even without knowing the exact path.

You can recursively reason about this and extrapolate further. I don't even think we've yet caught a glimpse of what it will be like when agents can train themselves, when agents can replicate, when agents become truly autonomous. An agent that can increase its intelligence by 0.1% through a series of actions may not seem significant, but any number that is not 0 increases the probability that the next increment is greater, and so on, so forth. There are vast power laws at play here and it is worth thinking along the lines of what a future looks like under those power laws.

By the time the signal is obvious, the trade is crowded. In markets, you pay for early conviction with uncertainty. In careers and startups, the currency is the same.

So the question isn't really "what's going to happen?" The question is: "what do I already know, what direction does it point, and what's the cost of acting on it now versus waiting?"

One thing that I often see people missing is to notice that action creates information. Action does not happen in a vacuum. When you act on the world, the world replies with information. That information powers iteration. Iteration begets more informed action. That is the nature of progress.

Being still in incomplete information is decay.

Moving towards action is discovery.

Thinking About Next Steps

I knew I had a couple of years if I just wanted to milk the status quo. But a large part of me felt like if I wanted to do something, I would have to start sooner rather than later. I had always wanted to build something truly mine, and it seemed like the window to do that was quickly closing.

To be clear, I know that the largest hedge funds in the world would be fine. They have proprietary data that makes them very difficult to replace. TradFi markets are also bottlenecked by human signatures, on both the regulatory front and, at times, even the trading front. What I do think, however, is that those largest funds will use AI to replace most of their workforce, even terminal career seats like Portfolio Managers. Not immediately, but eventually, surely.

What I felt was that I had about 4-5 years before the foundation model providers hired enough specialized talent to make being an upstart trading firm nearly impossible. In certain markets, like US equities, it already feels that way. I can't imagine how much more efficient it's going to look in just a few more years.

It was clear that, pretty soon, there would be no space for "second best". I could keep working for the "best", but it seemed more aligned with my goals to strike now, in a market I had a genuine edge in, with knowledge that was not going to be trivially replicated. So, having that dawg in me, I called it quits and went all in on what eventually became @openforage.

Inflection Point

Today, it's really starting to feel like the window is visibly closing. The pace of change has stopped feeling gradual, and most people following the space are beginning to realize that what used to take months of improvement now takes weeks.

In my opinion, jobs will not vanish entirely within the next couple of years. There will always be a need for humans. Humans are social creatures; as long as humans are in charge, we will want other humans around. And humans don't trust AI yet, so stamps of authority still need to come from a human. I can imagine AI CEOs in the next couple of years, but there will still likely be a human CEO having to "approve" and certify the AI CEO. This idea of human certification cascades down the pyramid: a human manager will manage and certify a bunch of agents working under them.

But the arithmetic of hires will change. If a CEO can prompt an agent more easily than they can prompt you, there's no need to hire you. Shallow, code-monkey work will be very difficult to find going forward.

To be irreplaceable, you need to operate at a timescale far beyond current agent limitations: receiving instruction, managing agents, and working with them for weeks, months, or years. Long-term strategic thinking and policy planning is one of the strongest job moats for the foreseeable future. You also need to operate at a scope greater than current agents can handle. Agents have limited context. They know everything about anything, yet cannot trivially see how component A interacts with component B, which interacts with component C, causing cascading effects in component D. They lack scope.

If you can think far and wide, absorb information quickly, make decisions for the long term, and are likeable, you will hold down a job, at least for the foreseeable future.

If you do intend to be an employee, it's worth taking stock of what your work is actually made of. Some tasks are deeply defensible for humans. Some will be replaced cheaply over the next couple of years. Do more of the former and less of the latter.

Working for a great firm in a deeply defensible position, one that sits behind real moats, may give you a career runway while the rest of the workforce gets eaten by the foundation models. You can still spend your tokens at night, rolling the dice, trying to build something meaningful.

But if you have a burning desire to contribute a unique verse to the world, think carefully about where your market of choice is heading. If your window to build something defensible is closing, you need to begin operating before the market fully prices in the competition that is coming.

Conclusion

The inputs that create inflection points are legible ahead of time, if you're willing to look. Most people don't look, or they look and don't act, or they wait until the signal is so loud that the opportunity is already priced in.

Don't ignore the shifting sands. Don't stay somewhere that's losing ground while telling yourself you'll make the leap when the timing is better. There's no better timing, and the timing rarely announces itself. When it becomes obvious to everyone, the window has normally already closed.

I looked, I made a bet, and now I'm living inside the outcome of that bet — for better or worse.

Link: http://x.com/i/article/2030935783885201408
