
AI 时代最值钱的技能不是 prompt engineering,是"胡扯探测器"

Ryan Holiday 用亲手核实一条林肯引文的出处、戳穿一组海军学院死亡数据拼接的经历证明:AI 越强,人类的批判性思维、领域专长和"闻出胡扯"的能力就越值钱。

2026-02-17

核心观点

  • AI 的"自信式胡说"比无知更危险:ChatGPT 先说托尔斯泰谈狄更斯,被质疑后改口,再被追问就说引文不存在——而人工翻查 800 页传记证明引文确实存在。问题不是 AI 不知道,是它在不知道时表现得太像知道了。
  • "看起来像推理"的统计拼接是新型骗术:海军学院的例子很典型,6% 的死亡率 × 7500 毕业生 = 450 人——数学正确,逻辑荒谬。两个数字来自不同口径,被 AI 自信地拼在一起。如果你不懂领域知识,你根本看不出问题。这比明显的错误更可怕。
  • "AI slop"正在淹没一切沟通渠道:不只是社交媒体——同事邮件、企业新闻稿、记者稿件都在被 AI 代笔。"crucial""unlock""harness""a tapestry of""in today's fast-paced world"——这些 AI 花腔已经成了识别信号。
  • 纸质笔记卡不是反技术,是刻意练习:Holiday 的核心论点被包装成了"用纸卡做研究"的故事,但真正的点是:手写迫使你与材料纠缠更久、理解更深。这是 deliberate practice 的变体——慢就是快。
  • "没有人是偶然变得智慧的"——但 AI 让人以为可以:引用塞涅卡的话点睛。AI 给出 30 秒答案的能力,正在侵蚀人类愿意花 10 小时去真正理解一个问题的意愿。便捷性是智慧的敌人。
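上面第二条说的"口径不匹配",用几行代码就能把问题演示出来。以下数字中 6% 和 7,500 取自原文,"实际参战总数"是为演示虚构的假设值,并非真实统计:

```python
# 演示"口径不匹配":为什么 6% × 7500 ≈ 450 的推算不成立。
served_death_rate = 0.06    # 6%:适用于"实际参战的海军学院毕业生"这一总体
wartime_graduates = 7_500   # 7,500:仅指 1940-1945 年间毕业的人数

# AI 的拼接:把比例套在错误的总体上,数学正确,口径错误
ai_estimate = served_death_rate * wartime_graduates   # ≈ 450

# 正确的算式需要"实际参战的毕业生总数"。文中提到约 54 届毕业班服役,
# 该总数远大于战时毕业人数;这里用一个虚构的假设值示意差距:
served_total_hypothetical = 30_000
correct_form = served_death_rate * served_total_hypothetical  # ≈ 1,800

print(ai_estimate, correct_form)
```

同一个 6% 套在不同总体上,结论可以差出数倍:这正是"数学正确、逻辑荒谬"的含义。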

跟我们的关联

  • ATou 作为 Context Engineer,本身就是"人类胡扯探测器 + AI 操控者"的结合体。这篇文章描述的恰恰是 Context Engineer 的核心价值:不是会写 prompt,是知道什么时候 AI 在胡说。
  • Neta 的 AI 角色会不会产生"自信式胡说"?用户跟 AI 角色聊天时,角色如果编造了不存在的共同记忆或虚假事实,用户体验会严重受损。这是产品层面需要解决的信任问题。
  • 团队招人时,"领域专长 + 批判性思维"应该比"会用 AI 工具"排在更前面。会用 AI 的人到处都是,能识别 AI 胡说的人才是稀缺资源。
  • 出海做品牌时,海外用户对 AI 生成内容的敏感度越来越高——那些 AI 花腔会立刻被识别出来。Neta 的对外内容必须过"人味儿"检测。

讨论引子

1. ATou 有没有被 AI 的"自信式胡说"骗过的经历?是怎么发现的?这种经验有没有系统化地沉淀到工作流里?
2. Neta 的 AI 角色在跟用户聊天时,如何平衡"流畅自信的对话体验"和"不编造不确定信息"?这两者本质上是矛盾的——怎么取舍?
3. 如果"胡扯探测器"是 AI 时代最值钱的技能,团队应该怎么刻意练习这项能力?有没有可能设计一个"AI 生成内容 vs 人类原创内容"的内部辨识训练?
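第 3 个问题提到的内部辨识训练,可以从一个最简单的启发式起步:把原文点名的 AI 花腔词组做成标记器。以下只是示意草稿,词表来自原文举例,函数名与示例文本均为假设;命中多不等于一定是 AI 写的,只说明值得人工复核:

```python
# 极简"AI 花腔"标记器:统计文本命中了多少原文点名的套话。
# 仅为启发式示意,不是可靠的 AI 检测器。
AI_FLOURISHES = [
    "crucial", "unlock", "harness",
    "a tapestry of", "in today's fast-paced world",
]

def flag_flourishes(text: str) -> list[str]:
    """返回文本中命中的套话列表(不区分大小写)。"""
    lowered = text.lower()
    return [p for p in AI_FLOURISHES if p in lowered]

# 一段典型的 AI 味推介(示例文本为虚构):
pitch = ("In today's fast-paced world, this book will unlock "
         "a tapestry of crucial insights.")
hits = flag_flourishes(pitch)
print(hits)  # 命中 4 个套话,值得人工复核
```

真正的训练当然不能只靠词表(AI 花腔会变),但"先机器标记、再人工判断"的流程可以沉淀进团队工作流。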

这项简单技能,让你在 AI 时代领先一步

我所有的研究都写在纸质索引卡上。

我只读纸质书。

如果我必须读一篇研究论文或文章,我会把它打印出来,用笔逐页通读、做标注。

我眼下正在写的那本书,现在就摊在一块老式的软木公告板上,上面插满了图钉。

我相信,做这些事有很多更轻松、更高效的方式。但我刻意选择更难、更低科技的做法。

不过话说回来,我并不是卢德派,也不觉得成为那样的人有什么可敬或值得称道之处。

本能地抵触并拒绝新技术,从根子上说是愚蠢的——我拒绝这样做。

我花了许多个小时研究 AI 工具和大型语言模型,看看它们能在哪些地方让我做得更好、在哪些地方能帮到我。

在某些情况下,它们确实做到了。今年夏天我们全家去希腊旅行时,我想去的地方有几十个,散落在全国各地,彼此之间没有明显的顺序,也没有路线可循。我把它们全部输入 ChatGPT,让它给出最省时的自驾路线。三十秒之内,它就生成了我自己几乎不可能独立算出来的方案,最终也让我们把想去的点都安排进了行程。

我也和孩子们一起度过了许多个快乐的清晨(以及漫长的车程),让它生成荒诞的图片,或者给我们讲故事。我们用它来做想要制作之物的模型草图,也让它用适合儿童理解的语言解释晦涩的历史概念。

但在另一些时候,我使用 AI 的经历反而让我更确信老办法的价值——比如我想核实并追溯一条关于亚伯拉罕·林肯的引文出处,那句话我早已写在一张索引卡上。ChatGPT 一开始告诉我,这根本不是林肯的事,而是托尔斯泰在谈狄更斯……我提出质疑后,它又试图说这段话出自海伊和尼科莱——林肯的两位秘书。我接着问:那我手里这本书里到底在哪一页能找到?它又说这句话其实并不存在。直到我把一本获奖的、八百页的传记逐页翻回去,我才确认:我那张手写卡片确实是对的。托尔斯泰完全没有参与(虽然他确实写过一句关于林肯的精彩评语);说这句话的是一位与林肯相熟的 19 世纪记者——而且这段引文在许多旧报纸数据库和公版书籍里都很容易找到。

更近一些,为了我正在做的一个项目,我想知道有多少美国海军学院的毕业生在第二次世界大战中丧生。值得肯定的是,ChatGPT 还展示了它的计算过程。它先告诉我:在二战中服役的海军学院毕业生里,有 6% 丧生。然后它又补充:1940 到 1945 年间,大约有 7,500 人从海军学院毕业。接着,它就由这两个数字——非常自信地——推断:大概有 450 名毕业生丧生。

当然,这看起来像是在思考,像是真正的推理。我也能看出数学计算本身没有问题。问题在于:这两个数字其实毫无关联。6% 这个比例,适用于所有实际参战的海军学院出身者;而 7,500 这个数字,说的是战争年代毕业的人数。但我问的根本不是这个,对吧?我恰好从某处读到过:大约有 54 届海军学院毕业班在二战中服役,所以用战时毕业人数去计算战时死亡人数毫无意义。这两个数字完全不相干。再说,我们为什么要估算?如果 6% 这个比例存在,就说明总数是已知的(当然是已知的,退伍军人事务部必须掌握这个统计)。

不管怎样,我真正的解决办法要低科技得多:我直接找到了一块刻着所有名字的纪念牌。

关键在于:如果我之前没有在这些领域读得足够深入——如果我不是大致知道自己在找什么——我就会被糊弄过去。我可能会写下"托尔斯泰称狄更斯为历史上唯一真正的巨人"。如果我不靠自己的脑子,我也可能会被那种看似数学算式、实际上却是胡闹的东西说服。

这正是人们对 AI 的误解之处。关于我们为什么该担心 AI 让我们或某些东西变得过时,讨论很多:它会让人文学科过时;会让书籍、艺术家、知识工作者,乃至“专业”本身都过时。

但恰恰相反!要把这些工具用好——而不是被它们利用——你恰恰需要那些被告知正在过时的东西:广博的通识教育、领域专长、批判性思维;对人类真实表达的语感;以及察觉哪里不对劲的能力。

就在前几天——事实上,这篇文章写到一半的时候——我收到了一封邮件,有人向我推介某本书,希望上 The Daily Stoic 播客?发件地址看起来很正规,推介本身也算有点说服力。但整封邮件充满了那种我认识的任何人类都不会用的 AI 花腔:过度使用“至关重要”“解锁”“驾驭/利用”之类的词;还有像“……的织锦”“在当今快节奏的世界里”这样的句式;以及那些绿色对勾表情符号。

我用 AI 用得足够多,一眼就知道这封推介是 ChatGPT 或 Gemini 写的……也就意味着我可以立刻删掉它。

我们正进入一个 AI 垃圾内容的世界。不只是在社交媒体上。可悲的是,也不只是内容创作者把写作、构思、脚本、推介外包给这些工具。它无处不在:同事发来的邮件、企业的新闻稿、记者、营销人员、政客、意见领袖——你目之所及,很多人都在悄悄把 AI 的“写作”和“思考”冒充成自己的。

所以,这个时代最关键的技能不是提示词工程,也不是编程——而是一套经过精密调校的“胡扯探测器”。它意味着你要足够了解人类真实的思考与写作方式,才能识别胡扯;要读得足够广,才能在自信满满的文字外衣之下看出答案的空洞;要足够懂自己的领域,知道该问什么问题,更重要的是,知道该拒绝哪些答案。

我们需要了解 AI 如何运作、会吐出怎样的答案,才不会被那些懂得利用它的人操控。

我们需要读过足够多的托尔斯泰,才知道某句所谓的托尔斯泰语录听起来就不像托尔斯泰。

我们需要懂得足够多的历史,才能抓住那些被硬生生连接在一起、其实从未有过交集的人物或事件。

我们需要对基础统计有足够理解,才能看出两个无关数字被强行拼接,只为给你一个“答案”。

这就是我们必须愿意去做、也必须主动选择去做的工作……在我的新书 Wisdom Takes Work 里,我引用了塞涅卡的话:“没有人是偶然变得智慧的。”我们必须自己获得它。我们不能把它委托给某个人或某样东西。没有任何技术能替你完成。没有任何 app。没有任何提示词,没有任何捷径、摘要或按部就班的公式。没有任何大语言模型能在三十秒里把它吐给你。

前些时候,我问罗伯特·格林怎么看 AI。“我会想起我 19 岁上大学的时候,”罗伯特说。在一门学习阅读并翻译古希腊文本的课上,“他们给了我们一段修昔底德的文字——在古希腊语里,他是最难读的作家。光那一段,我肯定花了十个小时琢磨翻译……那对我影响巨大。它磨炼了我的品格、耐心和自律,直到今天都在帮我。要是当时我有 ChatGPT,把那段文字扔进去,它立刻给我翻译出来呢?整个思考过程当场就会被彻底抹去。”

这就是为什么我所有的研究都写在纸质索引卡上。它不快,不轻松,也不高效。而这正是重点。用手写下来,会迫使我在更长的时间里与材料纠缠、较劲。它迫使我慢下来。一遍又一遍地回看。沉浸其中。专注、耐心、自律。直到真正理解得更深。

AI 这项前沿技术的讽刺之处在于:它反而让最古老的技能比以往任何时候都更值钱。阅读。思考。掌握知识。品味。理解语境。识别谎言或胡说八道。

机器越来越擅长把自己说得很聪明。

这意味着,我们必须更擅长真正变得聪明。

我们需要判断力,把信号从噪音里分离出来。

我们需要辨别力,知道哪里有点不对劲。

我们需要好奇心,不满足于第一个答案。

我们需要耐心与自律。

我们需要智慧。

比以往任何时候都更需要。

链接:http://x.com/i/article/2021996109342638080


相关笔记

This Simple Skill Will Put You Ahead In The Era Of AI

  • Source: https://x.com/ryanholiday/status/2022732708112212212?s=46
  • Mirror: https://x.com/ryanholiday/status/2022732708112212212?s=46
  • Published: 2026-02-14T18:00:32+00:00
  • Saved: 2026-02-17

Content

I do all my research on physical notecards.

I only read physical books.

If I have to read a research paper or an article, I print it out and go through it with a pen.

The book I am working on now is currently laid out on an old school cork bulletin board covered in push pins.

There are many easier and more efficient ways to do all this, I’m sure. But I do it the more difficult and low-tech way on purpose.

That being said, I am not a luddite and I don’t think there’s anything admirable or impressive about being one.

There is something fundamentally foolish about instinctively resisting and rejecting new technology—and I refuse to do it.

I have spent many hours trying to figure out AI tools and large language models, seeing where they can make me better, where they might help me.

In some cases, they have. On our family trip to Greece this summer, I had dozens of places I wanted to visit, scattered across the country with no obvious order or itinerary to route between them. I fed them all to ChatGPT and asked for the most efficient driving route. In thirty seconds, it produced what would have been extraordinarily difficult for me to figure out on my own and ultimately, allowed us to get everything into the trip that we wanted.

I’ve spent many joyous mornings (and long car rides) with my kids getting it to render ridiculous pictures or tell us stories. We’ve used it to make mockups of things we want to build and had it explain obscure historical concepts in language appropriate for a child.

But in other cases, my use of AI has reassured me of the value of the old techniques, like when I tried to confirm and source a quote about Abraham Lincoln that I had written down on one of my notecards. ChatGPT first told me it wasn’t about Lincoln at all, instead it was Tolstoy speaking of Dickens…and then when I pushed back, it then tried to tell me it was from Hay and Nicolay, two of Lincoln’s secretaries. When I asked what page I could find this on then—my copy in hand—it then told me that the quote didn’t actually exist. Only when I went back through, page by page, an eight-hundred-page prizewinning biography was I able to confirm that my handwritten note card had in fact been correct. Tolstoy was not involved at all (although he has a great line about Lincoln), it was a 19th century journalist who had known Lincoln well—and the quote was easily findable in many old newspaper databases and public domain books.

More recently, for a project I’m currently working on, I wanted to know how many U.S. Naval Academy graduates died in World War II. To its credit, ChatGPT showed its work. First it told me that 6% of Naval Academy graduates who served in World War II died. Then it added that between 1940 and 1945, approximately 7,500 people graduated from the Naval Academy. And from those two numbers, it concluded—very confidently—that about 450 graduates must have died.

Of course, that looks like thinking. It looks like real reasoning. And I could see the math was correct. The problem is that these numbers actually had nothing to do with each other. The 6% figure applies to everyone from the Academy who actually served in the war. The 7,500 figure is how many people graduated during the war years. But that wasn’t the question, was it? I happened to know from something I’d read that around 54 Academy classes served in World War II so using the wartime graduation count to calculate wartime deaths makes no sense. The two numbers are totally unrelated. Also, why are we estimating at all? If the 6% figure exists, that means that the total is a known figure (and of course it is, Veterans Affairs has to know this statistic).

In any case, my actual solution was much more low tech. I just found a plaque that listed all the names.

The point is: If I hadn’t already read deeply in these areas—had I not known roughly what I was looking for—I would have been fooled. I might have written that Tolstoy called Dickens the only real giant of history. If I didn’t have my own brain, I might have been persuaded by what seemed like a math equation but was in fact, nonsense.

This is what people miss about AI. There’s a lot of talk about why we should be worried about AI making us or certain things obsolete. It’s going to make the humanities obsolete. It’s going to make books, artists, knowledge workers, and expertise itself obsolete.

But the opposite is true! To use these tools well—to not be used by them—you need exactly the things we’re told are becoming obsolete. A broad liberal arts education. Domain expertise. Critical thinking. A feel for what humans actually sound like. The ability to spot when something seems off.

Just the other day—while this article was in progress, actually—I got an email from someone pitching me some book for The Daily Stoic podcast? The email address was legitimate. The pitch itself was somewhat compelling. But it was riddled with those AI flourishes that no human I know would ever use. An overuse of words like “crucial,” “unlock,” and “harness.” Phrases like “a tapestry of” and “in today’s fast-paced world.” And those green checkmark emojis.

I’ve used AI enough to know that ChatGPT or Gemini wrote this pitch…which meant I could promptly delete it.

We’re entering a world of AI slop. Not just on social media. It’s not just content creators who are sadly outsourcing their writing and ideating and scripting and pitching to these tools. It’s everywhere. Emails from coworkers. Press releases from corporations. Journalists, marketers, politicians, thought leaders—everywhere you look, people are quietly passing off AI’s “writing” and “thinking” as their own.

So the essential skill of our time isn’t prompt engineering or coding—it’s having a finely tuned bullshit detector. It’s knowing enough about how humans actually think and write to spot bullshit. It’s having read widely enough to recognize when an answer is hollow, even when it’s dressed up in confident prose. It’s understanding your domain well enough to know what questions to ask and, more importantly, which answers to reject.

We need to know how AI works and what kind of answers it spits out so you don’t get manipulated by people who do.

We need to have read enough Tolstoy to know when a Tolstoy quote doesn’t sound like Tolstoy.

We need to know enough history to catch when two figures or events are being linked that never overlapped.

We need to understand basic statistics well enough to spot when two unrelated numbers are being jammed together just to give you an answer.

This is the kind of work we have to be willing to do…that we have to choose to do. In the new book, Wisdom Takes Work, I quote Seneca, “No man was ever wise by chance.” We must get it ourselves. We cannot delegate it to someone or something else. There is no technology that can do it for you. There is no app. There is no prompt, no shortcut or summary or step-by-step formula. There is no LLM that can spit it out in thirty seconds.

A little while back, I asked Robert Greene what he thought about AI. “I think back to when I was 19-years-old and in college,” Robert said. In a class learning to read and translate classical Greek texts, “They gave us a passage of Thucydides, the hardest writer of all to read in ancient Greek. I had this one paragraph I must have spent ten hours trying to translate…That had an incredible impact on me. It developed character, patience, and discipline that helps me even to this day. What if I had ChatGPT, and I put the passage in there, and it gave me the translation right away? The whole thinking process would have been annihilated right there.”

This is why I do all my research on physical notecards. It is not fast, easy, or efficient. And that is the point. Writing things down by hand forces me to engage and struggle with the material for an extended period of time. It forces me to take my time. To go over things again and again. To be immersed. To be focused, patient, and disciplined. To come to understand things deeply.

The irony of AI, this cutting-edge technology, is that it makes the oldest skills more valuable than ever. Reading. Thinking. Knowing things. Having taste. Understanding context. Detecting lies or nonsense.

The machines are getting better at sounding smart.

Which means we need to get better at actually becoming smart.

We need the judgment to separate signal from noise.

We need the discernment to know something seems a little off.

We need the curiosity to not be satisfied with first answers.

We need patience and discipline.

We need wisdom.

Now more than ever.

Link: http://x.com/i/article/2021996109342638080
