🪞 Uota studies · 🧠 ATou studies

The biggest problem with multi-agent setups isn't capability, it's "management"

Once you start running multiple agents at the same time, you're no longer "using a tool" but "running a team": without onboarding, leveling, performance reviews, shared context, and coordination protocols, the result is only more expensive, messier, and more duplicated.

2026-02-08 Original link ↗

Key takeaways

  • Agent management = workforce management, not prompt engineering. Many people assume the hard part of multi-agent work is "making it smarter," but the author nails the real trap: it's an organizational problem (duplicated work / forgotten context / overwriting each other / inconsistent strategy), exactly the failure modes of human teams.
  • SOUL.md is not decoration; it's the job description plus the culture. When "hiring," define the role specifically (domain + output type) and thoroughly: philosophy / boundaries / never-dos / output style / inspiration anchors. A half-hearted SOUL.md is like onboarding a new hire with an unclear resume.
  • Leveling and delegation are more realistic than "full autonomy." The four levels (Observer → Advisor → Operator → Autonomous) treat permissions as a function of trust: observe delivery first, then grant authority step by step, and demote when quality slips. Far more stable than letting a single agent take over everything.
  • Shared context kills cold starts and anchors organizational memory. A central directory plus per-project structure (ACCESS / CONTEXT / research) lets agents hand off to each other instead of being briefed from zero every time. The key is writing new learnings into files rather than leaving them in chat.
  • Visibility and coordination protocols set the ceiling on scale. A registry (who's good at what, who's busy) plus explicit handoff protocols lets agents find collaborators on their own, so you don't stay the permanent dispatcher.

Relevance to us

🪞Uota

  • The current setup (AGENTS/SOUL/USER/MEMORY + skills) already lands on the right answer: memory goes into files, behavior goes into principles. The next step isn't inventing new concepts but turning leveling, reviews, and delegation boundaries into institutionalized interfaces.
  • Strongly recommend productizing the demotion mechanism: when an agent starts rushing or its quality drops, automatically trigger "back to L2 + observation period + retrospective template".

👤ATou / 🧠Neta

  • This framework transfers to managing human teams: clear roles (job descriptions), tiered delegation (Owner / non-Owner), performance retrospectives (promotion and demotion). It's fundamentally the same playbook.
  • For multi-threaded battlegrounds like overseas growth and brand marketing: shared context and a registry reduce information silos and duplicated research, letting every workstream reuse what others have already found.

Discussion starters

  • In which domains would you let an agent become an Operator (autonomous execution)? How should guardrails be drawn along reversibility, loss ceiling, and compliance risk?
  • When multiple agents' outputs contradict each other, do you trust the stronger model or the better coordination protocol (cross-validation / evidence chains / an auditor role)?
  • How granular should shared context be before it turns into noise? What must be written down, and what should never be?

My Complete Guide to Managing OpenClaw Agent Teams

Over the past two weeks since I started using @openclaw (fka Clawdbot), I've been expanding my team of agents. In this article I'll walk through how I've set them up and the systems I use to manage them for maximum output.

Over the course of my career I've managed a lot of teams, both big and small, and my general approach with agents is to manage them the same way I'd manage human employees.

Turns out the hard part of AI isn't the intelligence, it's the management.

What you quickly learn about running multiple AI agents is that without structure, they're just expensive chaos.

They will duplicate work, forget context, overwrite each other, and make decisions that contradict previously agreed-upon strategies. It was painfully annoying at first.

These are basically the same problems dysfunctional human teams have. I've seen them firsthand, so it was a problem I was confident I could solve.

So I built a system that includes:

Agent "hiring" and onboarding

Leveling system

Performance reviews

Shared context

Coordination protocols

Let's walk through each one.

Agent "Hiring" and Onboarding

The first step in the process is defining the role for each agent and building its SOUL.md file: this is where you store the important context about the agent, its personality, its skills, its output style, and everything else about it.

You DO NOT want to rush the creation of SOUL.md. It's like rushing the hiring process IRL: if you hire the first candidate who applies, you're not going to get top talent.

A couple of simple rules for defining agents:

Be as specific as possible. For example, instead of a generic Research Analyst agent, you're better off pinning it to a specific domain and type of analysis. Better: SaaS Equity Research Analyst. Now the agent is much more dialed into that specialty.

Be thorough. Give the agent an origin story (get creative) and explain how that story shows up in its work; spell out its core philosophy (what are its north-star guiding principles), inspirational anchors (whose thinking does it model), skills and methods, behavior rules, never-dos, and so on. Put in the work as if you were describing the absolute ideal person for this role.

Get feedback. Once you have a first draft, run it through a couple of LLMs for feedback to make it even more robust.
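The field list above can be turned into a small, repeatable template generator. This is a minimal sketch, not the author's actual tooling; every field name beyond SOUL.md itself is an assumption drawn from the rules above:

```python
from dataclasses import dataclass, field

@dataclass
class Soul:
    """Illustrative SOUL.md fields, mirroring the rules above (names are assumptions)."""
    role: str
    origin_story: str
    core_philosophy: str
    inspiration_anchors: list[str] = field(default_factory=list)
    skills_and_methods: list[str] = field(default_factory=list)
    behavior_rules: list[str] = field(default_factory=list)
    never_dos: list[str] = field(default_factory=list)

    def to_markdown(self) -> str:
        # Render each field as its own section of the SOUL.md file.
        def sec(title: str, body) -> str:
            text = body if isinstance(body, str) else "\n".join(f"- {x}" for x in body)
            return f"## {title}\n{text}"
        return "\n\n".join([
            f"# SOUL.md - {self.role}",
            sec("Origin Story", self.origin_story),
            sec("Core Philosophy", self.core_philosophy),
            sec("Inspiration Anchors", self.inspiration_anchors),
            sec("Skills & Methods", self.skills_and_methods),
            sec("Behavior Rules", self.behavior_rules),
            sec("Never Do", self.never_dos),
        ])

# Hypothetical example agent, following the "be specific" rule.
soul = Soul(
    role="SaaS Equity Research Analyst",
    origin_story="A buy-side analyst who lived through the 2022 SaaS multiple compression.",
    core_philosophy="Unit economics over narrative.",
    never_dos=["Never quote revenue figures without a source."],
)
```

Being this explicit up front is what makes "hiring" repeatable: every new agent fills in the same template, just with different answers.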

Once the SOUL.md is ready, you need a proper onboarding process: setting up the agent's files (consistent with the other agents), configuring the access it needs, announcing it to the other agents so they all know the new agent exists, and including it in the workflow system.

Just like companies have onboarding checklists, you need the same in your agent management system.
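The checklist idea can be made literal. The steps mirror the onboarding paragraph above; the representation is mine, not the author's:

```python
# Onboarding steps, taken from the process described above.
ONBOARDING_STEPS = [
    "create agent files consistent with the other agents",
    "grant the access it needs",
    "announce the new agent to the rest of the team",
    "add it to the workflow system",
]

def outstanding(done: set[str]) -> list[str]:
    """Return the onboarding steps not yet completed, in order."""
    return [step for step in ONBOARDING_STEPS if step not in done]
```

An agent only enters the workflow once `outstanding(...)` is empty, the same way a new hire isn't assigned work until the checklist is signed off.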

Agent Leveling Framework

When you hire a new employee IRL, you don't hand them the root keys on day 1; you loop them in as you gain trust over time. You should do the same with your agents.

I've set up a 4-level system for my agents: each starts at level 1 and gets up-leveled through performance reviews.

Observer: can perform assigned tasks (e.g., research, writing, reviews), but cannot take action

Advisor: can perform assigned tasks, recommend actions, and execute on approval

Operator: can autonomously execute on assigned projects within (specifically defined) guardrails and reports out daily on progress

Autonomous: has full authority over permissioned domains (also subject to guardrails)
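The four levels map naturally onto a permission gate. A minimal sketch; the guardrail check here is a boolean stand-in for the author's per-domain rules:

```python
from enum import IntEnum

class Level(IntEnum):
    OBSERVER = 1    # perform assigned tasks only; no actions
    ADVISOR = 2     # recommend actions; execute on approval
    OPERATOR = 3    # execute autonomously within guardrails; report daily
    AUTONOMOUS = 4  # full authority over permissioned domains (still guardrailed)

def may_act(level: Level, approved: bool = False, within_guardrails: bool = True) -> bool:
    """Decide whether an agent may take an action, as a function of its trust level."""
    if level == Level.OBSERVER:
        return False                # can research and write, never act
    if level == Level.ADVISOR:
        return approved             # needs an explicit human yes
    return within_guardrails        # OPERATOR / AUTONOMOUS stay inside the rails
```

Using an IntEnum makes promotion and demotion a plain increment or decrement, which keeps the review process mechanical.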

Trust is earned, not granted.

Agent Performance Reviews

Some new agents will crush it right out of the gate; others (a content writer, for example) may take more time to find their groove.

This is where performance reviews come into play. I've just started doing this: whenever I initiate a review, we go through a summary of the output and rate it, then decide based on that rating whether to up-level. The feedback is then passed back to the agent for continual learning and improvement.

Agents can move in both directions. I had a content agent at L3 that started rushing its work; quality dropped, so I bumped it back to L2 for a week.
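Promotion and demotion reduce to one function over the review rating. The 1-5 scale and both thresholds are my assumptions; the author doesn't specify a scoring scheme:

```python
def next_level(level: int, rating: float) -> int:
    """Up-level on a strong review, demote on a weak one; clamp to levels 1-4.
    Thresholds (>= 4.0 promotes, < 2.5 demotes) are illustrative, not the author's."""
    if rating >= 4.0:
        return min(level + 1, 4)
    if rating < 2.5:
        return max(level - 1, 1)
    return level

# The content-agent story above: an L3 agent rushing its work gets a weak review.
demoted = next_level(3, 2.0)
```

Making the rule explicit is what keeps demotion feeling procedural rather than punitive, which matters if you want agents (or people) to trust the process.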

You need to treat it just like managing humans.

Shared Context Changed Everything

One of my big breakthroughs was treating project context like a shared workspace: a central directory any agent can read and update. When one agent learns something, it logs it so the next agent has full context.

This eliminated the cold starts whenever I brought agents onto projects.

The key here is a file structure for your context that makes sense. Each project has its own folder with the following structure:

ACCESS.md - outlines which agents have access

CONTEXT.md - the working context for that project

research/ - the folder with all supporting documents

Any agent can read any project unless ACCESS.md denies it; when an agent learns new context, it updates CONTEXT.md. Context files carry a "Last updated by" header so I know who touched them.
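That read/write convention fits in a few lines. A sketch, assuming a simple `deny: <agent>` line format in ACCESS.md (the author doesn't specify one):

```python
from pathlib import Path

def update_context(project: Path, agent: str, learning: str) -> None:
    """Append a learning to a project's CONTEXT.md and stamp who touched it last,
    honoring ACCESS.md denials. The deny-line format is an assumption."""
    access = project / "ACCESS.md"
    if access.exists() and f"deny: {agent}" in access.read_text():
        raise PermissionError(f"{agent} is denied access to {project.name}")
    ctx = project / "CONTEXT.md"
    old = ctx.read_text() if ctx.exists() else ""
    # Drop any previous stamp so the header always names the latest writer.
    kept = [ln for ln in old.splitlines() if not ln.startswith("Last updated by:")]
    kept.append(f"- {learning}")
    ctx.write_text(f"Last updated by: {agent}\n" + "\n".join(kept) + "\n")
```

Because everything is plain files, the handoff record survives any individual agent's session, which is the whole point of the shared workspace.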

They Coordinate With Each Other

I built an agent registry of skills and capabilities. When one agent needs help, there's a protocol: check who's available, provide context, hand off the task.
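The registry-plus-protocol can be sketched as a lookup over skills and availability. The registry contents here are invented for illustration:

```python
# A toy registry: who can do what, and who is currently busy (data is illustrative).
registry = {
    "research-agent": {"skills": {"competitive analysis", "market sizing"}, "busy": False},
    "content-agent":  {"skills": {"longform writing"}, "busy": True},
}

def hand_off(skill: str, context: str) -> str:
    """The protocol above: find an available agent with the skill, pass it the
    context, and mark it busy. Raises if nobody qualifies."""
    for name, entry in registry.items():
        if skill in entry["skills"] and not entry["busy"]:
            entry["busy"] = True
            return f"handed '{skill}' to {name} with context: {context}"
    raise LookupError(f"no available agent offers '{skill}'")
```

The failure case matters as much as the success case: if no one qualifies, the request should surface to the human rather than silently queue.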

Last week I watched my design agent request help from a research agent on a competitive analysis. The research agent delivered insights in 20 minutes; the design agent incorporated them and kept moving.

I didn't coordinate any of that.

My agents also built a web app that shows me an activity feed of everything going on, including which agents are active versus idle. That visibility gives me the confidence to let them do more work more autonomously.

The Key - Memory That Persists

My most annoying early lesson was agents forgetting things. I kept having to say, "I've already told you this, go back through our chat logs and find it," but that didn't fix the core problem.

The real fix was a more robust memory system.

Every agent maintains three types of memory:

Daily notes (raw logs)

Long-term memory (curated insights)

Project-specific context (shared across agents)

It's all backed up persistently. If I lose an agent and spin up a replacement, it has institutional memory from day one.
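The three tiers suggest a simple promotion flow: everything lands in the daily log, and curated insights are de-duplicated into long-term memory. The filenames here are assumed conventions, not the author's:

```python
from datetime import date
from pathlib import Path

def record(agent_dir: Path, insight: str, promote: bool = False) -> None:
    """Append an insight to today's raw log; optionally curate it into MEMORY.md.
    The layout (daily/YYYY-MM-DD.md, MEMORY.md) is an assumed convention."""
    daily_dir = agent_dir / "daily"
    daily_dir.mkdir(parents=True, exist_ok=True)
    with (daily_dir / f"{date.today().isoformat()}.md").open("a") as log:
        log.write(f"- {insight}\n")                # tier 1: raw, append-only
    if promote:
        longterm = agent_dir / "MEMORY.md"
        existing = longterm.read_text() if longterm.exists() else ""
        if insight not in existing:                # tier 2: curated, de-duplicated
            longterm.write_text(existing + f"- {insight}\n")
```

The third tier, project context, lives in the shared project folders described earlier, which is why a replacement agent can rebuild from files alone.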

Goal Alignment

Every agent has access to a master goals file. Not just tasks: my actual priorities, what I'm trying to build.

Every decision gets filtered through one question: "Does this advance the goals?"

I review monthly: assess progress, adjust priorities. Just like quarterly planning with a human team.

The Real Insight

AI agent management IS workforce management.

You need:

• Clear accountability structures

• Performance metrics

• Shared memory systems

• Coordination protocols

• Goal alignment

The same stuff good managers have always done.

The hybrid teams of the future won't be "humans over here, AI over there." They'll be integrated, and you'll manage both with the same rigor.

If you're building with AI, you're not just a developer anymore. You're a manager.

And most people are really bad at management.

Link: http://x.com/i/article/2019792778432036865


Related notes

My Complete Guide to Managing OpenClaw Agent Teams

  • Source: https://x.com/ksimback/status/2019804584273657884?s=46
  • Mirror: https://x.com/ksimback/status/2019804584273657884?s=46
  • Published: 2026-02-06T16:05:13+00:00
  • Saved: 2026-02-08


📋 Discussion Archive

Discussion in progress…