How To Scale Dynamic Context
- Source: https://x.com/seejayhess/status/2017283535338344909?s=46
- Published: 2026-01-30T17:07:28+00:00
- Saved: 2026-01-31
Content

86,000 GitHub stars in a week. Everyone losing their minds over a cartoon lobster. If you've been on X at all this past week, you've seen Clawdbot, now Moltbot, now OpenClaw — the open-source AI assistant that runs on your own hardware, connects to WhatsApp or Telegram, and actually does things without you asking. Morning briefs, inbox management, calendar stuff, flight check-ins. Proactive AI. The thing everyone's been waiting for.
But here's the thing: I've been watching people set this up all week, and there's a pattern that's going to bite everyone in a few weeks.
The people getting real value out of OpenClaw are spending hours writing what they call "onboarding prompts." These massive context documents explaining who they are, how they work, what their priorities are, what boundaries the bot should never cross. I watched one guy spend most of a Saturday crafting his, feeding in his communication preferences and work schedule and dietary restrictions and all this stuff. By Tuesday the bot had forgotten half of it and was suggesting lunch meetings during his focus hours.
And that's the problem. Onboarding prompts are a hack, and they're going to degrade.
Why Onboarding Docs Fall Apart
An onboarding prompt is basically static context you paste in and hope the agent remembers. But it lives in a conversation that will eventually get cleared or compressed, and there's no structure to it — no hierarchy, no way to say "this matters more than that." Everything has equal weight, which means nothing has weight.
Worse, it's monolithic. Your preferences, your boundaries, your domain-specific context, your communication style — all crammed into one blob of text that the agent has to hold in memory even when most of it isn't relevant to what it's currently doing. This is the context explosion problem that everyone building with AI agents runs into eventually. Token windows are limited, and even when they're big enough, more context often means worse performance because the model gets confused by irrelevant information and starts paying attention to the wrong things.
The answer isn't a better onboarding doc. It's a different architecture.
CLAUDE.md Files as Localized Skills
I've been building something for Claude Code over the past few months that I think solves this, and watching OpenClaw blow up made me realize the pattern probably applies to any agent that navigates a file system — which is basically what OpenClaw does when it starts poking around your machine.
Instead of one giant context document, I scatter smaller context files throughout the file system. Each one lives in a specific directory and tells the agent what it needs to know about that particular domain. The thing that makes this work: these files basically act like localized skills that get injected depending on where the agent is looking.
When an agent navigates into a directory and reads the context file there, it's getting a dynamic skill injection. Move into your recipes folder? Cooking context loads. Move into your finances? Different context, different constraints, different behavior. The agent adapts to where it is, not just what you told it at the start of the conversation.
This is better than an onboarding doc because it's not trying to be everything at once. The context follows the agent around. And you can tweak and tune specific domains without touching everything else.
The Router Pattern
Here's how the architecture actually works, and this is the part I think matters for anyone building with proactive AI.
You have one root context file that's thorough — maybe 500 lines, covering everything the agent needs to know about the system as a whole. Think of it as general orientation. But every other context file is short, maybe 50-100 lines, and its job is just to tell the agent what's in this directory, when to look here, and where to go next. These are routers.
Here's what a router actually looks like:
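The original post showed the file itself; based on the description, a router for a recipes directory might look something like this. The directory names, topics, and rules here are my own illustration, not the author's actual file:

```markdown
# Recipes

What's here: meal plans, saved recipes, and a grocery staples list.

When to look here: anything involving cooking, meal planning, or grocery runs.

Rules:
- Dietary restrictions live in restrictions.md. Always check it before suggesting meals.
- Never delete a saved recipe; move rejected ones to archive/ instead.

Where to go next:
- Weekly schedule and focus hours: ../calendar/
- Grocery budget constraints: ../finances/
```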
When the agent navigates into the recipes directory, it reads this and immediately knows what's here, when this directory is relevant, and what rules apply. It's a skill injection — the agent now has cooking-specific context it didn't have before, and it didn't have to load your entire life story to get it.
What This Actually Gets You
The obvious win is token efficiency. The agent loads context when it needs it based on where it's working, and can let go of it when it moves somewhere else. Navigate into your recipes folder? Cooking context. Navigate out? Released. You're not burning tokens on irrelevant information.
But the thing I didn't expect is how much easier maintenance becomes. When something changes about a domain, you update that directory's context file — you're not hunting through a 2000-line onboarding doc trying to find the right section. The context lives with the thing it describes, which means it actually stays up to date.
And behavior genuinely changes by location, which turns out to be kind of important. An agent working in your finances directory should be more careful than one helping with creative writing. With distributed context files, you can encode those constraints locally. The paranoid rules about not deleting anything in finances don't have to pollute your writing folder where you might actually want the agent to be more aggressive.
The part that surprised me most is how the system guides navigation. Each router doesn't just describe what's here — it tells the agent where to look next. "If you need X, check this directory. If you need Y, go there." The agent follows the trail instead of blindly searching. The file system becomes a navigable graph instead of a flat search space.
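One way to picture that trail-following: treat each router's "where to go next" pointers as edges in a graph and resolve a request by walking them. A minimal sketch, with the directory names and pointer format invented for illustration (in practice the pointers live inside the context files themselves):

```python
# Each router maps a topic to the directory that handles it.
# Inlined as dicts here purely for illustration.
ROUTERS = {
    ".": {"cooking": "recipes", "money": "finances", "schedule": "calendar"},
    "recipes": {"schedule": "calendar", "money": "finances"},
    "finances": {},
    "calendar": {},
}

def resolve(topic: str, start: str = ".", max_hops: int = 10) -> str:
    """Follow router pointers from `start` until a directory claims the topic."""
    current = start
    for _ in range(max_hops):
        pointers = ROUTERS.get(current, {})
        if topic not in pointers:
            return current          # no outgoing pointer: this directory owns it
        current = pointers[topic]   # hop to the directory the router names
    return current

# resolve("cooking") follows "." -> "recipes" and stops there.
```

The agent never searches the whole tree; it only reads the routers along one path, which is what keeps the context load proportional to the task rather than to the file system.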
Where This Falls Apart
I should be honest about where this pattern breaks down, because it does.
The setup cost is real. You're not going to have this working in an afternoon — it took me a few weeks of iteration before the router structure felt right, and I'm still tweaking it. If you just need something quick for a weekend project, a simple onboarding doc is probably fine. This is infrastructure for agents you're going to use long-term.
It also requires the agent to actually read the context files, which isn't automatic in every setup. Claude Code does this naturally with CLAUDE.md files, but if you're building something custom or using Moltbot, you might need to wire up the "read the context file when entering a directory" behavior yourself. The pattern is sound; the plumbing varies.
And there's a real risk of over-engineering. I've caught myself creating context files for directories that really didn't need them, and now I have files that just say "this folder has miscellaneous stuff" which is... not useful. The rule I've landed on: if you're scrolling through a context file to find the relevant part, it's too long and should be split. But if you're creating a context file and struggling to fill it with anything meaningful, you probably don't need one there.
How to Actually Start
You don't have to build the whole system upfront. Start with one root context file that describes your overall setup — who you are, what the major domains are, what global constraints apply everywhere. Use it for a week.
When you notice the agent getting confused in a specific area, that's when you create a context file there. "Oh, it keeps messing up my meal planning — let me add context for that directory." The system grows organically from actual friction, not from trying to anticipate everything in advance.
When a context file gets too long, split it. The parent becomes a router pointing to children that have the detailed context. The hierarchy deepens naturally as complexity requires it.
What This Means for Moltbot
The people writing detailed onboarding prompts for Moltbot have the right insight — proactive AI needs deep context to work well. But they're solving it with static documents that will degrade over time and don't scale.
The distributed context file pattern is basically the infrastructure version of that same idea. Context that lives where it's relevant, loads when it's needed, and guides the agent's navigation through your file system. Localized skills instead of a monolithic briefing that tries to be everything at once.
And here's where I think this goes: right now everyone's building these systems from scratch. Moltbot users writing onboarding prompts, Claude Code users writing CLAUDE.md files, companies building internal knowledge bases and trying to figure out how to expose them to agents. But the pattern is the same — you need structured, persistent context that shapes how an agent behaves, and it needs to be maintainable and efficient.
Someone's going to productize this. Not the agents themselves, but the context layer that makes agents actually useful. The meta-harness, if you want to call it that. Right now we're all doing it with markdown files and directory structures, which works but feels like we're hand-coding HTML in 1996. The infrastructure version of this is coming, and it's going to be what separates agents that feel like assistants from agents that feel like chatbots you have to babysit.
Moltbot is showing everyone what proactive AI looks like. The context architecture is what makes it actually work.
If you're building with any agent that touches files — and that's most of them now — this pattern is worth understanding. The onboarding doc gets you started. The distributed context system is what keeps it working.
Link: http://x.com/i/article/2016710470304976896