The Orchestrator Developer: Why I use multiple AI agents instead of one
Stop debating which AI tool is best. I use multiple agents — each for what they're good at. Here's how a two-agent setup changed how I think about development.
Everyone's debating which AI coding tool is best. Copilot or Cursor? Claude Code or Windsurf?
I stopped choosing. I use multiple AI agents — each for what they're actually good at.
And honestly? It changed how I think about development entirely.
The wrong question
"Which AI tool should I use?" feels like the obvious question. Forums are full of comparisons. Twitter threads break down features. Everyone wants the one tool to rule them all.
But here's what I've learned: the best AI tool depends on the task. And different tasks need different tools.
What if the answer isn't Copilot or Claude Code — but a system where multiple agents work together?
My two-agent setup
After months of experimentation, I've settled on a setup that sounds simple but works remarkably well:
Otto is my async coordinator — built on Clawdbot with Claude as its brain. He runs 24/7 on my Mac, connected to Telegram. I can message him anytime, anywhere.
Claude Code is my executor — running in my JetBrains IDE (PhpStorm), doing the actual coding work.
They serve completely different purposes.
Otto: the coordinator
Otto handles everything that doesn't need to happen right now:
- Research — "Otto, research how AI agents are changing developer workflows." He digs through sources, summarizes findings, and stores everything in my Obsidian vault.
- Memory — He remembers context across sessions. Yesterday's conversation? He knows. That project we discussed last week? Still there.
- Second brain management — He organizes my notes, captures quick ideas from Telegram, and maintains my content pipeline.
- Planning & specs — Before Claude Code touches any code, Otto often writes the specs. What needs to be built? What are the requirements? What's the context?
The key insight: Otto works asynchronously. I can ask him to research something and check back hours later. He's not blocking my flow — he's working in parallel.
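To make the async part concrete, here's a minimal sketch of what a coordinator like Otto could look like under the hood. This is not Otto's actual code — I'm assuming python-telegram-bot v20+, Anthropic's official Python SDK, API keys in the environment, and the fact that an Obsidian vault is just a folder of Markdown files. The vault path and the `research()` helper are illustrative names:

```python
# Minimal sketch of an async coordinator in the spirit of Otto.
# Assumptions: python-telegram-bot v20+, the official anthropic SDK,
# ANTHROPIC_API_KEY and BOT_TOKEN set in the environment, and an
# Obsidian vault that is just a folder of Markdown files.
import asyncio
import datetime
import os
import pathlib

import anthropic
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

VAULT = pathlib.Path.home() / "ObsidianVault" / "research"  # illustrative path
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

def research(topic: str) -> str:
    """Ask the model for a summary. Blocking call, so we run it off-loop."""
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=2000,
        messages=[{"role": "user", "content": f"Research and summarize: {topic}"}],
    )
    return reply.content[0].text

async def run_in_background(update: Update, topic: str) -> None:
    # The handler has already returned; the note lands in the vault later.
    text = await asyncio.to_thread(research, topic)
    note = VAULT / f"{datetime.date.today()}-{topic[:40].replace(' ', '-')}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    note.write_text(f"# {topic}\n\n{text}\n")
    await update.message.reply_text(f"Done. Saved to {note.name}")

async def handle(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Reply immediately, then keep working in the background.
    context.application.create_task(run_in_background(update, update.message.text))
    await update.message.reply_text("On it. Check back later.")

app = Application.builder().token(os.environ["BOT_TOKEN"]).build()
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle))
app.run_polling()
```

The point is the shape, not the details: the handler replies instantly, and the actual work finishes whenever it finishes. That's what "not blocking my flow" looks like in practice.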
Claude Code: the executor
When it's time to actually build, Claude Code takes over:
- Complex refactoring — Multi-file changes that would take me hours? Claude Code handles them in minutes.
- Code reviews — Point it at a PR, ask for a review. It catches things I'd miss.
- Tests — "Write tests for this feature" is now a 30-second task.
- Sub-agents — This is where it gets interesting. Claude Code can spawn sub-agents that work in parallel. One reviews, another writes tests, a third refactors — all at once.
The key insight: Claude Code is interactive. It needs my attention, my approval, my direction. It's powerful, but it's not async.
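For flavor, here's a rough sketch of that fan-out pattern. To be clear: this is not Claude Code's built-in sub-agent mechanism (that happens inside the tool itself); it just mimics the idea by launching three headless `claude -p` runs (the CLI's non-interactive print mode) in parallel. The prompts and file paths are made up:

```python
# Illustration only: fan out three independent headless Claude Code runs.
# Not the internal sub-agent feature; prompts and paths are hypothetical.
import asyncio

TASKS = {
    "review":   "Review src/Dashboard.php for bugs and style issues.",
    "tests":    "Write PHPUnit tests for src/Dashboard.php.",
    "refactor": "Suggest a refactoring plan for src/Dashboard.php.",
}

async def run(name: str, prompt: str) -> tuple[str, str]:
    # `claude -p` runs one prompt non-interactively and prints the result.
    proc = await asyncio.create_subprocess_exec(
        "claude", "-p", prompt,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    return name, out.decode()

async def main() -> None:
    # All three prompts run concurrently; print results as a batch.
    results = await asyncio.gather(*(run(n, p) for n, p in TASKS.items()))
    for name, output in results:
        print(f"=== {name} ===\n{output}\n")

asyncio.run(main())
```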
Why two agents beat one
The magic isn't in either tool alone. It's in how they complement each other.
Scenario: Building a new feature
- I message Otto: "I want to redesign the content dashboard with a modern look."
- Otto analyzes the current code, researches design patterns, and writes a spec.
- I review the spec, make adjustments.
- I open Claude Code in PhpStorm: "Read the spec Otto wrote. Build this."
- Claude Code implements. Sub-agents run tests.
- I review and ship.
My involvement? Direction and review. The heavy lifting? Distributed across agents.
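The glue in step 4 can be as thin as a few lines. Here's a hypothetical version, assuming Claude Code's non-interactive `-p` mode and a spec path convention that Otto and I agree on (the path and prompt are illustrative, not a fixed protocol):

```python
# Hypothetical handoff glue: Otto has already written the spec into the
# repo; this just points a headless Claude Code run at it.
import pathlib
import subprocess

spec = pathlib.Path("docs/specs/dashboard-redesign.md")  # agreed-on convention
assert spec.exists(), "Ask Otto to write the spec first"

subprocess.run(["claude", "-p", f"Read {spec} and implement the feature it describes."])
```

The design choice that matters here is the file-based handoff: Otto writes Markdown into the repo, Claude Code reads Markdown from the repo, and I get a human-readable artifact to review in between.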
Scenario: Content creation
- Quick idea on my phone → Telegram to Otto: "💡 Article idea about AI orchestration"
- Otto captures it in Obsidian.
- Later: "Otto, research this topic and outline potential angles."
- Otto researches, proposes options.
- I pick one. Otto writes a first draft.
- I edit and publish.
This article you're reading? It followed exactly this flow.
The orchestrator mindset
Using multiple AI agents requires a shift in thinking. You're no longer a developer who uses an AI tool. You're an orchestrator managing a small team of AI agents.
This means:
Knowing which tool for which task. Research and planning? Otto. Building and testing? Claude Code. Quick capture? Otto via Telegram. Deep refactoring? Claude Code with full context.
Designing handoffs. Otto's research becomes Claude Code's context. The output of one feeds the input of another.
Staying in control. AI agents are powerful, but they're not infallible. You review. You verify. You're still the pilot — you just have very capable co-pilots now.
What I've learned
More tools ≠ more productivity. Two well-integrated agents beat five poorly coordinated ones. Don't chase every new tool.
Context is everything. A great prompt with good context beats a mediocre prompt to a "better" model. Invest in giving your agents the right information.
Async is underrated. The ability to offload research and planning to an agent that works while I do other things? Game changer.
The human orchestrator is irreplaceable. AI agents are tools. Powerful tools, but tools nonetheless. The creative direction, the judgment calls, the final decisions — that's still you.
Looking ahead
Right now, I'm the bridge between Otto and Claude Code. I relay context, I make decisions, I connect the dots.
But I can see a future where agents coordinate more directly. Where Otto doesn't just write specs, but hands them directly to Claude Code. Where the orchestration layer gets thinner.
We're not there yet. For now, the orchestrator role is essential.
And honestly? I kind of like it. There's something satisfying about conducting a small orchestra of AI agents, each playing their part in building something real.
Try it yourself
You don't need my exact setup. The principle is simple:
- Pick a coordinator — something that runs async, has memory, and can be reached anytime. (Clawdbot, a custom GPT, even a well-configured ChatGPT conversation.)
- Pick an executor — something that's great at actual coding. (Claude Code, Cursor, Copilot with agent mode.)
- Design your handoffs — how does research become specs? How do specs become code?
- Stay in the loop — review, verify, direct.
The future of AI-assisted development isn't finding the perfect tool. It's becoming a better orchestrator.