I’ve been using AI agents to write code for a couple of years now. First as a kind of turbo-autocomplete, then as an assistant that could take a task and solve it, and finally as something that started to resemble a colleague. But there was always a problem: they couldn’t talk to each other.
That sounds like a small thing. It isn’t.
Silos are expensive — for machines too
Subagents in Claude Code were already powerful. You could spawn one agent to write the backend, another for the frontend, a third for the tests. But they worked in silos. Each agent received its task, delivered its result back to the main session, and knew nothing about what the others were doing. If the backend agent made a decision about the data format, the frontend agent only found out when the code failed.
That reminds me of something. It reminds me of real teams sitting in separate corners throwing things over the fence. We know that doesn’t work for humans. Why would it work for machines?
Agent Teams in Claude Code solves exactly that problem. And the difference isn’t quantitative — it’s not just faster or more efficient. It’s qualitative. It’s a different way of working.
What Agent Teams actually is
Agent Teams is Anthropic’s answer to multi-agent orchestration in a developer context. One Claude Code session acts as team lead. It spawns teammates — each with their own context window, their own file system access, their own tools. They communicate directly with each other via a messaging system, and they share a common task list with dependencies.
The important thing here isn’t that they can work in parallel. Subagents could do that too. The important thing is that they can coordinate. A backend agent can send a message to the frontend agent: “I changed the response format, here’s the new structure.” The frontend agent adapts without the main session needing to play middleman. It’s not just delegation — it’s collaboration.
And that’s precisely why it works better than subagents for anything involving more than one context.
Coordination problems require communication
Most interesting software problems aren’t decomposable into isolated parts. Implementing a feature requires frontend, backend and tests to stay in sync. A database schema change affects API contracts that affect UI components that affect the test suite. Everything is connected.
Subagents treated it as though those connections didn't exist. They solved their part in a vacuum and hoped the pieces would fit together. Sometimes they did. Often they didn't.
Agent Teams treats it as what it is: a coordination problem. And coordination requires communication between workers — not just reporting to a leader.
I’ve used it for a range of different things now, and the qualitative difference is noticeable. When agents can talk to each other across contexts — challenge each other’s assumptions, share discoveries mid-process, self-coordinate via a shared task list — something different happens. They find bugs faster because they don’t wait to integrate. They make fewer assumptions because they can ask each other. They produce code that holds together because they know what the others are doing.
The actual architecture
Let me be a bit more concrete. When you start an agent team, here’s what happens:
The team leader creates a shared task list. Each task has a title, a description, a status, and optional dependencies on other tasks. The leader spawns teammates — typically 3-5 for most workflows — and assigns them work via the task list.
Teammates work independently in their own context window. They can read files, write code, run commands. But they can also send messages directly to other teammates. And they can look at the task list to see what’s remaining, what’s blocked, and what they can pick up next.
When a teammate finishes its task, it marks it as completed and looks for the next available task. If it’s blocked, it sends a message to the teammate that’s blocking it. Everything runs asynchronously, and the leader can check status at any time, send messages to individual teammates, or reassign tasks.
It’s fundamentally an asynchronous, message-based architecture with a central task queue. If that sounds familiar, it’s because it’s the same pattern that drives the best software teams. Not because it’s copied from human organization — but because it’s the most effective way to coordinate parallel work.
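The pattern itself is easy to sketch. The following is a generic, deliberately simplified model of a dependency-aware shared task list with per-teammate message queues. It illustrates the coordination pattern described above; the task names and teammate names are invented, and this is not Claude Code's actual API or internals:

```python
# Generic sketch of the pattern: a shared task list with dependencies,
# plus direct teammate-to-teammate messages. Illustrative only.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Task:
    title: str
    status: str = "pending"              # pending -> completed
    deps: list[str] = field(default_factory=list)

# Shared task list; "backend" is blocked until "schema" completes, etc.
tasks = {
    "schema":   Task("Design DB schema"),
    "backend":  Task("Implement API", deps=["schema"]),
    "frontend": Task("Build UI", deps=["backend"]),
}
inbox: dict[str, list[str]] = defaultdict(list)   # per-teammate messages

def ready(name: str) -> bool:
    """A task is ready when it is pending and all its dependencies are done."""
    t = tasks[name]
    return t.status == "pending" and all(tasks[d].status == "completed" for d in t.deps)

def work(teammate: str) -> None:
    # Pick up the next unblocked task; message a peer when something ships.
    for name in tasks:
        if ready(name):
            tasks[name].status = "completed"
            inbox["frontend"].append(f"{teammate}: finished {name}")
            return

# Leader loop: teammates self-coordinate off the list until it is empty.
while any(t.status != "completed" for t in tasks.values()):
    for teammate in ["alice", "bob"]:
        work(teammate)

print(inbox["frontend"])
```

The essential property is that coordination state lives in two shared structures (the task list and the inboxes), not in a single orchestrator's head.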
When it makes sense — and when it doesn’t
Agent Teams isn’t always the right tool. If you need to fix a bug in one file, it’s overkill. If you need to implement an isolated function, a subagent is simpler and cheaper. Agent Teams uses significantly more tokens: each teammate has its own context window, and the messaging itself costs tokens.
But there are scenarios where it’s not just better — it’s in a completely different league.
Cross-layer features. When a feature spans frontend, backend, database and tests, and the four layers need to stay in sync. Give each teammate a layer and let them coordinate directly.
Debugging with competing hypotheses. Spawn four agents that each investigate a different theory about what’s wrong. Let them challenge each other’s findings. The one that survives peer review is probably closer to the truth.
Code review with multiple perspectives. One agent focuses on security, another on performance, a third on test coverage. They find things that a single pass misses because they’re looking through different lenses.
Large migrations. I’ve previously used Claude Code to migrate 11 servers from Digital Ocean to Hetzner over a weekend. With agent teams, that kind of task would be even more natural — one agent per service, with communication about shared dependencies like DNS, networking and databases.
It’s not about throwing more agents at a problem. It’s about giving them the ability to collaborate on it.
What it means for businesses
Here’s the interesting part for those paying for it: Agent Teams isn’t just a developer tool. It’s an organizational model.
Most companies don’t struggle with AI being able to write code. It can. They struggle with AI-generated code rarely holding together in a larger context. Frontend and backend don’t match. Data models are inconsistent. Tests don’t cover the right things. It’s the integration problems, not the code writing itself, that eat the time.
Agent Teams addresses exactly that. Not by being better at writing code, but by being better at coordinating code. And that’s the part most organizations are missing.
Nine out of ten IT leaders in Denmark say AI agents risk creating more confusion than value. That’s not because the agents are bad. It’s because they run in silos. Each agent solves its task in a vacuum, and nobody ensures the pieces fit together. Agent Teams is a concrete answer to how you solve that — at least in a software development context.
And the context matters. Agent Teams isn’t a general solution to all of an organization’s AI problems. It’s a specific tool for a specific problem: coordinated software development with multiple AI agents. But that problem is real, and the solution works.
The token cost and when it pays off
Let’s be honest about the costs. An agent team with three teammates uses roughly 3-4 times as many tokens as a single session solving the same task sequentially. That’s not free.
But the calculation isn’t tokens per task. The calculation is tokens per well-integrated solution. If three subagents produce code that requires two hours to integrate manually, and an agent team produces code that works from the start, it’s not hard to see which is cheaper.
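To make that trade-off concrete, here is a back-of-envelope comparison. Every number below (token price, hourly rate, token counts, integration hours) is an assumption chosen for illustration, not a measurement:

```python
# Back-of-envelope cost comparison with illustrative, assumed numbers.
TOKEN_PRICE = 15 / 1_000_000   # assumed blended $ per token
DEV_RATE = 100.0               # assumed $ per hour of manual integration

# Subagents: fewer tokens, but code that needs manual gluing afterwards.
solo_tokens, solo_integration_hours = 400_000, 2.0
# Agent team: roughly 3.5x the tokens, but the pieces fit on arrival.
team_tokens, team_integration_hours = 1_400_000, 0.0

solo_cost = solo_tokens * TOKEN_PRICE + solo_integration_hours * DEV_RATE
team_cost = team_tokens * TOKEN_PRICE + team_integration_hours * DEV_RATE
print(f"solo: ${solo_cost:.2f}  team: ${team_cost:.2f}")
```

With these assumptions the token premium is dwarfed by the integration hours it saves; the point of the sketch is the shape of the calculation, not the specific figures.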
It’s the same argument as for pair programming. Yes, it costs two developers instead of one. But it produces better code, fewer bugs, and saves time downstream. Agent Teams is pair programming for AI — just with more parties and lower hourly rates.
MCP as the foundation
One thing that makes Agent Teams particularly interesting is the combination with MCP — Model Context Protocol. MCP has become the de facto standard for how AI interacts with external systems. There are over 1,000 community-built MCP servers, and even OpenAI has adopted the protocol.
At Brokk & Sindre, I’ve built MCP servers for Danish public data — Statistics Denmark, the Central Business Register — and an agent-first API for one of Denmark’s largest property databases. When you combine Agent Teams with MCP, you get agents that can not only coordinate with each other, but also pull from external data sources along the way. One agent can query a database while another writes code that uses the results. That’s integration in real time, not in sequence.
What’s missing
Agent Teams is still experimental. You have to actively enable it in your settings. And there are limitations.
Coordination overhead grows with the number of agents. More than five teammates, and you spend a disproportionate amount of tokens on communication instead of work. It’s the same phenomenon as in human teams — Brooks’ Law apparently applies to machines as well.
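One way to see why the overhead grows: the number of pairwise communication channels in a team follows the classic n(n-1)/2 formula, which grows quadratically with team size:

```python
# Pairwise communication channels in a team of n members: n * (n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in [2, 3, 5, 8]:
    print(f"{n} teammates -> {channels(n)} channels")
```

Going from three teammates to eight does not add five channels; it adds twenty-five, which is why the messaging cost climbs faster than the headcount.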
Conflict resolution is primitive. If two agents edit the same file, the leader doesn’t always catch it in time. Worktrees help — each agent can work in its own isolated copy of the codebase — but it requires thinking about it ahead of time.
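The worktree isolation mentioned above can be set up with standard git commands; here is a self-contained sketch (driven from Python so it is runnable end to end) where the repository, branch names, and agent names are all invented for illustration:

```python
# Sketch: one isolated git worktree per agent, so parallel edits can't
# collide in the same working directory. Requires git on PATH.
import os
import subprocess
import tempfile

def run(*args: str) -> None:
    subprocess.run(args, check=True, capture_output=True)

root = tempfile.mkdtemp()
repo = os.path.join(root, "demo")
run("git", "init", "-q", repo)
run("git", "-C", repo, "config", "user.email", "demo@example.com")
run("git", "-C", repo, "config", "user.name", "demo")
run("git", "-C", repo, "commit", "-q", "--allow-empty", "-m", "init")

# Each agent gets its own branch and its own working copy of the codebase.
for agent in ["backend", "frontend", "tests"]:
    run("git", "-C", repo, "worktree", "add", "-q",
        "-b", f"agent-{agent}", os.path.join(root, f"wt-{agent}"))

print(sorted(os.listdir(root)))
```

All worktrees share one object store and history, so merging the agents' branches back together afterwards is ordinary git work rather than manual file reconciliation.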
And debugging is hard. When something goes wrong in an agent team, it’s not always clear which agent made the wrong decision, and when that decision propagated to the others. It’s distributed systems debugging, and that’s hard regardless of whether the agents are humans or machines.
But these are all solvable problems. And what they can already do today — coordinated, parallel software development with direct inter-agent communication — is enough to change how I work.
The right way to think about it
Agent Teams isn’t a replacement for subagents. It’s a new tool in the toolbox. Subagents are still better for quick, focused tasks. Agent Teams is better for anything that requires multiple contexts to stay in sync.
The right analogy isn’t “more AI.” It’s better organized AI. It’s the difference between hiring five freelancers who never talk to each other, and putting together a team that communicates and coordinates. The code they produce is the same kind. But the way it holds together is fundamentally different.
And that’s what counts in production. Not whether each individual file is elegant, but whether the whole thing works.
I think we’re at the beginning of something. Not in the hyped “AI changes everything” sense. But in the practical sense that multi-agent coordination is the next layer of productivity, just as AI-assisted coding was the previous one. It’s not magic. It’s just better tooling for a real problem.
And that problem — coordination — is and remains the hardest thing in software development. Regardless of whether it’s humans or machines writing the code.