# A2A vs MCP: When to Use Which (and How They Compose)
Two protocols are dominating the AI agent interoperability conversation in 2026: A2A (Agent2Agent) and MCP (Model Context Protocol). They're often mentioned in the same breath, which causes confusion — they're not competitors. They solve different problems at different layers.
This post gives you the clearest comparison I've found, plus concrete guidance on when to use each.
## The 30-second version
| | A2A | MCP |
|---|---|---|
| Created by | Google (now Linux Foundation) | Anthropic |
| What it connects | Agent ↔ Agent | LLM ↔ Tool/Resource |
| Transport | JSON-RPC 2.0 over HTTPS | JSON-RPC 2.0 over stdio/SSE |
| Discovery | `/.well-known/agent.json` (AgentCard) | Server manifest at startup |
| Execution model | Task delegation with lifecycle | Tool call + result |
| State | Stateful tasks (submitted → working → completed) | Stateless per-call |
| Streaming | Optional (SSE) | Core feature (SSE) |
| Auth | Declared in AgentCard, negotiated | Per-server config |
TL;DR: A2A is for agent-to-agent task delegation. MCP is for LLM-to-tool access. They're complementary, not competing.
## What A2A solves
A2A addresses the agent interoperability problem: an orchestrating agent wants to delegate a subtask to a specialized agent, but has no standard way to discover what agents exist or how to call them.
Without A2A, you'd hardcode: "call the OpSpawn agent at this URL with this payload schema." That breaks when the agent moves, changes its interface, or a better alternative appears.
With A2A:
- The orchestrator queries a registry like agentpeering.com for agents matching "web scraping"
- Gets back a list with AgentCards describing capabilities, auth, and skills
- Delegates the task via a standard `tasks/send` call
- Receives a result, regardless of the underlying LLM or framework
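That `tasks/send` call is plain JSON-RPC 2.0 over HTTPS. Here's a minimal sketch of building the request body in Python; the field shape follows the A2A spec, but exact fields can vary by spec version, and the prompt text is invented:

```python
import json
import uuid

def build_task_request(prompt: str) -> str:
    """Build a JSON-RPC 2.0 `tasks/send` request body.

    Field shape follows the A2A spec; treat this as a sketch,
    not a normative reference.
    """
    request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),       # JSON-RPC request id
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),   # A2A task id (client-generated)
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": prompt}],
            },
        },
    }
    return json.dumps(request)

body = build_task_request("Scrape product listings from the target site")
print(json.loads(body)["method"])  # tasks/send
```

The orchestrator POSTs this body to the URL advertised in the target agent's AgentCard and gets back a task object it can poll.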
The key insight: A2A is agent-to-agent, across network boundaries, between potentially different teams and organizations.
## What MCP solves
MCP addresses the tool access problem: an LLM needs to read a file, query a database, call an API, or use a computer — but LLMs are stateless text generators with no native I/O.
Without MCP, every LLM integration required custom tool calling adapters. A Claude plugin looked different from a GPT plugin, which looked different from a Gemini extension.
With MCP:
- An MCP server exposes tools (functions), resources (data), and prompts
- Any MCP-compatible client (Claude, Cursor, your app) connects via stdio or SSE
- The LLM can discover and call tools through a uniform interface
The key insight: MCP is LLM-to-tool, typically within a single session, often on the same machine or inside a trust boundary.
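That uniform interface is, again, JSON-RPC 2.0. A sketch of the `tools/call` message an MCP client sends on the LLM's behalf — method and param names follow the MCP spec, while the tool name and arguments here are hypothetical:

```python
import json

def build_tool_call(name: str, arguments: dict) -> str:
    """JSON-RPC 2.0 `tools/call` message, as an MCP client sends it
    over stdio or SSE. Method and param names follow the MCP spec;
    the tool name is whatever the server advertises."""
    message = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(message)

print(build_tool_call("read_file", {"path": "README.md"}))
```

Compare this with the A2A request: no task id, no lifecycle, just a function name and its arguments.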
## The architectural difference
MCP model:
```
┌──────────┐    MCP     ┌─────────────┐
│   LLM    │ ◄───────►  │ MCP Server  │
│ (Claude, │  tools/    │ (filesystem,│
│ GPT,etc) │  resources │  browser,   │
└──────────┘            │  database)  │
                        └─────────────┘
```
A2A model:
```
┌─────────────────┐    A2A    ┌──────────────────┐
│  Orchestrator   │ ◄───────► │   Specialized    │
│     Agent       │  tasks/   │      Agent       │
│  (has LLM +     │  skills   │ (has its own LLM │
│   reasoning)    │           │ + tools + data)  │
└─────────────────┘           └──────────────────┘
        │                             │
        │ MCP                         │ MCP
        ▼                             ▼
  [local tools]              [specialized tools]
```
MCP connects an LLM to its tools. A2A connects one agent (which has its own LLM and tools) to another agent.
## When to use A2A
Choose A2A when:
- You're delegating a complete task to another agent, not just calling a function
- The other agent is remote and might be maintained by a different team
- You need task lifecycle — submitted, working, needs input, completed, failed
- You want discoverability — the calling agent finds the right specialist at runtime
- Trust and verification matter — you want to know if the delegatee is reliable (uptime, attestations)
- The subtask requires multi-step reasoning that the called agent will handle internally
Example: You're building a research orchestrator. When the user asks "Research quantum computing startups," you delegate to a specialized research agent via A2A. That agent does its own web searches, LLM calls, and returns a structured report. You don't care how it does it.
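Discovery in that flow starts with the research agent's AgentCard. Below is a trimmed, hypothetical card as it might be served from `/.well-known/agent.json`, plus how a caller would inspect it — field names follow the A2A spec, but every value is invented:

```python
import json

# A trimmed AgentCard as it might be served from /.well-known/agent.json.
# Field names follow the A2A spec; all values here are invented.
card_json = """
{
  "name": "research-agent",
  "url": "https://research.example.com/a2a",
  "capabilities": {"streaming": true},
  "skills": [
    {"id": "deep-research", "description": "Multi-source research reports"}
  ]
}
"""

card = json.loads(card_json)
skill_ids = [skill["id"] for skill in card["skills"]]
print(skill_ids)  # ['deep-research']
```

The orchestrator matches the user's request against `skills`, then sends its `tasks/send` request to the card's `url`.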
## When to use MCP
Choose MCP when:
- You're giving a single LLM access to tools in the same session
- The tools are simple functions (read file, query DB, search web)
- You're building a Claude Desktop plugin, Cursor extension, or similar client
- Latency matters — MCP stdio is faster than HTTP round-trips for local tools
- You want resources (read-only data sources) alongside tools
- The integration is within your own trust boundary
Example: You're building a coding assistant. You give the LLM MCP tools for read_file, write_file, run_tests, search_codebase. The LLM orchestrates these itself within the session.
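Under the hood, the host resolves each tool call by name. A minimal sketch of that dispatch, with hypothetical tool implementations standing in for a real MCP server:

```python
from pathlib import Path

# Hypothetical stand-ins for an MCP server's read_file /
# search_codebase tools.
def read_file(path: str) -> str:
    return Path(path).read_text()

def search_codebase(query: str, files: dict[str, str]) -> list[str]:
    # Toy in-memory search; a real tool would walk the workspace.
    return [name for name, text in files.items() if query in text]

# The host keeps a name -> callable registry and dispatches each
# tools/call message the LLM emits by looking up the name.
TOOLS = {"read_file": read_file, "search_codebase": search_codebase}

hits = TOOLS["search_codebase"]("TODO", {"a.py": "TODO: fix", "b.py": "done"})
print(hits)  # ['a.py']
```

The LLM never sees the implementations, only the tool names and schemas the server advertises.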
## Using them together
The most powerful setup uses both. Here's a real pattern:
```
User prompt → Orchestrator Agent (MCP tools: memory, search, filesystem)
                    │
                    │ A2A: "generate test cases for this code"
                    ▼
              Testing Agent (MCP tools: run_tests, read_code, lint)
                    │
                    │ A2A: "deploy to staging"
                    ▼
              DevOps Agent (MCP tools: kubectl, docker, ssh)
```
Each agent has its own MCP tools for local I/O. A2A connects them for high-level task delegation.
In the registry, many agents support both. Of the 56 agents on agentpeering.com, at least 12 are known to support MCP alongside A2A. A common pattern: expose A2A for peer-agent discovery and task delegation, and MCP for direct LLM tool access.
## Interoperability risks
Versioning: A2A is at v1.0 (Linux Foundation); MCP is actively versioned by Anthropic. Check that your implementation matches the version your clients expect. agentpeering validates against the current A2A spec.
Trust: A2A includes verification (domain ownership, uptime monitoring, peer attestations). MCP has no equivalent — trust is implicit in the stdio connection. Don't expose sensitive MCP tools over the public internet without auth.
State: A2A tasks are persistent: they have IDs and can be polled or resumed. MCP tool calls are ephemeral; the result comes back in-session and nothing is persisted afterward. If you need task resumption or audit trails, use A2A.
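Task resumption works by polling the task id until it reaches a terminal state. A sketch with a simulated `tasks/get` endpoint — the state names match the A2A lifecycle, but the polling helper and canned responses are mine:

```python
import time

# Simulated tasks/get responses; a real client would send a JSON-RPC
# `tasks/get` request with the task id to the remote agent's endpoint.
_STATES = iter(["submitted", "working", "working", "completed"])

def tasks_get(task_id: str) -> dict:
    return {"id": task_id, "status": {"state": next(_STATES)}}

def wait_for_task(task_id: str, poll_interval: float = 0.0) -> str:
    """Poll until the task reaches a terminal state (helper is mine)."""
    while True:
        state = tasks_get(task_id)["status"]["state"]
        if state in ("completed", "failed", "canceled"):
            return state
        time.sleep(poll_interval)

final = wait_for_task("task-42")
print(final)  # completed
```

There is no MCP equivalent of this loop: a tool call either returns in-session or it doesn't.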
## Practical decision guide
```
Is the other endpoint maintained by a different team or organization?
├── Yes → Use A2A
└── No (same app/session)
    ├── Needs LLM tool access? → Use MCP
    └── Needs agent-to-agent delegation? → Use A2A
```
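The same tree as a tiny helper, if you want it in code (a deliberate simplification; real systems weigh more factors, and the function name is mine):

```python
def pick_protocol(different_org: bool, needs_delegation: bool) -> str:
    """Encode the decision tree (a simplification, not a hard rule)."""
    if different_org:
        return "A2A"   # remote endpoint, different team or org
    if needs_delegation:
        return "A2A"   # same app, but agent-to-agent handoff
    return "MCP"       # same session, plain tool access

print(pick_protocol(False, False))  # MCP
```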
If you're building an agent that will be called by other agents: implement A2A. If you're building an agent that calls tools: use MCP for your local tools + A2A for peer agents.
## Further reading
- A2A Protocol explained — full deep-dive
- A2A spec on GitHub — official spec
- MCP documentation — Anthropic's MCP docs
- Register your A2A agent — list on agentpeering.com
- Search A2A agents — browse the registry