
A2A: When AI Agents Talk to Each Other

Google's Agent-to-Agent protocol opens a new frontier: AI agents that can discover, communicate with, and delegate to other agents

Google just introduced the Agent-to-Agent (A2A) protocol, and it addresses a problem I have been thinking about since I started building multi-agent systems: how do agents from different vendors, different frameworks, and different organizations communicate with each other?

This is not a theoretical question. It is the next infrastructure problem that needs solving.

The Multi-Agent Reality

If you are building serious AI agent systems, you have already discovered that a single agent is not enough. Complex tasks require multiple specialized agents, each with different capabilities, different access permissions, and potentially different underlying models.

In my own work with Loki Mode, I orchestrate dozens of agent types across eight swarms. A planning agent breaks down the task. Implementation agents write code. Review agents critique the work. Testing agents validate the results. Each agent is specialized, and the system's power comes from their coordination.

But here is the limitation: all of these agents exist within my system. They use my orchestration layer, my communication protocol, my tool infrastructure. What happens when my agent needs to collaborate with an agent from a different system? What happens when a development agent needs to coordinate with a deployment agent built by a different team using a different framework?

That is the problem A2A addresses.

What A2A Defines

The Agent-to-Agent protocol defines a standard for agents to:

Discover each other. An agent can publish an "Agent Card," a structured description of its capabilities, inputs, outputs, and communication preferences. Other agents can discover these cards and understand what an agent can do before attempting to interact with it.

Communicate. A2A defines a message format for inter-agent communication. Agents can send tasks, receive results, exchange context, and negotiate about how to collaborate. The protocol supports both synchronous request/response patterns and asynchronous task delegation.
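As a rough illustration of what a delegated task message might look like, here is a minimal sketch assuming a JSON-RPC-style envelope with a `tasks/send` method. The field names and structure are illustrative, not the exact wire format from the spec:

```python
import json

def make_task_request(task_id, text, request_id=1):
    """Build a hypothetical A2A task message.

    The envelope shape (jsonrpc / method / params) and the
    "tasks/send" method name are assumptions for illustration;
    consult the A2A specification for the actual schema.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tasks/send",
        "params": {
            "id": task_id,
            "message": {
                "role": "user",
                # A message is a list of typed parts, so agents can
                # mix text with structured data in one exchange.
                "parts": [{"type": "text", "text": text}],
            },
        },
    }

request = make_task_request("task-001", "Review the attached pull request")
print(json.dumps(request, indent=2))
```

The same envelope can carry results back, which is what makes both the synchronous and asynchronous patterns possible: a response either contains the completed result or a task identifier to check on later.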

Delegate tasks. An agent can delegate a subtask to another agent, providing context and requirements, and receive the results when the task is complete. This enables hierarchical agent systems where a coordinator agent distributes work across specialized agents.

Handle long-running tasks. Not every agent interaction is instant. Some tasks take minutes or hours. A2A includes mechanisms for tracking task progress, handling timeouts, and managing the lifecycle of long-running operations.
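Delegation plus a long-running lifecycle can be sketched as a poll loop against a remote agent. Everything below is a toy: the `FakeRemoteAgent` stands in for a real A2A endpoint, and the `submitted` → `working` → `completed` states are an assumed lifecycle, not the spec's exact state machine:

```python
import time

class FakeRemoteAgent:
    """Toy stand-in for a remote A2A agent with a simple
    task lifecycle: submitted -> working -> completed."""

    def __init__(self):
        self._tasks = {}

    def send_task(self, task_id, message):
        self._tasks[task_id] = {"state": "submitted", "polls": 0, "result": None}
        return {"id": task_id, "state": "submitted"}

    def get_task(self, task_id):
        task = self._tasks[task_id]
        task["polls"] += 1
        # Advance the fake lifecycle one step per poll.
        if task["polls"] == 1:
            task["state"] = "working"
        else:
            task["state"] = "completed"
            task["result"] = "deployment verified"
        return {"id": task_id, "state": task["state"], "result": task["result"]}

def delegate_and_wait(agent, task_id, message, poll_interval=0.01, timeout=1.0):
    """Delegate a task, then poll until it completes or the timeout expires."""
    agent.send_task(task_id, message)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = agent.get_task(task_id)
        if status["state"] == "completed":
            return status["result"]
        time.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} did not complete in {timeout}s")

print(delegate_and_wait(FakeRemoteAgent(), "task-42", "verify deployment"))
```

A real implementation would also handle failure states and push notifications rather than pure polling, but the shape is the same: delegate, track, collect.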

A2A vs. MCP: Complementary, Not Competing

I want to address the obvious question: how does A2A relate to MCP?

They are complementary protocols that solve different problems.

MCP connects agents to tools. It defines how an agent calls a function, reads data, or interacts with an external system. MCP servers are tools; they do not have agency. They respond to requests but do not initiate actions.

A2A connects agents to agents. It defines how two autonomous entities communicate, negotiate, and collaborate. A2A participants have agency; they can accept or reject tasks, negotiate requirements, and make autonomous decisions about how to fulfill a request.

Think of it this way: MCP is how an agent uses a screwdriver. A2A is how an agent asks another agent to build a bookshelf. One is tool use; the other is collaboration.

In a complete agent system, you need both. Agents use MCP to interact with tools and A2A to interact with each other. The two protocols work at different layers of the stack and serve different purposes.

Why This Matters for Enterprise

In large organizations, AI agent systems will not be built by a single team. Different teams will build agents for different domains: one team builds deployment agents, another builds monitoring agents, another builds security agents. These agents need to work together.

Without a standard protocol, you end up with point-to-point integrations between every pair of agent systems. If you have N agent systems, you need on the order of N-squared integrations. This does not scale.
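The arithmetic makes the case quickly: every pair of systems needs its own integration, N(N-1)/2 in total, versus one protocol adapter per system:

```python
def pairwise_integrations(n):
    # Each of the n systems needs an integration with each of the
    # other n-1, and each pair is counted once: n * (n - 1) / 2.
    return n * (n - 1) // 2

for n in (5, 10, 50):
    print(f"{n} systems: {pairwise_integrations(n)} point-to-point "
          f"integrations vs {n} A2A adapters")
```

At five systems the difference is an annoyance; at fifty it is the difference between 1,225 bespoke integrations and 50 adapters.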

A2A provides the standard interface. Your deployment agent speaks A2A. Your monitoring agent speaks A2A. They can collaborate without knowing anything about each other's implementation. The deployment agent can ask the monitoring agent to verify that a deployment succeeded. The monitoring agent can ask the deployment agent to roll back if anomalies are detected.

This is the same benefit that HTTP brought to web services. A standard communication protocol enables an ecosystem of interoperable services.

The Agent Card Pattern

The discovery mechanism in A2A is particularly interesting. An Agent Card includes:

  • Capabilities: What the agent can do (code review, deployment, data analysis)
  • Input/output schemas: What data the agent needs and what it produces
  • Authentication requirements: How to authenticate with the agent
  • Rate limits and constraints: Operational boundaries
  • Communication preferences: Supported protocols and message formats

This is essentially an API specification for an agent. But unlike a traditional API spec, it describes an autonomous entity with judgment, not a deterministic function. The agent might handle the same request differently based on context, workload, or its own assessment of the best approach.
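To make the pattern concrete, here is a hypothetical Agent Card for a code-review agent, written as a plain Python dict. The field names (`capabilities`, `skills`, `inputModes`, and so on) are illustrative guesses at the shape, not the exact schema from the spec:

```python
# A hypothetical Agent Card. Field names are illustrative;
# consult the A2A specification for the authoritative schema.
agent_card = {
    "name": "code-review-agent",
    "description": "Reviews pull requests for correctness and style",
    "url": "https://agents.example.com/code-review",
    # Operational capabilities of the endpoint itself.
    "capabilities": {"streaming": True, "pushNotifications": False},
    # How callers must authenticate before delegating work.
    "authentication": {"schemes": ["bearer"]},
    # What the agent can actually do, with expected I/O modes.
    "skills": [
        {
            "id": "review-pr",
            "name": "Pull request review",
            "inputModes": ["text"],
            "outputModes": ["text"],
        }
    ],
}

print(agent_card["name"], "advertises", len(agent_card["skills"]), "skill(s)")
```

Note what the card does not contain: nothing about the agent's model, prompts, or internal orchestration. A caller sees capabilities and contracts, not implementation.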

For agent orchestration systems like Loki Mode, Agent Cards enable dynamic team composition. Instead of hardcoding which agents handle which tasks, the orchestrator can discover available agents, evaluate their capabilities, and assemble the right team for each task dynamically.
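Dynamic team composition then reduces to filtering discovered cards by advertised skill. A minimal sketch, using the same illustrative card shape as above (a real orchestrator would also weigh constraints, rate limits, and trust):

```python
def select_agents(cards, required_skill):
    """Return the names of agents whose cards advertise the given skill id."""
    return [
        card["name"]
        for card in cards
        if any(skill["id"] == required_skill for skill in card.get("skills", []))
    ]

# Illustrative discovered cards, trimmed to the fields we need here.
cards = [
    {"name": "reviewer", "skills": [{"id": "code-review"}]},
    {"name": "deployer", "skills": [{"id": "deploy"}]},
    {"name": "generalist", "skills": [{"id": "code-review"}, {"id": "deploy"}]},
]

print(select_agents(cards, "deploy"))  # ['deployer', 'generalist']
```

The interesting design question is what happens next: with multiple candidates, the orchestrator has to rank them, which is where constraints and operational metadata from the card earn their place.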

Challenges and Open Questions

A2A is promising, but several challenges need to be addressed:

Trust and security. When agents from different organizations communicate, how do you establish trust? An agent should not blindly execute tasks from any other agent that asks. Authentication, authorization, and capability verification are essential.

Error handling across boundaries. When an agent delegates a task and the receiving agent fails, how is the failure communicated? How does the delegating agent decide whether to retry, find a different agent, or escalate to a human?

Context management. How much context should be shared between agents? Too little context and the receiving agent cannot do good work. Too much context and you risk leaking sensitive information across organizational boundaries.

Debugging distributed agent systems. Debugging a single agent is hard. Debugging a system of agents communicating across A2A is significantly harder. The observability tools for multi-agent systems do not exist yet.

Standardization timeline. A2A is new. The specification will evolve. Early adopters risk building on a moving foundation. This is the classic early-adopter trade-off.

My Perspective

A2A validates the direction I have been building toward with Loki Mode. Multi-agent systems are not a niche; they are the future of AI-assisted work. And when multi-agent systems need to cross organizational and vendor boundaries, you need a standard protocol.

I plan to add A2A support to Loki Mode. The agent swarms already have well-defined capabilities, communication patterns, and task delegation mechanisms. Exposing those through A2A would enable Loki Mode agents to collaborate with agents from other systems.

The combination of MCP for tool access and A2A for agent collaboration creates a complete communication stack for AI agents. Tools on one axis, peers on the other. An agent with both can use any tool and work with any other agent.

We are building the internet of agents. MCP is the API layer. A2A is the networking layer. The infrastructure is coming together.
