AI & Agents

How to Use Google's A2A Protocol for Agent-to-Agent Communication

Google's Agent2Agent (A2A) protocol gives AI agents a standard way to find each other, exchange tasks, and collaborate across different frameworks. This guide covers how A2A works, how it fits alongside MCP, and how to connect A2A agents with persistent shared storage for real production workflows.

Fast.io Editorial Team 13 min read
Diagram showing AI agents communicating through the A2A protocol

What Is Google's A2A Protocol?

The Agent-to-Agent (A2A) protocol is Google's open standard for enabling AI agents built on different frameworks to discover, communicate, and collaborate with each other across organizational boundaries. Google announced A2A in April 2025 with over 50 launch partners, including Salesforce, SAP, Atlassian, and LangChain. The protocol was later donated to the Linux Foundation, establishing vendor-neutral governance.

A2A solves a specific problem: agents built with different toolkits (LangChain, CrewAI, Google ADK, AutoGen) have no built-in way to talk to each other. Without a shared communication protocol, multi-agent systems become brittle custom integrations. A2A provides that shared language.

The protocol works through three core mechanisms:

  • Agent Cards: JSON documents that describe what an agent can do, published at a well-known URL (typically /.well-known/agent.json). Think of them as a digital business card that other agents can read to decide whether to delegate work
  • Tasks: The unit of work in A2A. A client agent sends a task to a remote agent, and that task moves through a defined lifecycle (submitted, working, input-required, completed, failed, canceled)
  • Messages and Artifacts: Agents exchange messages during task execution and produce artifacts (files, structured data) as output

A2A uses JSON-RPC 2.0 over HTTPS for communication, with support for server-sent events (SSE) for streaming responses. This makes it straightforward to implement in any language with an HTTP client.

AI agent communication and workspace visualization

A2A vs MCP: How They Work Together

The most common question about A2A is how it relates to the Model Context Protocol (MCP). They are not competitors. They handle different layers of agent architecture.

MCP is vertical: it connects a single agent to tools, data sources, and services. When your agent needs to read a file, query a database, or call an API, MCP provides the standardized interface for that.

A2A is horizontal: it connects agents to other agents. When your agent needs to delegate a task to a specialist agent, or coordinate work across multiple agents owned by different teams, A2A provides the protocol for that.

Here is a practical breakdown:

  • MCP handles: File access, database queries, API calls, tool execution, context retrieval
  • A2A handles: Agent discovery, task delegation, status updates, result collection, multi-agent coordination

In a production multi-agent system, you use both. An orchestrator agent discovers specialist agents through A2A agent cards, delegates tasks using A2A's task protocol, and each specialist agent uses MCP to access the tools and data it needs to complete its work.

For example, a research agent might receive a task via A2A to "analyze Q4 sales data." That agent then uses MCP to connect to a Fast.io workspace where the sales reports are stored, queries the files using built-in RAG, and returns the analysis as an A2A artifact.

According to Google's A2A documentation, this complementary design was intentional from the start. The A2A specification explicitly references MCP as the tool-integration layer, while A2A focuses on the agent-to-agent coordination that MCP does not address.

Fast.io features

Give your A2A agents persistent storage

Connect A2A agents to shared workspaces with 251 MCP tools. 50GB free, no credit card, works with any LLM.

Agent Cards: How Discovery Works

Agent Cards are the foundation of A2A. Before one agent can communicate with another, it needs to know what that agent can do, where to reach it, and what inputs it expects.

An Agent Card is a JSON document hosted at a well-known endpoint. Here is a simplified example:

{
  "name": "Document Analysis Agent",
  "description": "Analyzes documents and extracts key insights",
  "url": "https://analysis-agent.example.com/a2a",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false
  },
  "defaultInputModes": ["text/plain", "application/pdf"],
  "defaultOutputModes": ["text/plain", "application/json"],
  "skills": [
    {
      "id": "document-summary",
      "name": "Document Summary",
      "description": "Summarizes uploaded documents",
      "tags": ["analysis", "summary", "documents"]
    }
  ]
}

Key Fields in an Agent Card

  • name and description: Human-readable identity. Client agents use these to decide if this agent is the right one for a given task
  • url: The endpoint where A2A requests should be sent
  • capabilities: What protocol features the agent supports (streaming, push notifications)
  • skills: A list of specific things the agent can do. Each skill has an ID, name, description, and tags for matching
  • defaultInputModes / defaultOutputModes: MIME types the agent accepts and produces

Discovery Flow

  1. A client agent knows the base URL of a potential remote agent
  2. It fetches /.well-known/agent.json from that URL
  3. It reads the skills and capabilities to decide if delegation makes sense
  4. If the match is good, it sends a task using JSON-RPC

For larger systems, you can build an agent registry that indexes agent cards from across your organization. This lets orchestrator agents search for the right specialist by skill tags rather than maintaining hardcoded URLs.
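The discovery flow and registry idea above can be sketched in a few lines of Python. The `/.well-known/agent.json` path comes from the article; the tag-matching registry helper is an illustrative convenience, not part of the A2A specification.

```python
import json
from urllib.request import urlopen

def fetch_agent_card(base_url: str) -> dict:
    """Fetch an Agent Card from the well-known discovery endpoint."""
    with urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

def find_agents_with_skill(cards: list[dict], tag: str) -> list[dict]:
    """Return cards whose skills carry the given tag (a simple registry lookup)."""
    return [
        card for card in cards
        if any(tag in skill.get("tags", []) for skill in card.get("skills", []))
    ]

# Example with an in-memory registry of already-fetched cards:
registry = [
    {"name": "Document Analysis Agent",
     "url": "https://analysis-agent.example.com/a2a",
     "skills": [{"id": "document-summary", "tags": ["analysis", "summary"]}]},
    {"name": "Translation Agent",
     "url": "https://translate.example.com/a2a",
     "skills": [{"id": "translate", "tags": ["language"]}]},
]
matches = find_agents_with_skill(registry, "analysis")
```

In a real orchestrator, `fetch_agent_card` would populate the registry periodically, and skill tags would drive delegation decisions instead of hardcoded URLs.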

Task Lifecycle and Message Flow

Once an agent discovers a suitable remote agent through its Agent Card, the real work happens through the task lifecycle. A2A defines clear states that every task moves through:

Submitted: The client agent sends a new task with an initial message.

Working: The remote agent has accepted the task and is processing it.

Input Required: The remote agent needs additional information to continue.

Completed: The task finished successfully, with artifacts available.

Failed: Something went wrong during execution.

Canceled: Either side terminated the task.
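These states form a small state machine. The transition map below is an illustrative reading of the lifecycle described above, not a verbatim table from the A2A specification; consult the spec for the authoritative transition rules.

```python
# Allowed transitions between the A2A task states described above.
# This map is an assumption drawn from the lifecycle description,
# not copied from the A2A specification.
TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},
    "completed": set(),   # terminal
    "failed": set(),      # terminal
    "canceled": set(),    # terminal
}

def can_transition(current: str, target: str) -> bool:
    """Check whether a task may legally move from one state to another."""
    return target in TRANSITIONS.get(current, set())
```

Validating transitions like this in your server makes task-state bugs (such as "completing" a canceled task) fail loudly instead of silently corrupting state.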

Sending a Task

A client agent creates a task by sending a JSON-RPC request to the remote agent's endpoint. The request includes a unique task ID and an initial message describing what needs to be done.

{
  "jsonrpc": "2.0",
  "method": "tasks/send",
  "params": {
    "id": "task-abc-123",
    "message": {
      "role": "user",
      "parts": [
        {
          "type": "text",
          "text": "Analyze the uploaded sales report and identify top 3 trends"
        }
      ]
    }
  }
}
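A client can construct that payload programmatically. The builder below mirrors the JSON shown above; generating a unique task ID with `uuid` is an implementation choice, not a protocol requirement.

```python
import json
import uuid

def build_task_request(task_id: str, text: str) -> dict:
    """Build a tasks/send JSON-RPC request matching the payload shown above."""
    return {
        "jsonrpc": "2.0",
        "method": "tasks/send",
        "params": {
            "id": task_id,
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }

req = build_task_request(f"task-{uuid.uuid4()}", "Analyze the uploaded sales report")
body = json.dumps(req).encode()  # POST this to the `url` from the agent's card
```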

Handling Artifacts

When a remote agent completes work, it produces artifacts. These can be text responses, structured JSON, or references to files. This is where A2A connects to storage: agents that produce file-based outputs need a persistent place to store them.

A common pattern is for the remote agent to upload its output to a shared workspace using MCP tools, then return a reference to that file as an A2A artifact. This keeps the A2A messages lightweight while making the actual output accessible to all stakeholders, both agents and humans.
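A file-reference artifact in this pattern might look like the sketch below. The part schema (a `file` part carrying a `uri`) is an assumption for illustration; check the current A2A specification for the exact field names, and note the workspace URL is hypothetical.

```python
def file_artifact(name: str, uri: str, mime_type: str) -> dict:
    """Build an A2A-style artifact that references a file in a shared
    workspace instead of embedding the bytes inline. The exact part
    schema here is illustrative, not copied from the A2A spec."""
    return {
        "name": name,
        "parts": [{
            "type": "file",
            "file": {"name": name, "uri": uri, "mimeType": mime_type},
        }],
    }

artifact = file_artifact(
    "q4-analysis.pdf",
    "https://workspace.example.com/files/q4-analysis.pdf",  # hypothetical workspace URL
    "application/pdf",
)
```

The key design point survives any schema differences: the artifact stays a few hundred bytes while the actual output lives in persistent, access-controlled storage.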

Streaming Responses

For long-running tasks, A2A supports streaming via SSE. The client agent opens a streaming connection, and the remote agent sends incremental updates as it works. This is useful for tasks like document analysis where you want to show progress rather than waiting for a single final response.
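On the wire, SSE is a plain-text stream of `event:`/`data:` fields separated by blank lines. The minimal parser below shows the framing; a production client would use an SSE library and handle multi-line `data:` fields, which this sketch does not.

```python
def parse_sse(stream_text: str) -> list[dict]:
    """Parse a raw SSE stream into a list of {event, data} dicts.
    Events are separated by blank lines; only single-line `event:`
    and `data:` fields are handled in this minimal sketch."""
    events, current = [], {}
    for line in stream_text.splitlines():
        if not line.strip():
            if current:
                events.append(current)
                current = {}
        elif line.startswith("event:"):
            current["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            current["data"] = line[len("data:"):].strip()
    if current:
        events.append(current)
    return events

raw = ('event: status\ndata: {"state": "working"}\n\n'
       'event: status\ndata: {"state": "completed"}\n\n')
updates = parse_sse(raw)
```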

AI-powered document analysis with audit trail

Connecting A2A Agents to Shared Storage

Most A2A tutorials stop at "send a message, get a response." But production multi-agent systems need to handle files. Research agents produce reports. Analysis agents process datasets. Content agents generate documents. All of these outputs need to go somewhere persistent and accessible.

This is where combining A2A with MCP-connected storage changes the picture. Instead of passing large files through A2A messages (which are designed for lightweight communication), agents can share files through a common workspace.

The Pattern

  1. An orchestrator agent delegates a task via A2A to a specialist agent
  2. The specialist agent does its work and uploads output files to a shared Fast.io workspace using MCP
  3. The specialist returns an A2A artifact with a reference to the uploaded file
  4. Other agents (or humans) access the file from the same workspace

This approach has several advantages over passing files inline:

  • Persistence: Files stay available after the task completes. No ephemeral storage that expires
  • Collaboration: Humans can review agent output in the same workspace through the browser UI
  • Intelligence: With Intelligence Mode enabled, uploaded files are automatically indexed for RAG. Other agents can search and query them by meaning, not just filename
  • Access control: Workspace permissions ensure only authorized agents and humans can access the files
  • Versioning: Every file change is tracked. You can see what an agent produced at each step

Agents connect via Streamable HTTP or SSE transport to access 251 MCP tools covering file upload, workspace management, sharing, AI chat, and more. The free agent tier includes 50GB storage, 5,000 monthly credits, and works with any LLM (Claude, GPT-4, Gemini, or local models). No credit card required.

Example: Multi-Agent Research Pipeline

Consider a research pipeline with three agents communicating via A2A:

  1. Data Collection Agent receives a task to gather market data. It uses MCP to import files from external sources via URL Import (pulling from Google Drive, OneDrive, or the web without downloading locally) into a Fast.io workspace
  2. Analysis Agent receives a task to process the collected data. It uses MCP to access the files already in the workspace, runs its analysis, and uploads a summary report
  3. Report Agent receives a task to create a final deliverable. It queries the workspace using built-in RAG to pull insights from all uploaded files, generates a formatted report, and uploads it to a branded share

At the end, the orchestrator agent can transfer ownership of the workspace to a human stakeholder. The human gets access to all files, the full audit trail, and the AI chat history. The agent keeps admin access for future updates.

Neural index showing how files are automatically indexed for AI search

Building Your First A2A Integration

Google provides official SDKs for Python, JavaScript, Java, C#/.NET, and Go. The Python SDK is the most mature and has the best documentation. Here is a practical path to getting started.

Step 1: Define Your Agent Card

Start by creating a JSON file that describes your agent's capabilities. Host it at /.well-known/agent.json on your agent's HTTP endpoint.

Focus on writing clear skill descriptions. Other agents will use these descriptions to decide whether to delegate work to you. Vague descriptions like "does stuff with data" will not get your agent selected.

Step 2: Implement the A2A Server

Your agent needs an HTTP server that handles JSON-RPC requests. The official SDKs provide base classes that handle protocol-level concerns (message parsing, task state management, error formatting). You implement the business logic.

The key methods to implement:

  • on_task_send: Called when a client sends a new task. This is where your agent does its work
  • on_task_get: Returns the current state of a task
  • on_task_cancel: Handles cancellation requests
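An in-memory sketch of those three handlers is shown below. This is not the official SDK's API, just an illustration of the responsibilities each method carries; the SDK base classes would wire equivalents of these to a JSON-RPC HTTP endpoint.

```python
class SimpleA2AServer:
    """In-memory sketch of the handler methods listed above.
    Not the official SDK interface; for illustration only."""

    def __init__(self):
        self.tasks = {}

    def on_task_send(self, task_id: str, message: dict) -> dict:
        # Accept the task, do the work synchronously, and record the result.
        self.tasks[task_id] = {"state": "working", "message": message}
        self.tasks[task_id].update({
            "state": "completed",
            "artifacts": [{"parts": [{"type": "text", "text": "done"}]}],
        })
        return self.tasks[task_id]

    def on_task_get(self, task_id: str) -> dict:
        # Return the current state of a known task.
        return self.tasks.get(task_id, {"state": "unknown"})

    def on_task_cancel(self, task_id: str) -> dict:
        # Honor cancellation unless the task already reached a terminal state.
        task = self.tasks.get(task_id)
        if task and task["state"] not in ("completed", "failed"):
            task["state"] = "canceled"
        return task or {"state": "unknown"}

server = SimpleA2AServer()
done = server.on_task_send("task-1", {"role": "user", "parts": []})
```

A real implementation would run the work asynchronously and report `working` until completion; collapsing it into one call here keeps the sketch readable.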

Step 3: Add Storage with MCP

For agents that produce file outputs, connect to a shared workspace. Install the Fast.io MCP server or use the OpenClaw integration (clawhub install dbalve/fast-io) for zero-config setup.

With MCP connected, your agent can:

  • Upload output files that persist beyond the task lifecycle
  • Read files uploaded by other agents in the same workspace
  • Use file locks to prevent conflicts when multiple agents write to the same workspace
  • Query existing files using built-in RAG for context

Step 4: Test with a Client Agent

Build a simple client that fetches your agent card, sends a task, and polls for results. The SDKs include example clients that handle the full task lifecycle, including streaming and cancellation.
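The polling half of such a client can be reduced to a small loop. Here `get_status` is injected as a callable so the loop can be exercised without a live agent; in production it would issue a `tasks/get` JSON-RPC call to the remote endpoint.

```python
import time

def poll_task(get_status, task_id: str, interval: float = 0.0,
              max_polls: int = 10) -> dict:
    """Poll a task until it reaches a terminal state or gives up.
    `get_status` is any callable returning the task's current state dict."""
    terminal = {"completed", "failed", "canceled"}
    for _ in range(max_polls):
        status = get_status(task_id)
        if status["state"] in terminal:
            return status
        time.sleep(interval)
    return {"state": "timeout"}

# Stub that completes on the third poll:
responses = iter([{"state": "submitted"}, {"state": "working"},
                  {"state": "completed"}])
final = poll_task(lambda tid: next(responses), "task-abc-123")
```

For agents that advertise `"streaming": true` in their card, prefer the SSE path over polling; the loop above is the fallback for agents that do not.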

Common Pitfalls

  • Forgetting to handle input-required state: Your agent might need clarification during a task. Implement the input-required flow so client agents can provide additional context
  • Ignoring task cancellation: Long-running tasks should check for cancellation signals periodically. Do not ignore cancel requests
  • Ephemeral file storage: If your agent produces files, store them persistently. Files in temp directories or in-memory caches will be lost when the agent restarts
  • No observability: Log task state transitions. When something fails in a multi-agent pipeline, you need to trace which agent, which task, and which state transition went wrong

Evidence and Benchmarks

A2A is still early-stage compared to MCP, but adoption signals are strong.

According to Google's announcement, over 50 technology partners joined the A2A launch, including enterprise platforms like Salesforce, SAP, ServiceNow, and developer tools like LangChain, MongoDB, and Cohere. The Linux Foundation's adoption of A2A as an open-source project in 2025 added governance structure and vendor-neutral stewardship.

According to the Linux Foundation's announcement, enterprise early adopters like Tyson Foods and Gordon Food Service are already building collaborative A2A systems for supply chain coordination and sales automation.

Official SDKs now exist for five languages (Python, JavaScript, Java, C#/.NET, Go), and Google's Agent Development Kit (ADK) includes native A2A support for both exposing and consuming A2A agents.

The practical gap remains in storage and file handling. Most A2A implementations focus on text-based message exchange. Production systems that need to handle documents, datasets, or media files still need a separate storage layer. That is where MCP-connected workspaces fill the gap, giving A2A agents persistent, searchable, collaborative file storage without building custom infrastructure.

Frequently Asked Questions

What is Google's A2A protocol?

The Agent-to-Agent (A2A) protocol is an open standard originally developed by Google and now maintained by the Linux Foundation. It lets AI agents built on different frameworks discover each other through Agent Cards, exchange tasks using JSON-RPC over HTTP, and collaborate across organizational boundaries. Over 50 technology partners supported the launch.

How does A2A differ from MCP?

A2A handles agent-to-agent communication (horizontal), while MCP handles agent-to-tool connections (vertical). Use MCP when your agent needs to access files, databases, or APIs. Use A2A when agents need to discover and delegate work to other agents. Most production systems use both together.

Can A2A agents share files?

A2A itself is designed for lightweight message exchange, not large file transfers. The recommended pattern is for agents to upload files to a shared workspace via MCP, then pass file references as A2A artifacts. Fast.io provides 251 MCP tools for persistent file storage that any A2A agent can connect to.

How do I implement A2A in my agent system?

Start by defining an Agent Card (JSON file hosted at /.well-known/agent.json) that describes your agent's skills. Then implement a JSON-RPC server using one of the official SDKs (Python, JavaScript, Java, C#/.NET, Go). Add MCP-connected storage for file handling, and test with a client agent that can discover and send tasks to yours.

Is A2A only for Google's agent framework?

No. A2A is framework-agnostic and works with LangChain, CrewAI, AutoGen, Microsoft Semantic Kernel, and any custom agent system. Google donated the protocol to the Linux Foundation to ensure vendor-neutral governance. SDKs are available for five programming languages.

What are A2A Agent Cards?

Agent Cards are JSON documents that describe an agent's identity, capabilities, and skills. They are hosted at a well-known URL and serve as a discovery mechanism. Client agents fetch the Agent Card to determine if a remote agent can handle a specific task before sending any work.
