AI & Agents

How to Build Agent Communication with Google's A2A Protocol

The Agent-to-Agent (A2A) protocol is an open standard by Google that lets AI agents built on different frameworks discover each other's capabilities and communicate, regardless of underlying technology. This guide walks through the core building blocks of A2A, from agent cards to task lifecycle management, and shows how persistent file storage solves the data-sharing problem that A2A itself doesn't address.

Fast.io Editorial Team · 10 min read
Google's A2A protocol standardizes how autonomous agents find and talk to each other.

What is the A2A Protocol?

The Agent-to-Agent (A2A) protocol is an open standard that enables AI agents to discover, authenticate, and communicate with each other over the internet. Google introduced A2A in April 2025 with backing from over 50 technology partners, including Atlassian, Salesforce, SAP, LangChain, and ServiceNow. The protocol is now maintained under the Linux Foundation with an Apache 2.0 license.

A2A solves a specific problem: before it existed, a LangChain agent had no standard way to talk to a CrewAI agent, an AutoGen agent, or any other agent built on a different framework. Developers had to write custom API bridges for every agent pair. A2A eliminates that with a single, framework-agnostic communication layer.

The protocol handles three core concerns:

  • Discovery: Agents publish "agent cards" that describe what they can do, so other agents can find the right collaborator for a task
  • Communication: Messages flow over JSON-RPC on HTTPS, with support for synchronous requests and asynchronous streaming via Server-Sent Events (SSE)
  • Task management: Every interaction is wrapped in a task with a defined lifecycle (submitted, working, input-required, completed, failed, canceled), so both sides can track progress

A2A is designed to complement the Model Context Protocol (MCP), not replace it. MCP handles how an agent connects to tools and data sources. A2A handles how agents connect to each other. Most production multi-agent systems will use both.

Visualization of a neural network representing connected AI agents

How Agent Cards Work

Agent cards are the foundation of A2A's discovery system. Every A2A-compatible agent publishes a JSON document at a well-known URL (typically /.well-known/agent.json) that describes its capabilities, accepted input types, and authentication requirements. Here's a simplified agent card:

{
  "name": "Document Processor",
  "description": "Extracts structured data from PDFs and images",
  "url": "https://doc-agent.example.com",
  "capabilities": {
    "streaming": true,
    "pushNotifications": true
  },
  "skills": [
    {
      "id": "extract-invoice",
      "name": "Invoice Extraction",
      "description": "Extracts line items and totals from invoice PDFs",
      "inputModes": ["application/pdf", "image/png"],
      "outputModes": ["application/json"]
    }
  ],
  "authentication": {
    "schemes": ["oauth2"]
  }
}

A client agent reads this card to decide whether the remote agent can handle a specific job. The skills array is the most important part: each skill declares its input and output types, so the client knows exactly what to send and what to expect back.

Agent cards also support capability signing (added in recent A2A versions), which lets agents cryptographically verify that a card hasn't been tampered with. This matters in enterprise environments where agents from different organizations need to trust each other.

The discovery model is intentionally simple. There's no central registry. Agents either know each other's URLs directly, or they can be listed in a directory service. This keeps the protocol decentralized and easy to deploy.
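As a sketch of that discovery step, a client might fetch a remote agent's card and pick a skill by media type. The card fields follow the simplified example above; the fetch assumes the well-known URL mentioned earlier:

```python
def find_skill(card: dict, input_mode: str, output_mode: str):
    """Return the first skill accepting input_mode and producing output_mode."""
    for skill in card.get("skills", []):
        if input_mode in skill.get("inputModes", []) \
                and output_mode in skill.get("outputModes", []):
            return skill
    return None


def fetch_agent_card(base_url: str) -> dict:
    """Fetch the agent card from the well-known discovery URL."""
    import requests  # imported here so find_skill stays dependency-free
    resp = requests.get(f"{base_url}/.well-known/agent.json", timeout=10)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    card = fetch_agent_card("https://doc-agent.example.com")
    skill = find_skill(card, "application/pdf", "application/json")
    print(skill["id"] if skill else "no matching skill")
```

If `find_skill` returns nothing, the client simply moves on to the next candidate agent; that's the whole discovery handshake.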

The A2A Task Lifecycle

Every A2A interaction revolves around tasks. A client agent creates a task by sending a message to a remote agent, and that task moves through a defined set of states until it completes or fails.

Task states:

  1. Submitted - the client has sent a request and the remote agent has acknowledged it
  2. Working - the remote agent is actively processing the task
  3. Input Required - the remote agent needs more information from the client before continuing
  4. Completed - the task finished successfully and artifacts are available
  5. Failed - something went wrong, and the remote agent has returned an error
  6. Canceled - either side terminated the task early

The "input required" state is what makes A2A different from a simple request-response API. Agents can have multi-turn conversations within a single task. A research agent might start processing a query, realize it needs clarification, pause to ask the client agent for more details, then resume once it gets an answer.
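The client side of that loop can be sketched as a small state-to-action map. The state identifiers here use the spec's lowercase hyphenated form (e.g. "input-required"), which is an assumption worth verifying against the version you target:

```python
def next_action(state: str) -> str:
    """Map an A2A task state to the client's next move in the loop."""
    if state in ("submitted", "working"):
        return "wait"                   # poll again or keep the SSE stream open
    if state == "input-required":
        return "send-clarification"     # reply within the same task
    if state == "completed":
        return "collect-artifacts"
    if state in ("failed", "canceled"):
        return "stop"
    return "unknown"
```

The point of the map is that "input-required" is not terminal: the client answers and the same task resumes, rather than starting a fresh request.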

Artifacts are the output of a completed task. They can be structured data (JSON), text, or references to external files. This is where A2A intersects with storage: the protocol itself doesn't define how large files are exchanged. It sends messages and metadata. If your agents need to share actual files like documents, images, or datasets, you need a storage layer alongside A2A.

For real-time updates during long-running tasks, A2A supports Server-Sent Events (SSE). The client opens a streaming connection and receives status updates as the remote agent works. Version 0.3 also added gRPC transport for higher-throughput scenarios.
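SSE status updates arrive as `data:` lines separated by blank lines, per the SSE wire format. A minimal parser is easy to sketch; the JSON payload shape inside each event is an assumption, not taken from the spec:

```python
import json


def parse_sse(lines):
    """Yield decoded JSON payloads from a stream of Server-Sent Events lines."""
    data = []
    for line in lines:
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:   # a blank line terminates one event
            yield json.loads("\n".join(data))
            data = []

# Usage with requests (stream=True keeps the HTTP connection open):
#   resp = requests.get(task_events_url, stream=True)
#   for event in parse_sse(resp.iter_lines(decode_unicode=True)):
#       print(event["status"])
```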

Why A2A Agents Need Persistent File Storage

A2A is a communication protocol. It tells agents how to discover each other and exchange messages. But it doesn't store anything. Once a task completes, the artifacts exist only in that session's context. If another agent needs those files later, or if a human needs to review the output, the data needs to live somewhere persistent.

This is the gap between "agents can talk" and "agents can collaborate on real work." Consider a typical multi-agent workflow:

  1. A research agent gathers data and produces a 50-page report
  2. A summarization agent condenses it into key findings
  3. An editing agent refines the language
  4. A human reviews the final output

Each handoff involves files. The research agent's output becomes the summarization agent's input. Without shared storage, each agent has to send the full file content through A2A messages. That's inefficient for large files and leaves no audit trail.

Persistent storage solves this by giving agents a shared workspace. Instead of passing files through messages, agents write to and read from a common location. The A2A messages then carry references (URLs or file IDs) rather than raw content.

Fast.io's agent storage tier was built for exactly this pattern. Agents sign up for their own accounts, create workspaces, and manage files through a REST API or 251 MCP tools. The free tier includes 50 GB of storage, 5,000 monthly credits, and support for files up to 1 GB, with no credit card required.

Multiple users and agents collaborating in a shared workspace
Fast.io features

Give Your AI Agents Persistent Storage

Fast.io gives AI agents their own cloud storage accounts with 50 GB free. Store files, share between agents, and hand off to humans when the work is done. No credit card required.

Implementing A2A with Shared Storage

Here's how to wire A2A agent communication together with persistent file storage in practice.

Step 1: Set Up Your Agent's Storage

Before your agent publishes its A2A agent card, give it a place to store files. With Fast.io, your agent can create a workspace in a single API call:

import requests

workspace = requests.post(
    "https://api.fast.io/workspaces",
    headers={"Authorization": f"Bearer {agent_token}"},
    json={"name": "Research Agent Output", "intelligence": True}
)
workspace_id = workspace.json()["id"]

Setting intelligence: True enables auto-indexing. Other agents (or humans) can later search and query the files using natural language through Fast.io's built-in RAG.
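With the workspace created, the agent needs to put its output there. The upload endpoint, multipart field name, and response shape below are hypothetical (check Fast.io's API docs); the reference-URI format just mirrors the artifact examples later in this guide:

```python
def file_reference_uri(workspace_id: str, node_id: str) -> str:
    """Build the URI other agents will receive in A2A artifacts.
    The path format is an assumption based on the examples in this guide."""
    return f"https://fast.io/workspace/{workspace_id}/files/{node_id}"


def upload_report(workspace_id: str, path: str, token: str) -> str:
    """Upload a local file into the workspace and return its node id.
    Endpoint and response shape are hypothetical -- verify against the docs."""
    import requests  # imported here so file_reference_uri stays dependency-free
    with open(path, "rb") as fh:
        resp = requests.post(
            f"https://api.fast.io/workspaces/{workspace_id}/files",
            headers={"Authorization": f"Bearer {token}"},
            files={"file": fh},
        )
    resp.raise_for_status()
    return resp.json()["id"]
```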

Step 2: Publish Your Agent Card

Your agent card should describe what file types your agent produces. Include the storage workspace reference in your card's metadata:

{
  "name": "Research Agent",
  "skills": [
    {
      "id": "deep-research",
      "name": "Deep Research",
      "outputModes": ["application/json"],
      "description": "Researches a topic and stores the report in shared storage. Returns a file reference."
    }
  ]
}

Step 3: Return File References in Task Artifacts

When your agent completes a task, include a reference to the stored file rather than the file content itself:

{
  "taskId": "task-abc-123",
  "status": "completed",
  "artifacts": [
    {
      "type": "file_reference",
      "mimeType": "application/pdf",
      "uri": "https://fast.io/workspace/abc/files/research-report.pdf",
      "metadata": {
        "pages": 47,
        "workspace_id": "abc",
        "node_id": "file-xyz"
      }
    }
  ]
}

The downstream agent receives this reference, fetches the file from shared storage, and processes it. Both agents have access to the same workspace, and the file persists after the A2A task ends.
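That consuming side can be sketched in a few lines. The artifact fields follow the example above; the assumption that the storage service accepts the same bearer token is mine, and real auth handling would be more involved:

```python
def file_references(artifacts: list) -> list:
    """Filter a completed task's artifacts down to file references."""
    return [a for a in artifacts if a.get("type") == "file_reference"]


def fetch_artifact(artifact: dict, token: str) -> bytes:
    """Download the referenced file from shared storage."""
    import requests  # imported here so file_references stays dependency-free
    resp = requests.get(artifact["uri"],
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.content
```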

Step 4: Handle Multi-Agent Access

When multiple agents read and write to the same workspace, use file locks to prevent conflicts:

# Acquire the lock before writing
lock = requests.post(
    f"https://api.fast.io/workspaces/{workspace_id}/files/{file_id}/lock",
    headers={"Authorization": f"Bearer {agent_token}"}
)

# ... write the file here ...

# Release the lock when done
requests.delete(
    f"https://api.fast.io/workspaces/{workspace_id}/files/{file_id}/lock",
    headers={"Authorization": f"Bearer {agent_token}"}
)

This pattern works well for pipelines where agents take turns processing a file. For fully parallel workflows, each agent can write to separate files and a coordinator agent merges the results.
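Lock acquisition can fail while another agent holds the lock, so a retry with backoff is worth sketching. The `try_acquire` callable is a stand-in for the POST above (e.g. returning whether it got a 2xx response):

```python
import time


def acquire_with_retry(try_acquire, attempts: int = 5,
                       base_delay: float = 0.5) -> bool:
    """Call try_acquire() until it succeeds, doubling the delay between tries."""
    for attempt in range(attempts):
        if try_acquire():
            return True
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return False
```

Exponential backoff keeps a busy pipeline from hammering the lock endpoint while the current holder finishes its write.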

Fast.io workspace showing AI-indexed files with search and audit capabilities

A2A vs MCP: Which Protocol Does What?

A common point of confusion: A2A and MCP are not competitors. They operate at different layers of the agent stack.

MCP (Model Context Protocol) is a standard from Anthropic for connecting an AI model to external tools and data sources. It answers: "How does my agent use a database? Read a file? Call an API?" MCP gives agents hands.

A2A (Agent-to-Agent Protocol) is a standard from Google for connecting agents to each other. It answers: "How does my research agent delegate to a summarization agent? How do they share results?" A2A gives agents a phone line.

In a production multi-agent system, you'll typically use both:

  • Each agent uses MCP to connect to its own tools (file storage, databases, APIs, web browsers)
  • Agents use A2A to find each other, delegate tasks, and coordinate work

For file operations specifically, an agent might use Fast.io's MCP server (251 tools via Streamable HTTP) to manage its own files, while using A2A to tell other agents where those files are stored.

The protocols also differ in transport. MCP typically runs as a local process or over stdio, though it now supports HTTP transports. A2A runs over the internet via JSON-RPC on HTTPS, designed for cross-network, cross-organization communication. For a deeper comparison, see our A2A vs MCP guide.

Getting Started with A2A

Google provides official SDKs for Python, JavaScript, Java, C#/.NET, and Go. The fastest way to start is with the Python SDK.

Install the SDK:

pip install a2a-sdk

The SDK handles JSON-RPC message routing, task state management, and SSE streaming. You define your agent's skills and the logic for each one. The SDK takes care of the protocol layer.

Deployment options:

Google offers three paths for hosting A2A agents: Agent Engine (fully managed), Cloud Run (serverless), and GKE (maximum control). But A2A is an open protocol. You can host agents anywhere that serves HTTPS, including your own infrastructure.

Authentication:

A2A uses standard web authentication. Agent cards declare supported auth schemes (API keys, OAuth 2.0, OpenID Connect). Tokens are scoped per task and short-lived. For enterprise deployments, OpenID Connect Discovery provides automatic configuration.

The protocol specification is versioned. The current stable release is v0.3, which added gRPC support, signed agent cards, and expanded the Python SDK. Check the official docs for the latest version before building.

Frequently Asked Questions

What is Google's A2A protocol?

The Agent-to-Agent (A2A) protocol is an open standard created by Google that lets AI agents from different frameworks communicate with each other. It uses agent cards for capability discovery, JSON-RPC for messaging, and a task lifecycle model for tracking work. Google released it in April 2025 with over 50 partners, and it's now maintained under the Linux Foundation.

How does A2A work for AI agents?

A2A works in three phases. First, a client agent reads a remote agent's card (published at a well-known URL) to learn its capabilities. Second, the client sends a task request via JSON-RPC over HTTPS. Third, the remote agent processes the task, optionally streaming progress updates via Server-Sent Events, and returns artifacts when complete. Agents can also have multi-turn conversations within a single task when the remote agent needs more input.

Is A2A open source?

Yes. A2A is released under the Apache license and maintained as an open-source project under the Linux Foundation. Google donated the protocol after developing it initially. The specification, SDKs for Python, JavaScript, Java, C#, and Go, plus reference implementations are all publicly available on GitHub.

How do I implement A2A in my agent system?

Start with the official Python SDK (pip install a2a-sdk). Define your agent's skills and publish an agent card at /.well-known/agent.json. The SDK handles JSON-RPC routing and task state management. For the data layer, pair A2A with a persistent storage service so agents can share files by reference instead of passing raw content through messages.

What's the difference between A2A and MCP?

MCP (Model Context Protocol) connects an AI agent to tools and data sources like file systems, databases, and APIs. A2A connects agents to each other for collaboration and task delegation. MCP gives agents the ability to act on external resources. A2A gives them the ability to coordinate with other agents. Most production multi-agent systems use both protocols together.
