AI & Agents

How to Run MCP Servers in Docker: A Deployment Guide

Putting your MCP servers in Docker containers keeps environments consistent and makes deployment straightforward across different agent runtimes. This guide walks through the full workflow for Dockerizing Python-based MCP servers, from writing the Dockerfile to connecting via stdio and SSE.

Fast.io Editorial Team 8 min read
Abstract 3D illustration of containerized software deployment representing MCP servers in Docker

Why Run MCP Servers in Docker?

Running MCP servers directly on a host machine leads to dependency conflicts, especially when you're managing multiple agents with different Python or Node.js versions. Docker isolates the execution environment and removes this problem.

Key benefits:

  • Isolation: Dependencies stay locked inside the container. No conflicts with the host system or other servers.
  • Portability: A Dockerized MCP server runs the same way on a laptop, a cloud VM, or a Kubernetes cluster.
  • Security: The container separates the agent's code from your host filesystem.
  • Standardization: Share a Dockerfile instead of writing "how to install" docs.

MCP servers communicate via standard input/output (stdio) or HTTP with Server-Sent Events (SSE). Docker supports both, though SSE tends to be easier for containerized setups.

Helpful references: Fast.io Workspaces, Fast.io Collaboration, and Fast.io AI.

Prerequisites

Before building your MCP container, ensure you have the following installed:

  • Docker Desktop (or Docker Engine on Linux)
  • Python 3.10+ (for testing the server code locally before containerizing)
  • An MCP client for testing (e.g., Claude Desktop or a custom agent)

Consider how this fits into your broader workflow and what matters most for your team. The right choice depends on your specific requirements: file types, team size, security needs, and how you collaborate with external partners. Testing with a free account is the fast way to know if a tool works for you.

Step 1: Create a Basic Python MCP Server

First, let's create a simple MCP server that we can containerize. We'll use the official mcp Python SDK. Create a file named server.py:

from mcp.server.fastmcp import FastMCP

# Initialize the server
mcp = FastMCP("weather-agent")

@mcp.tool()
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is currently sunny and 72°F."

if __name__ == "__main__":
    mcp.run()

Create a requirements.txt file to list the dependencies:

mcp[cli]>=0.1.0
uvicorn


Step 2: The MCP Dockerfile

Now for the Dockerfile. This installs dependencies and configures the entry point for a Python MCP server.

Dockerfile:

# Use an official Python runtime as a parent image
FROM python:3.11-slim-bookworm

# Set the working directory in the container
WORKDIR /app

# Install system dependencies if needed (e.g., for certain Python packages)
RUN apt-get update && apt-get install -y --no-install-recommends gcc && rm -rf /var/lib/apt/lists/*

# Copy requirements first to take advantage of Docker's layer cache
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the server code
COPY server.py .

# Expose the port if using SSE (optional, but good practice)
EXPOSE 8000

# Run the server; server.py determines the transport
# (stdio by default, or SSE if you switch the runner)
ENTRYPOINT ["python", "server.py"]

Step 3: Building and Running the Image

With your files in place, build the Docker image. Run this command in your terminal:

docker build -t my-mcp-server .

Option A: Running in Stdio Mode

Stdio mode allows local agents (like Claude Desktop) to talk to the container as if it were a local process. You must use the `-i` flag to keep standard input open:

docker run -i --rm my-mcp-server

Note: Stdio over Docker can be tricky with buffering. Ensure your Python script flushes stdout.
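One way to guard against buffering problems is to flush after every message. A minimal sketch (the `send` helper is illustrative, not part of the MCP SDK):

```python
import sys

def send(message: str) -> None:
    # Flushing pushes the message through stdout's buffer immediately,
    # so the client on the other side of the pipe sees it right away
    # instead of waiting for the block buffer to fill.
    sys.stdout.write(message + "\n")
    sys.stdout.flush()

send('{"jsonrpc": "2.0", "id": 1, "result": "ok"}')
```

Alternatively, setting `ENV PYTHONUNBUFFERED=1` in the Dockerfile disables Python's output buffering without any code changes.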

Option B: Running in SSE Mode (Recommended)

SSE over HTTP works better with Docker because it uses standard networking ports. Update your server.py entry point to use an SSE-compatible runner, then map the ports:

docker run -p 8000:8000 my-mcp-server

Visualization of AI agent architecture connecting to containerized services
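Under the hood, SSE is just a long-lived HTTP response made of `data:` frames. A minimal sketch of the wire format (illustrative only; the MCP SDK constructs these frames for you):

```python
import json

def sse_frame(payload: dict, event: str = "message") -> str:
    # One SSE frame: an "event:" line, a "data:" line carrying the
    # JSON payload, and a blank line that terminates the frame.
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

frame = sse_frame({"jsonrpc": "2.0", "id": 1, "result": "ok"})
```

With the port mapping above, the server's SSE endpoint becomes reachable from the host at localhost:8000.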

Orchestrating Multiple Servers with Docker Compose

Most agent setups need more than one MCP server. You might have one for database access, one for file management, and another for browser automation. Running separate docker run commands for each gets messy fast.

Docker Compose lets you define all your containers in a single YAML file. It handles networking and environment variables for you, which makes it the go-to approach for local development and cloud deployments. Create a docker-compose.yml in your project root:

version: '3.8'
services:
  weather-server:
    build: .
    ports:
      - "8000:8000"
    environment:
      - API_KEY=${WEATHER_API_KEY}
  
  postgres-server:
    image: mcp/postgres-server:latest
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/mcp

With this in place, one command starts everything: docker-compose up. Environment variables, networking between containers, and volume mounts are all handled by the config file. Other developers on your team can clone the repo and boot the full stack without any manual setup.

Step 4: Connecting Your Agent

Once your container is running, you need to tell your MCP client where to find it.

Connecting via Claude Desktop (Stdio)

Update your claude_desktop_config.json:

{
  "mcpServers": {
    "docker-weather": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "my-mcp-server"]
    }
  }
}

Connecting via HTTP/SSE

If your server exposes an SSE endpoint (e.g., at http://localhost:8000/sse), point your agent to that URL. This works better for remote deployments and multi-container setups with Docker Compose.

Fast.io features

Give Your AI Agents Persistent Storage

Stop relying on ephemeral container storage. Get 50GB of free, persistent cloud storage built specifically for AI agents.

Managing State with Docker Volumes

MCP servers often need to persist state like user preferences or cached data. Docker containers lose their filesystem when they stop, so you need a volume. Mount one like this:

docker run -v mcp-data:/app/data my-mcp-server
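From the server's point of view, the mounted volume is just a directory. A sketch of persisting state under that path (the one-file-per-key layout and helper names are illustrative):

```python
import json
from pathlib import Path

def save_state(data_dir: Path, key: str, value: dict) -> Path:
    # One JSON file per key; the directory survives container restarts
    # because it lives on the named volume, not in the container layer.
    data_dir.mkdir(parents=True, exist_ok=True)
    path = data_dir / f"{key}.json"
    path.write_text(json.dumps(value))
    return path

def load_state(data_dir: Path, key: str) -> dict:
    return json.loads((data_dir / f"{key}.json").read_text())
```

Inside the container from the command above, `data_dir` would be Path("/app/data"), the mount point of the mcp-data volume.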

For agents that need to manage large files or share content with humans, local Docker volumes have limits. Cloud storage built for agents fills this gap.

Fast.io's AI Agent Storage gives you:

  • 50GB free storage that persists across container restarts
  • Access from anywhere via API, MCP tools, or a web dashboard
  • Human handoff so agents can transfer workspaces to human users when the job is done

Instead of writing to a local /app/data volume, your Dockerized agent can use the Fast.io MCP server to read and write files to the cloud.

Diagram showing persistent storage connection for ephemeral Docker containers

Frequently Asked Questions

Can I run multiple MCP servers in one Docker container?

You can, but it's not recommended. Docker works best with one process per container for isolation and scaling. Use Docker Compose to run multiple MCP servers together instead.

How do I debug a Dockerized MCP server?

Use `docker logs <container_id>` to view stderr output. If using stdio mode, be careful that debug logs don't corrupt the JSON-RPC messages on stdout; redirect debug logs to stderr.
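A minimal way to keep diagnostics off stdout in Python is to attach a stderr handler explicitly (the logger name matches the example server; the setup is a sketch, not the SDK's built-in logging):

```python
import logging
import sys

# Route all diagnostics to stderr so stdout stays reserved for the
# JSON-RPC protocol messages flowing over stdio.
handler = logging.StreamHandler(sys.stderr)
log = logging.getLogger("weather-agent")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("server starting")
```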

Is SSE or Stdio better for Docker MCP servers?

SSE (HTTP) is usually the better choice for Docker. It avoids stdin buffering issues and lets you run the server on a remote machine. Stdio works fine for local, single-machine setups.
