AI & Agents

MCP Server Tutorial: Build a File Storage Server Step-by-Step

Building an MCP server gives AI assistants like Claude and Cursor access to your files and tools. This tutorial walks you through creating a file storage MCP server from scratch: project setup, tool definitions, storage integration, and testing with Claude Desktop.

Fast.io Editorial Team 15 min read
Code editor showing MCP server implementation with file storage tools

What is an MCP Server?

A Model Context Protocol (MCP) server is a standardized way to expose tools and resources to AI assistants. The protocol defines how AI applications like Claude Desktop, Cursor, or custom agents can access external data sources, execute functions, and read resources through a common interface.

An MCP server acts as a bridge: on one side are AI assistants that need to interact with files, databases, or APIs; on the other are the systems that actually store data or perform actions. The MCP server sits in the middle, translating AI requests into actual operations.

File storage is the most requested MCP capability in 2024, with search volume for "mcp file server" growing 500% since the protocol launched. Developers want AI assistants that can read files, write outputs, organize documents, and manage workspaces without manual uploads and downloads.

An MCP server exposes three core primitives:

  • Tools: Functions the AI can call (read_file, write_file, list_directory)
  • Resources: Data the AI can access (file contents, directory listings)
  • Prompts: Reusable templates for common workflows
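
This tutorial focuses on tools, but resources have a simple shape too. Per the MCP specification, a resources/read result returns contents entries carrying a URI, a MIME type, and the data itself; the file and text below are illustrative:

```json
{
  "contents": [
    {
      "uri": "file:///workspace/notes.txt",
      "mimeType": "text/plain",
      "text": "Meeting notes from the Q1 planning session..."
    }
  ]
}
```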

Why Build Your Own MCP Server?

The official MCP GitHub repository includes basic filesystem examples, but these are limited to local directory access. Building a custom MCP server lets you connect AI assistants to cloud storage, implement access controls, add collaboration features, and integrate with existing systems.

Real-world file sharing needs go beyond reading local files. Teams need branded portals for clients, permission management, audit logs, version control, and streaming for large media files. A custom MCP server can expose these capabilities to any MCP-compatible AI assistant. Fast.io's MCP server demonstrates this with 251 tools covering uploads, downloads, workspace management, branded sharing, webhooks, and built-in RAG (Retrieval-Augmented Generation).

Most developers build a basic MCP server in 2-4 hours, but adding production features like authentication, rate limiting, and error handling takes additional time.

AI agent accessing cloud storage through MCP server interface

Prerequisites and Setup

Before building your MCP server, install the necessary tools and create your project structure.

Required Software

You'll need Node.js 18+ or Python 3.9+ depending on your language choice. The official MCP SDK supports both TypeScript and Python, with Go and Java community implementations available. For this tutorial, we'll use TypeScript with the @modelcontextprotocol/sdk package, which provides the most comprehensive documentation and examples.

Project Setup

Create a new directory and initialize the project:

mkdir file-storage-mcp
cd file-storage-mcp
npm init -y
npm install @modelcontextprotocol/sdk
npm install --save-dev typescript @types/node

Create a tsconfig.json for TypeScript configuration:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true
  }
}

Your project structure should look like:

file-storage-mcp/
├── src/
│   └── index.ts
├── package.json
└── tsconfig.json
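
One detail worth setting before writing code: the imports in this tutorial use ES module syntax, so package.json should declare "type": "module", and a build script saves typing later. A minimal sketch (the script names are a convention, not a requirement):

```json
{
  "name": "file-storage-mcp",
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node build/index.js"
  }
}
```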

Implementing Core MCP Server Structure

Every MCP server follows the same basic pattern: initialize the server, define tools, handle requests, and manage the transport layer.

Server Initialization

Create src/index.ts with the basic server setup:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  {
    name: "file-storage-mcp",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

This creates an MCP server instance with a name, version, and capabilities declaration. The capabilities object tells AI clients what features your server supports.

Transport Layer

MCP servers communicate via transport layers. The most common is stdio (standard input/output), which works with Claude Desktop and most MCP clients:

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("File Storage MCP server running on stdio");
}

main().catch((error) => {
  console.error("Server error:", error);
  process.exit(1);
});

Production servers can also use an HTTP transport (Streamable HTTP, or the older SSE transport) for web-based clients. Fast.io's MCP server at https://mcp.fast.io supports both stdio and Streamable HTTP to work with any MCP client.

Defining File Storage Tools

Tools are the functions your MCP server exposes to AI assistants. Each tool needs a schema (what parameters it accepts) and a handler (what it does when called).

List Tools Handler

Tell clients what tools are available:

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "read_file",
        description: "Read contents of a file",
        inputSchema: {
          type: "object",
          properties: {
            path: {
              type: "string",
              description: "File path to read",
            },
          },
          required: ["path"],
        },
      },
      {
        name: "write_file",
        description: "Write content to a file",
        inputSchema: {
          type: "object",
          properties: {
            path: {
              type: "string",
              description: "File path to write",
            },
            content: {
              type: "string",
              description: "Content to write",
            },
          },
          required: ["path", "content"],
        },
      },
      {
        name: "list_directory",
        description: "List files in a directory",
        inputSchema: {
          type: "object",
          properties: {
            path: {
              type: "string",
              description: "Directory path",
            },
          },
          required: ["path"],
        },
      },
    ],
  };
});

Each tool schema uses JSON Schema format to define parameters. The AI assistant uses these schemas to understand what arguments to pass when calling tools.
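
Under the hood, tool calls travel as JSON-RPC 2.0 messages. When the assistant decides to read a file, the client sends a tools/call request shaped roughly like this (the id and arguments are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "./notes.txt" }
  }
}
```

The SDK parses these messages for you; your handler only sees request.params.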

Call Tool Handler

Implement the actual tool logic:

import * as fs from "fs/promises";
import * as path from "path";

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name } = request.params;
  const args = (request.params.arguments ?? {}) as Record<string, unknown>;

  try {
    if (name === "read_file") {
      const content = await fs.readFile(args.path as string, "utf-8");
      return {
        content: [
          {
            type: "text",
            text: content,
          },
        ],
      };
    }

    if (name === "write_file") {
      await fs.writeFile(
        args.path as string,
        args.content as string,
        "utf-8"
      );
      return {
        content: [
          {
            type: "text",
            text: `Successfully wrote to ${args.path}`,
          },
        ],
      };
    }

    if (name === "list_directory") {
      const files = await fs.readdir(args.path as string);
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(files, null, 2),
          },
        ],
      };
    }

    throw new Error(`Unknown tool: ${name}`);
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error);
    return {
      content: [
        {
          type: "text",
          text: `Error: ${message}`,
        },
      ],
      isError: true,
    };
  }
});

This basic implementation uses Node.js filesystem APIs. For cloud storage backends, you'd replace these with SDK calls to services like AWS S3, Google Cloud Storage, or Fast.io's API.
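
One caveat with the local-filesystem version: the AI controls args.path, so without a guard it can read or write anywhere the server process can. A minimal sandboxing helper that confines all paths to a fixed storage root (BASE_DIR is a placeholder; point it at your real directory):

```typescript
import * as path from "path";

// All file tools should resolve paths against a fixed storage root so the
// AI cannot escape it with "../" sequences or absolute paths.
// BASE_DIR is a placeholder; point it at your real storage directory.
const BASE_DIR = "/srv/mcp-files";

function resolveSafe(userPath: string): string {
  const resolved = path.resolve(BASE_DIR, userPath);
  // path.relative() starts with ".." whenever `resolved` lies outside BASE_DIR
  const rel = path.relative(BASE_DIR, resolved);
  if (rel.startsWith("..") || path.isAbsolute(rel)) {
    throw new Error(`Path escapes storage root: ${userPath}`);
  }
  return resolved;
}
```

The handlers above would then call fs.readFile(resolveSafe(args.path as string), "utf-8") instead of trusting the raw path.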

Integrating Cloud Storage Backend

Local filesystem access works for development, but production MCP servers need cloud storage with features like permissions, versioning, and sharing.

Connecting to Fast.io Storage API

Fast.io provides a REST API designed for AI agents, with a free tier offering 50GB storage and 5,000 credits per month. The API supports chunked uploads up to 1GB, workspace management, and branded sharing. Install the HTTP client:

npm install axios

Create an API wrapper:

import axios from 'axios';

class FastIOStorage {
  private apiKey: string;
  private baseURL = 'https://api.fast.io/v1';

  constructor(apiKey: string) {
    this.apiKey = apiKey;
  }

  async uploadFile(path: string, content: Buffer) {
    const response = await axios.post(
      `${this.baseURL}/files`,
      {
        path,
        content: content.toString('base64'),
      },
      {
        headers: {
          'Authorization': `Bearer ${this.apiKey}`,
          'Content-Type': 'application/json',
        },
      }
    );
    return response.data;
  }

  async downloadFile(fileId: string) {
    const response = await axios.get(
      `${this.baseURL}/files/${fileId}`,
      {
        headers: { 'Authorization': `Bearer ${this.apiKey}` },
      }
    );
    return response.data;
  }

  async listWorkspace(workspaceId: string) {
    const response = await axios.get(
      `${this.baseURL}/workspaces/${workspaceId}/files`,
      {
        headers: { 'Authorization': `Bearer ${this.apiKey}` },
      }
    );
    return response.data;
  }
}

Update your tool handlers to use the cloud storage backend instead of local filesystem calls.
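
The swap is easiest when handlers depend on a small interface rather than on fs directly; the FastIOStorage wrapper can then implement it, and an in-memory stub keeps tests offline. A sketch (the interface and stub are illustrative, not part of the SDK):

```typescript
// Backend-agnostic storage contract; FastIOStorage or fs can sit behind it.
interface Storage {
  readFile(filePath: string): Promise<string>;
  writeFile(filePath: string, content: string): Promise<void>;
}

// In-memory stub, handy for testing handlers without network or disk.
class InMemoryStorage implements Storage {
  private files = new Map<string, string>();

  async readFile(filePath: string): Promise<string> {
    const content = this.files.get(filePath);
    if (content === undefined) throw new Error(`Not found: ${filePath}`);
    return content;
  }

  async writeFile(filePath: string, content: string): Promise<void> {
    this.files.set(filePath, content);
  }
}

// Tool handler logic written against the interface, not a concrete backend
async function handleReadFile(storage: Storage, filePath: string) {
  const text = await storage.readFile(filePath);
  return { content: [{ type: "text", text }] };
}
```

Swapping backends then means changing one constructor call, not rewriting every handler.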

Adding Workspace Tools

AI agents benefit from organized storage. Add workspace management tools:

{
  name: "create_workspace",
  description: "Create a new workspace for organizing files",
  inputSchema: {
    type: "object",
    properties: {
      name: {
        type: "string",
        description: "Workspace name",
      },
    },
    required: ["name"],
  },
}

This lets AI assistants create project-specific storage areas, invite collaborators, and organize files by client or use case.

Implementing RAG and Intelligent Search

Basic file storage works, but intelligent search changes how AI assistants interact with documents.

Intelligence Mode Integration

Fast.io's Intelligence Mode auto-indexes workspace files for semantic search and RAG. When enabled on a workspace, files are automatically processed, embedded, and made searchable via natural language. Add a query tool:

{
  name: "query_workspace",
  description: "Ask questions across workspace files using AI",
  inputSchema: {
    type: "object",
    properties: {
      workspace_id: {
        type: "string",
        description: "Workspace to query",
      },
      question: {
        type: "string",
        description: "Question to ask",
      },
    },
    required: ["workspace_id", "question"],
  },
}

The handler calls Fast.io's RAG endpoint, which returns answers with citations:

async queryWorkspace(workspaceId: string, question: string) {
  const response = await axios.post(
    `${this.baseURL}/workspaces/${workspaceId}/query`,
    { question },
    { headers: { 'Authorization': `Bearer ${this.apiKey}` } }
  );
  return {
    answer: response.data.answer,
    sources: response.data.sources,
  };
}

This gives your MCP server built-in document understanding without managing vector databases or embedding pipelines.

Semantic Search Tool

Add natural language file search:

{
  name: "search_files",
  description: "Search files by meaning, not just keywords",
  inputSchema: {
    type: "object",
    properties: {
      workspace_id: { type: "string" },
      query: { type: "string" },
    },
    required: ["workspace_id", "query"],
  },
}

AI assistants can ask "Show me the contract with Acme from Q3" and get relevant results even if those exact words aren't in filenames.

AI-powered semantic search interface showing document results

Adding Collaboration and Sharing Features

Enterprise MCP servers need more than file operations. Teams require secure sharing, branded portals, and permission management.

Create Branded Share Tool

Fast.io's Send/Receive/Exchange shares create branded client portals:

{
  name: "create_share",
  description: "Create a branded share link for files",
  inputSchema: {
    type: "object",
    properties: {
      file_ids: {
        type: "array",
        items: { type: "string" },
        description: "Files to share",
      },
      share_type: {
        type: "string",
        enum: ["send", "receive", "exchange"],
        description: "send (delivery), receive (upload portal), exchange (both)",
      },
      options: {
        type: "object",
        properties: {
          password: { type: "string" },
          expires_at: { type: "string" },
          allow_download: { type: "boolean" },
        },
      },
    },
    required: ["file_ids", "share_type"],
  },
}

This lets AI agents build client portals, handle file requests, and manage secure deliveries.

Webhooks for Reactive Workflows

Add webhook registration so agents can respond to file events:

{
  name: "create_webhook",
  description: "Get notified when files change",
  inputSchema: {
    type: "object",
    properties: {
      workspace_id: { type: "string" },
      events: {
        type: "array",
        items: { type: "string" },
        description: "Events to watch: file.uploaded, file.modified, etc.",
      },
      callback_url: { type: "string" },
    },
    required: ["workspace_id", "events", "callback_url"],
  },
}

Webhooks allow reactive agent workflows without polling.
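
On the receiving end, the callback_url needs a small HTTP listener. A sketch using Node's built-in http module; the endpoint path and payload shape ({ event, file }) are assumptions here, not Fast.io's documented schema:

```typescript
import * as http from "http";

// Minimal webhook receiver: invokes a callback for each file event POSTed
// to /webhooks/files. Payload shape { event, file } is an assumption.
function createWebhookReceiver(
  onEvent: (event: string, file: string) => void
): http.Server {
  return http.createServer((req, res) => {
    if (req.method !== "POST" || req.url !== "/webhooks/files") {
      res.writeHead(404).end();
      return;
    }
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const { event, file } = JSON.parse(body);
      onEvent(event, file); // e.g. kick off an agent workflow here
      res.writeHead(200).end("ok");
    });
  });
}
```

In production you would also verify a webhook signature header before trusting the payload.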

Testing Your MCP Server with Claude Desktop

Once your MCP server is built, connect it to Claude Desktop to test the tools.

Build and Configure

Add a build script to package.json if you haven't already ("scripts": { "build": "tsc" }), then compile your TypeScript server:

npm run build

Edit Claude Desktop's MCP settings file (location varies by OS):

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your server:

{
  "mcpServers": {
    "file-storage": {
      "command": "node",
      "args": ["/absolute/path/to/file-storage-mcp/build/index.js"]
    }
  }
}

Restart Claude Desktop. Your MCP server tools should appear in the available tools list.

Test Workflows

Try these prompts to verify your tools work:

  • "Read the file at ./notes.txt"
  • "List all files in my Documents folder"
  • "Create a workspace called 'Q1 Reports'"
  • "Upload this data to the workspace"
  • "Search the workspace for mentions of revenue targets"
  • "Create a share link for these files with password protection"

Claude will call your MCP tools to fulfill these requests. Check the server logs to see tool invocations.

Debugging Common Issues

If tools don't appear, check:

  • Server process starts without errors (console.error logs in terminal)
  • Config file path is absolute, not relative
  • JSON syntax is valid (no trailing commas)
  • Node.js version is 18 or higher

Use console.error() for logging since MCP uses stdout for protocol communication.

Production Deployment Considerations

Development MCP servers run locally via stdio. Production servers need authentication, rate limiting, error handling, and monitoring.

Streamable HTTP Transport

For web-based clients or hosted deployments, add an HTTP transport. The SDK's SSE transport pairs a GET endpoint (which opens the event stream) with a POST endpoint (which receives client messages):

import express from "express";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();
let transport: SSEServerTransport;

app.get("/sse", async (req, res) => {
  transport = new SSEServerTransport("/messages", res); // client POSTs here
  await server.connect(transport);
});
app.post("/messages", async (req, res) => {
  await transport.handlePostMessage(req, res);
});

Fast.io's MCP server at https://mcp.fast.io uses Streamable HTTP so any client can connect via HTTPS without stdio.

Authentication

Add API key validation:

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // Clients can attach metadata under params._meta (reserved by the MCP spec)
  const apiKey = request.params._meta?.apiKey as string | undefined;
  if (!apiKey || !(await validateKey(apiKey))) {
    throw new Error("Invalid API key");
  }
  // ... handle tool call
});
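
The validateKey helper is left undefined above; one stdlib-only sketch uses Node's constant-time comparison (the in-memory key list is a stand-in for a real key store):

```typescript
import { timingSafeEqual } from "crypto";

// Stand-in key store; in production load keys from a database or secrets manager.
const VALID_KEYS = ["test-key-123"];

function validateKey(candidate: string): boolean {
  const cand = Buffer.from(candidate);
  // Compare in constant time to avoid leaking key contents via timing
  return VALID_KEYS.some((key) => {
    const buf = Buffer.from(key);
    return cand.length === buf.length && timingSafeEqual(cand, buf);
  });
}
```

The length check matters because timingSafeEqual throws on buffers of unequal length.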

Rate Limiting

Prevent abuse with per-key rate limits:

import rateLimit from 'express-rate-limit';

const limiter = rateLimit({
  windowMs: 60 * 1000,
  max: 100,
  keyGenerator: (req) => String(req.headers['x-api-key'] ?? req.ip),
});

app.use('/mcp', limiter);

Error Handling

Return structured errors:

catch (error) {
  const err = error as { message?: string; code?: string; details?: unknown };
  return {
    content: [{
      type: "text",
      text: JSON.stringify({
        error: err.message ?? String(error),
        code: err.code ?? "INTERNAL_ERROR",
        details: err.details ?? null,
      }),
    }],
    isError: true,
  };
}

Monitoring and Observability

Log tool calls, response times, and errors:

// `logger` stands for any structured logger (e.g. pino or winston)
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const start = Date.now();
  const { name } = request.params;

  try {
    const result = await handleTool(request);
    logger.info({ tool: name, duration: Date.now() - start });
    return result;
  } catch (error) {
    logger.error({ tool: name, error: error.message });
    throw error;
  }
});

Next Steps and Resources

You've built a working MCP server with file storage, cloud integration, and AI-powered search. Here's how to expand it further.

Add More Tools

Fast.io's MCP server includes 251 tools covering:

  • File locks (prevent concurrent edit conflicts)
  • Ownership transfer (agent builds, transfers to human)
  • URL import (pull from Google Drive, Dropbox, etc.)
  • Comment threads (contextual discussions on files)
  • Activity logs (audit trail of all actions)

Each additional tool follows the same pattern: define schema, implement handler, test with Claude.

Multi-LLM Support

MCP works with any compatible client. Test your server with:

  • Claude Desktop (stdio)
  • Cursor IDE (stdio)
  • VS Code with MCP extension (stdio)
  • Custom agents (HTTP/SSE)

The protocol is LLM-agnostic, so your server works with GPT-4, Gemini, LLaMA, or local models.

OpenClaw Integration

For zero-config deployment, publish your MCP server as an OpenClaw skill on ClawHub. Fast.io's skill (clawhub install dbalve/fast-io) provides 14 file management tools with no configuration files or environment variables needed.

Community Resources

Resources for deeper MCP development:

  • Official MCP documentation: modelcontextprotocol.io
  • GitHub examples: github.com/modelcontextprotocol/servers
  • Fast.io MCP server docs: mcp.fast.io/skill.md
  • Community Discord: Model Context Protocol server

The MCP ecosystem grows daily, with new servers, clients, and tools launching regularly.

Frequently Asked Questions

What language should I use to build an MCP server?

TypeScript and Python have official SDKs from the Model Context Protocol team, with the most comprehensive documentation and examples. TypeScript is recommended for production servers due to strong typing and npm ecosystem. Go and Java have community-maintained implementations if you prefer those languages. All languages can implement the same MCP protocol and work with any MCP client.

How do I connect Claude Desktop to my custom MCP server?

Edit Claude Desktop's config file (~/Library/Application Support/Claude/claude_desktop_config.json on macOS or %APPDATA%\Claude\claude_desktop_config.json on Windows) and add your server under mcpServers with the command and absolute path to your server script. Restart Claude Desktop, and your tools will appear in the available tools list. Use console.error for logging since MCP uses stdout for protocol communication.

Can I use my MCP server with other AI assistants besides Claude?

Yes, MCP is an open protocol that works with any compatible client. Cursor IDE, VS Code with MCP extensions, and custom agents can all connect to your MCP server. The protocol is LLM-agnostic, so the same server works with GPT-4, Gemini, LLaMA, or local models without changes.

What's the difference between MCP tools and resources?

Tools are functions the AI can call to perform actions (write_file, create_workspace, send_email). Resources are data the AI can read (file contents, directory listings, database records). Tools have side effects and change state. Resources are read-only and provide context. A complete MCP server typically exposes both tools for actions and resources for data access.

How do I handle authentication in a production MCP server?

For stdio transport (local clients like Claude Desktop), authentication isn't needed since the client runs on the user's machine. For HTTP/SSE transport (web clients or hosted servers), implement API key validation in your request handlers. Check for valid keys before processing tool calls, and return structured errors for invalid credentials. Use environment variables or secure key management systems to store API keys.

Can my MCP server access files stored in cloud services like Google Drive?

Yes, by implementing tools that call the cloud provider's API. For example, Fast.io's URL Import feature pulls files from Google Drive, OneDrive, Box, and Dropbox via OAuth without requiring local downloads. Your MCP server works as a bridge, translating AI tool calls into cloud API requests. This requires API credentials and OAuth flow implementation for user authorization.

How many tools can an MCP server expose?

There's no hard limit on the number of tools. Fast.io's MCP server exposes 251 tools covering file operations, workspace management, sharing, webhooks, and RAG. However, AI assistants work best with focused tool sets grouped by capability. Consider creating multiple specialized MCP servers (one for files, one for databases, one for APIs) rather than one mega-server with hundreds of unrelated tools.

What's the average time to build a basic MCP server?

A basic MCP server with 3-5 simple tools takes 2-4 hours for developers familiar with TypeScript or Python. This includes project setup, tool definitions, and basic testing. Adding production features like cloud storage integration, authentication, rate limiting, error handling, and monitoring can add several more days depending on requirements.

Do I need to manage a vector database if I want RAG in my MCP server?

Not if you use a service with built-in RAG like Fast.io's Intelligence Mode. When enabled on a workspace, files are automatically indexed, embedded, and made queryable via natural language with no vector database management required. If building RAG from scratch, you'll need a vector database (Pinecone, Weaviate, etc.), embedding generation, and chunking logic.

Can multiple AI agents use the same MCP server simultaneously?

Yes, MCP servers support concurrent connections. For stdio transport, each client connection gets its own process. For HTTP/SSE transport, multiple clients can connect to the same server instance. Implement file locks or transaction handling if tools modify shared state to prevent concurrent edit conflicts. Fast.io's file lock tools let agents acquire and release locks for safe concurrent access.

Related Resources

Fast.io features

Skip Building: Use Fast.io's Production-Ready MCP Server

Fast.io provides 251 MCP tools for file storage, workspaces, sharing, and RAG out of the box. Free tier includes 50GB storage, no credit card required. Connect Claude, Cursor, or any MCP client in minutes.