How to Get Started with the OpenClaw Python SDK
The OpenClaw Python SDK lets you manage agents, workspaces, and tools from Python code instead of the CLI or chat interface. This guide covers installation, authentication, and building a basic agent with persistent storage. Fast.io is used as the storage backend in the examples, but the SDK works with other backends too.
What the Python SDK is for
The CLI and chat interface work well for manual tasks. The Python SDK is for when you want to integrate OpenClaw into application code — trigger agents from a webhook, spin up workspaces based on user input, or wire agent output into an existing Django or FastAPI app.
The SDK gives you direct access to OpenClaw's agent management and tool execution without writing raw API calls. You can create agents, assign storage backends, run tasks, and handle responses in code.
A few common use cases:
- Trigger agents from database events or incoming webhooks
- Create isolated workspaces per user or per job in a multi-tenant app
- Build agent pipelines where one agent's output feeds another's input
- Embed agent responses into an existing web application
The examples below use Fast.io as the storage backend, since it integrates with OpenClaw via MCP and has a free tier for development. If you're using a different storage backend (S3, a shared filesystem, etc.), the agent management parts of the SDK stay the same — you'd swap out the workspace and file operations.
Installation and setup
You need Python 3.9 or higher. Install the SDK with pip:
```shell
pip install openclaw
```
For storage operations with Fast.io, you'll also need a Fast.io account. Generate an API key from the Fast.io dashboard under Settings > API Keys.
Set your credentials as environment variables:
```shell
export OPENCLAW_API_KEY="your_openclaw_key"
export FASTIO_API_KEY="your_fastio_key"
```
Verify the install:
```python
import openclaw
print(openclaw.__version__)
```
If that prints a version number, you're ready to go.
Note: The OpenClaw SDK is in active development. Check the changelog before upgrading in production — the API surface has been evolving.
Initializing the client and creating an agent
The `OpenClawClient` class is the main entry point. It reads credentials from your environment variables automatically.
```python
from openclaw import OpenClawClient

client = OpenClawClient()

agent = client.agents.create(
    name="DataAnalyst_01",
    model="claude-3-5-sonnet",
    description="Analyzes financial reports.",
    tools=["calculator", "file_search", "python_interpreter"]
)

print(f"Agent created with ID: {agent.id}")
```
The `tools` list determines what the agent can call. These map to MCP tools registered in your OpenClaw instance. The agent ID is what you pass to subsequent calls for task execution, storage mounting, and log retrieval.
If you're not using the Fast.io skill, you can still create agents and assign whatever tools your OpenClaw instance has registered.
Connecting persistent storage
Without a persistent storage backend, agents lose context between runs. This section uses Fast.io workspaces as the storage layer, but the concept applies to any backend you've configured.
```python
# Create a dedicated workspace
workspace = client.workspaces.create(
    name="Financial_Reports_2026",
    intelligence_mode=True
)

# Mount it to the agent
agent.mount_workspace(workspace.id)

# Upload a file for the agent to work with
client.files.upload(
    file_path="./data/q1_report.pdf",
    workspace_id=workspace.id
)
```
Setting `intelligence_mode=True` enables automatic RAG indexing on Fast.io's side. Uploaded documents get indexed, and the agent can query them with natural language. If you're using a different backend, you'd handle indexing separately.
The workspace ID persists across agent runs, so the agent can reference files from previous sessions.
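One simple way to take advantage of that persistence is to save the workspace ID to a local file when you create the workspace, so later runs can remount the same one. The sketch below uses plain stdlib JSON; the file name and record shape are illustrative choices, not an SDK convention:

```python
import json
from pathlib import Path
from typing import Optional

STATE_FILE = Path("agent_state.json")  # illustrative location, not an SDK convention

def load_workspace_id() -> Optional[str]:
    """Return the workspace ID saved by a previous run, if any."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text()).get("workspace_id")
    return None

def save_workspace_id(workspace_id: str) -> None:
    """Persist the workspace ID so later runs can remount the same workspace."""
    STATE_FILE.write_text(json.dumps({"workspace_id": workspace_id}))
```

On a subsequent run, you'd check `load_workspace_id()` first, create a new workspace only when it returns `None`, and pass the stored ID to `agent.mount_workspace(...)`.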
Add Fast.io storage to your OpenClaw agents
Free agent tier with 50GB storage, built-in RAG, and file locks. No credit card required.
Running tasks and handling responses
Once an agent has storage mounted, you can send it tasks:
```python
task = "Analyze the Q1 report and summarize the key revenue drivers."

execution = agent.run(task)

print(execution.output)
for step in execution.steps:
    print(f"Thought: {step.thought}")
    print(f"Action: {step.tool_call}")
```
`agent.run()` blocks until the task completes. For anything that might take a while, use `agent.run_async()` instead; it returns a job ID you can poll:
```python
job = agent.run_async(task)

# Later:
result = client.jobs.get(job.id)
print(result.output)
```
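The polling itself can be factored into a small generic helper. This is a sketch, not part of the SDK; the `status` values mentioned in the usage note below are assumptions about the job record, so adjust them to whatever your OpenClaw version actually returns:

```python
import time

def wait_for_job(fetch, is_done, timeout=120.0, interval=2.0):
    """Poll fetch() until is_done(result) is true or timeout expires.

    fetch   -- callable returning the latest job record
    is_done -- predicate deciding whether the job has finished
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if is_done(result):
            return result
        time.sleep(interval)
    raise TimeoutError("job did not finish within the timeout")
```

You might call it as `wait_for_job(lambda: client.jobs.get(job.id), lambda r: r.status in ("completed", "failed"))`.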
The `execution.steps` list shows the agent's reasoning chain: which tools it called, in what order, and why. This is useful for debugging when an agent produces an unexpected result.
Webhooks and multi-agent file locks
For production use, you'll usually want webhooks instead of polling, and file locks if multiple agents share the same workspace.
Webhooks notify your application when an agent finishes, fails, or needs human input:
```python
client.webhooks.create(
    url="https://api.yourdomain.com/callbacks/agent-events",
    events=["task.completed", "task.failed", "human.required"]
)
```
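On the receiving side, your callback endpoint needs to branch on the event name. The payload shape below (an `event` field plus a `task_id`) is an assumption for illustration; check what your OpenClaw instance actually posts before relying on it:

```python
def handle_agent_event(payload: dict) -> str:
    """Route an incoming webhook payload by its event name.

    The payload shape here is assumed; adapt the keys to what your
    OpenClaw instance actually sends.
    """
    handlers = {
        "task.completed": lambda p: f"task {p['task_id']} done",
        "task.failed": lambda p: f"task {p['task_id']} failed",
        "human.required": lambda p: f"task {p['task_id']} needs review",
    }
    handler = handlers.get(payload.get("event"))
    if handler is None:
        return "ignored"  # unknown events are safe to drop
    return handler(payload)
```

In a real app this function would sit behind whatever web framework you already use, with the return value swapped for actual side effects.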
File locks prevent race conditions when two agents might write the same file:
```python
lock = client.locks.acquire(
    resource=f"workspaces/{workspace.id}/financial_summary.md",
    ttl=300  # seconds
)
try:
    agent.run("Update the financial summary with Q2 data.")
finally:
    lock.release()
```
The lock is tied to a specific resource path. Any other agent trying to acquire a lock on the same path will get a "resource busy" response until you release it. The `ttl` parameter is a safety net: if the process crashes before `lock.release()` runs, the lock expires automatically after 5 minutes.
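When a lock is busy, you'll usually want to retry with backoff rather than fail outright. The helper below is a generic sketch; it assumes the acquire callable returns `None` on contention, so adapt it to however your SDK version actually signals a busy resource (it may raise instead):

```python
import time

def acquire_with_retry(acquire, attempts=5, base_delay=0.5):
    """Try acquire() with exponential backoff.

    acquire -- callable returning a lock object, or None when the
               resource is busy (an assumption; your SDK may raise)
    """
    for attempt in range(attempts):
        lock = acquire()
        if lock is not None:
            return lock
        # back off: 0.5s, 1s, 2s, ... before the next attempt
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("could not acquire lock after retries")
```

You'd wrap the earlier `client.locks.acquire(...)` call in a lambda and pass it in, keeping the `try`/`finally` release pattern around whatever the helper returns.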
Frequently Asked Questions
How much does the OpenClaw Python SDK cost?
The SDK is free and open source. If you use Fast.io as your storage backend, it has a free agent tier with 5,000 credits/month. Other storage backends have their own pricing.
Can I use local LLMs with the SDK?
Yes. You can point the model endpoint at a local inference server (e.g. Ollama) or any OpenAI-compatible API. The SDK doesn't require a specific cloud provider.
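As a hypothetical sketch (the variable name is illustrative, not a documented SDK setting), pointing the SDK at a local Ollama server's OpenAI-compatible endpoint might look like:

```shell
# Hypothetical setting -- check the OpenClaw docs for the real name
export OPENCLAW_MODEL_BASE_URL="http://localhost:11434/v1"
```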
What is the maximum file size for uploads?
Fast.io supports files up to 1GB per upload. The SDK handles chunked uploads transparently for large files. Other storage backends have their own limits.
Does it support streaming responses?
Yes. Use `agent.run_stream()` to receive tokens as they're generated, which is useful for building chat interfaces or progressive output displays.
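A minimal consumer of a token stream might look like the sketch below, which assumes `agent.run_stream()` yields text chunks:

```python
def collect_stream(tokens):
    """Print each streamed token as it arrives and return the full text."""
    parts = []
    for token in tokens:
        print(token, end="", flush=True)  # progressive display
        parts.append(token)
    return "".join(parts)
```

You'd call it as `full = collect_stream(agent.run_stream(task))`, swapping the `print` for whatever your UI does with partial output.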
How do I update the SDK?
Run `pip install --upgrade openclaw`. Check the changelog before upgrading in production — the SDK is actively developed and interfaces may change.