
How to Build a Traffic Counting Station with OpenClaw on Raspberry Pi

A traffic counting agent uses a Raspberry Pi camera with computer vision to detect, classify, and count vehicles on a road, while an OpenClaw agent aggregates data into daily reports, detects unusual patterns, and syncs results to cloud storage. This guide covers hardware selection, detection model setup, counting logic, and cloud reporting so your station runs unattended at the roadside.

Fast.io Editorial Team · 14 min read

Why Build Your Own Traffic Counter

Counting vehicles on a road is straightforward in concept. In practice, the options are either expensive or incomplete.

Professional traffic studies run $2,500 to $5,000 per intersection, according to Greenlight Traffic Engineering. That buys you a few days of pneumatic tube data or a crew with manual clickers, plus a report that arrives weeks later. Municipal agencies pay these rates routinely, but neighborhood groups, small businesses, and independent researchers rarely have that budget for a question like "how many trucks use this street during school hours?"

DIY alternatives exist, but most stop at counting. GitHub repositories like Qengineering's Traffic-Counter-RPi provide a functional pipeline: camera input, YOLOv2 detection, BYTETracker for object tracking, and a count that increments when a vehicle crosses a virtual line. The counting works. What is missing is everything that happens after: daily and weekly reports, vehicle classification breakdowns, anomaly detection when traffic patterns shift, and cloud access so you can check results without visiting the installation.

This is where OpenClaw changes the picture. OpenClaw is an open-source AI agent framework that runs on Raspberry Pi. Instead of just executing a detection script, it adds a decision layer that manages the full lifecycle of traffic data. The agent captures video frames, runs them through a detection model, tracks and counts vehicles, classifies them by type, generates structured reports, flags unusual patterns, and syncs everything to cloud storage where the data becomes searchable and shareable.

The hardware cost for a complete station sits under $200. A Raspberry Pi 5, camera module, weatherproof enclosure, and power supply are the core components. Compare that to a MetroCount pneumatic tube counter that starts around $1,500 for the hardware alone, or a Miovision Scout camera unit at several thousand dollars per deployment.

Research backs the accuracy. A 2020 IEEE paper by Kulkarni and Baligar demonstrated that Raspberry Pi-based vehicle detection achieved over 93% accuracy for real-time counting on highway video. A separate freeway traffic study correctly classified 52 out of 52 vehicles for 100% accuracy in controlled conditions. With proper camera placement and model tuning, a Pi-based counter performs within striking distance of commercial equipment at a fraction of the cost.

Hardware for Roadside Vehicle Counting

The parts list builds on the same Raspberry Pi camera platform used for wildlife monitoring and security applications, with adjustments for outdoor roadside mounting.

Core components:

  • Raspberry Pi 5 (8 GB RAM) for smooth inference, or Raspberry Pi 4 (4 GB minimum)
  • Raspberry Pi Camera Module 3 (12 MP Sony IMX708 sensor, phase-detection autofocus)
  • High-endurance microSD card, 32 GB or larger (Samsung PRO Endurance or SanDisk MAX Endurance)
  • Weatherproof IP65-rated enclosure
  • USB-C power supply rated at 5V/5A for Pi 5
  • Mounting hardware: pole clamp, angle bracket, or fence mount

Optional additions:

  • Hailo-8L AI HAT for hardware-accelerated inference at 13 TOPS
  • NoIR Camera Module 3 with IR LED array for 24-hour operation
  • 4G/LTE HAT for cellular connectivity at locations without Wi-Fi
  • External USB SSD to replace microSD for longer deployments
  • Solar panel (20W minimum) with USB power bank for locations without mains power

Camera placement matters more than camera quality. The Camera Module 3's 66-degree horizontal field of view works well when the camera is positioned 3 to 5 meters above road level, angled 30 to 45 degrees downward. This perspective separates vehicles and minimizes occlusion where one vehicle hides behind another. Avoid aiming the camera directly along the road (vehicles stack up and overlap) or perpendicular to traffic (vehicles pass through the frame too quickly for reliable detection).

Mount the camera on a light pole, fence post, or building overhang with a clear view of at least 20 meters of road. The detection model needs enough pixels on each vehicle to distinguish cars from trucks from motorcycles. At 5 meters elevation with the standard lens, a vehicle fills roughly 100 to 200 pixels wide, which is comfortable for classification.
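You can sanity-check the pixels-on-target math before mounting anything. A quick sketch, assuming the stated 66-degree horizontal field of view and a 640-pixel-wide detection input (a common inference resolution; both figures are assumptions to adjust for your setup):

```python
import math

def pixels_per_meter(distance_m: float, hfov_deg: float = 66.0,
                     frame_width_px: int = 640) -> float:
    """Horizontal pixels covering one meter of road at a given distance.

    Assumes a simple pinhole model: the scene width visible at
    distance d is 2 * d * tan(hfov / 2).
    """
    scene_width_m = 2 * distance_m * math.tan(math.radians(hfov_deg / 2))
    return frame_width_px / scene_width_m

# A camera 5 m up, angled 30-45 degrees down, sees vehicles at
# roughly 7-10 m slant distance.
ppm = pixels_per_meter(8.0)
car_px = 1.8 * ppm  # a typical car is ~1.8 m wide
print(f"{ppm:.0f} px/m -> car spans ~{car_px:.0f} px")  # 62 px/m -> car spans ~111 px
```

Running the same function at your planned mounting height tells you quickly whether smaller vehicles will have enough pixels for reliable classification.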

For 24-hour counting, the NoIR camera variant with an IR LED array captures usable images after dark. Daytime images will have a slight color cast that does not affect detection accuracy, since the model works on shape and size rather than color. If color accuracy matters for your reports, a dual-camera setup with a light-sensor switch between standard and NoIR modules is more reliable than forcing one module to serve both roles.

Total hardware cost:

A Pi 5 (8 GB) runs about $80. The Camera Module 3 is $25 to $35. A weatherproof enclosure costs $15 to $25. Power supply, SD card, cables, and mounting hardware add another $30 to $50. The complete station comes in between $150 and $190 depending on what you already own.


Installing OpenClaw and the Detection Pipeline

Start with a fresh Raspberry Pi OS (64-bit) install. Use the Raspberry Pi Imager to flash the OS, enable SSH, and configure your network connection so you can work headlessly.

OpenClaw installs with a single command:

curl -fsSL https://openclaw.ai/install.sh | bash

After installation, run the onboarding wizard to configure your LLM backend. OpenClaw uses a cloud LLM (such as Anthropic Claude or OpenAI) for the agent's reasoning and decision-making layer. The actual vehicle detection runs on a separate, dedicated computer vision model that processes frames locally.

The detection pipeline has three components that work together:

1. Object detection model

YOLOv8n (the nano variant) or a MobileNetV2 SSD trained on the COCO dataset provides vehicle detection out of the box. COCO includes classes for car, truck, bus, motorcycle, and bicycle. YOLOv8n runs at roughly 5 to 8 FPS on a Pi 5 without hardware acceleration. With a Hailo-8L AI HAT, inference drops below 50 ms per frame and frees the CPU for the agent loop.

TensorFlow Lite is the runtime of choice for Pi deployments. Convert your YOLO or MobileNet model to TFLite format for optimized ARM inference. Pre-converted models are available from the TensorFlow Model Zoo and community repositories.
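Whichever model you choose, its raw output needs a filtering pass before tracking. A minimal sketch, assuming an SSD-style export that returns parallel lists of class IDs, confidence scores, and boxes, using 80-class COCO indices (your export's label map may number classes differently):

```python
# COCO 80-class IDs for the vehicle classes this station cares about.
VEHICLE_CLASSES = {1: "bicycle", 2: "car", 3: "motorcycle", 5: "bus", 7: "truck"}

def filter_detections(class_ids, scores, boxes, min_conf=0.45):
    """Keep confident detections of vehicle classes; drop everything else."""
    kept = []
    for cls, score, box in zip(class_ids, scores, boxes):
        if cls in VEHICLE_CLASSES and score >= min_conf:
            kept.append({"label": VEHICLE_CLASSES[cls], "score": score, "box": box})
    return kept

dets = filter_detections(
    class_ids=[2, 0, 7, 2],
    scores=[0.91, 0.88, 0.52, 0.30],
    boxes=[(0.1, 0.2, 0.3, 0.4)] * 4,
)
# class 0 (person) and the low-confidence car are dropped
print([d["label"] for d in dets])  # ['car', 'truck']
```

The confidence threshold is the main tuning knob: lower it in fog or at night to catch more vehicles, at the cost of more false positives for the tracker to sort out.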

2. Object tracking

Detection alone tells you what is in a single frame. Tracking connects detections across consecutive frames to follow individual vehicles as they move through the scene. BYTETracker and DeepSORT are both proven trackers that run efficiently on Pi hardware. BYTETracker is lighter weight and works well for traffic counting where vehicles move predictably in lanes.
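BYTETracker associates detections using both high- and low-confidence boxes; as an illustration of what any tracker contributes, here is a deliberately simplified greedy IoU matcher that assigns stable IDs across frames (a teaching sketch, not the BYTETracker algorithm):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

class GreedyTracker:
    """Assign stable IDs by matching each detection to its best overlapping track."""
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}      # track_id -> last seen box
        self.next_id = 0

    def update(self, boxes):
        assigned = {}
        unmatched = dict(self.tracks)
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, tbox in unmatched.items():
                score = iou(box, tbox)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:          # no overlap: a new vehicle entered
                best_id = self.next_id
                self.next_id += 1
            else:
                del unmatched[best_id]   # each track matches one detection
            assigned[best_id] = box
        self.tracks = assigned           # tracks that vanished are dropped
        return assigned
```

A real tracker adds motion prediction and handles brief occlusions; the sketch shows why IDs persist across frames, which is what makes line-crossing counts possible at all.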

3. Counting logic

Draw a virtual line across the road in your camera's field of view. When a tracked vehicle's centroid crosses this line, the count increments. This approach prevents double-counting (the same vehicle detected in multiple frames) and handles vehicles that stop and restart in traffic. The line position is configurable: set it at the midpoint of the visible road segment where vehicles are most likely to be fully visible and separated.
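The crossing test itself reduces to comparing a centroid's position against the line on consecutive frames. A minimal sketch for a horizontal counting line, assuming the tracker supplies per-frame centroids keyed by track ID:

```python
class LineCounter:
    """Count tracked vehicles crossing a horizontal line at y = line_y.

    Each crossing requires the centroid to move from one side of the
    line to the other between frames, so a vehicle detected in many
    frames (or one that stops and restarts) is counted only once.
    """
    def __init__(self, line_y):
        self.line_y = line_y
        self.last_y = {}                  # track_id -> previous centroid y
        self.counts = {"down": 0, "up": 0}

    def update(self, track_id, centroid_y):
        prev = self.last_y.get(track_id)
        self.last_y[track_id] = centroid_y
        if prev is None:                  # first sighting: nothing to compare
            return None
        if prev < self.line_y <= centroid_y:
            self.counts["down"] += 1
            return "down"
        if prev >= self.line_y > centroid_y:
            self.counts["up"] += 1
            return "up"
        return None

counter = LineCounter(line_y=240)
for y in (200, 230, 250, 270):            # one vehicle moving down the frame
    counter.update(track_id=7, centroid_y=y)
print(counter.counts)  # {'down': 1, 'up': 0}
```

The same structure extends to angled lines by replacing the y-comparison with a signed distance from the line.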

The OpenClaw agent orchestrates this pipeline. It starts the camera feed, runs the detection model on each frame, passes detections to the tracker, evaluates line crossings, and logs each counted vehicle with its class, timestamp, direction of travel, and a thumbnail image. The agent runs this loop continuously, handling the tedious parts that a raw script would leave to you: restarting after errors, managing memory on long runs, rotating log files, and generating periodic summaries.
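The error-recovery part of that loop can be sketched as a small supervisor that restarts the pipeline step after transient failures (a simplified illustration of the idea, not OpenClaw's actual internals):

```python
import time

def supervised_loop(step, max_restarts=3, backoff_s=0.0):
    """Run step() repeatedly, restarting after errors up to max_restarts.

    step() returning False signals a clean shutdown; any exception is
    treated as a transient failure (camera glitch, OOM, model hiccup).
    Returns the number of restarts that occurred.
    """
    restarts = 0
    while True:
        try:
            if step() is False:
                return restarts
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise                      # persistent failure: surface it
            time.sleep(backoff_s)          # brief pause before retrying

attempts = {"n": 0}
def capture_step():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("camera glitch")  # first frame grab fails
    return False                             # then shut down cleanly

print(supervised_loop(capture_step))  # 1
```

A production loop would also rotate logs and checkpoint counts before each restart, so a crash never loses more than a few seconds of data.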

Fast.io features

Store and Search Your Traffic Data from Anywhere

Fast.io gives your OpenClaw traffic counter 50 GB of free cloud storage with built-in AI search. Upload daily reports, query traffic patterns by date or vehicle type, and share results with city planners or neighborhood groups. No credit card required.

Vehicle Classification and Counting Accuracy

A raw vehicle count is useful. A count broken down by vehicle type is far more useful. Traffic engineers, urban planners, and neighborhood groups all want to know not just how many vehicles, but what kind.

COCO-based classification:

Models trained on the COCO dataset distinguish between car, truck, bus, motorcycle, and bicycle. This covers the most common classification needs. For finer distinctions (separating pickup trucks from semi-trailers, or identifying emergency vehicles), you would need a model trained on a more granular vehicle dataset. Custom training on a YOLO backbone using 200 to 500 labeled images per class produces a workable classifier for specialized categories.

Accuracy expectations:

Published research on Pi-based vehicle detection consistently shows strong results. Kulkarni and Baligar's 2020 IEEE study achieved over 93% accuracy for real-time counting on highway footage using a Raspberry Pi with a camera module. Other implementations using MobileNet-SSD report 85 to 95% accuracy depending on camera angle, lighting, and traffic density.

Several factors affect accuracy in practice:

  • Camera height and angle are the biggest variables. Too low and vehicles overlap. Too high and small vehicles (motorcycles, bicycles) lose detail.
  • Lighting changes between dawn, midday, and dusk cause detection confidence to fluctuate. The model handles gradual changes well, but sudden shifts from cloud shadows can briefly reduce accuracy.
  • Heavy traffic with closely spaced vehicles challenges the tracker more than the detector. Vehicles that enter the frame already overlapping may be counted as one. Higher mounting positions reduce this problem.
  • Weather affects optical detection. Rain on the lens, fog, and heavy snow reduce visibility. A recessed lens mount with a small visor prevents most rain interference. Fog and snow require either lower confidence thresholds (accepting more false positives) or supplementary sensors.

Improving accuracy:

Record a few hours of video from your specific location and run the detection model offline before deploying. Review the results to identify where misdetections cluster. Adjust the virtual counting line to a position where vehicles are most cleanly separated. If the model struggles with a specific vehicle type at your location, collect labeled examples and fine-tune the model with transfer learning. Even 100 labeled images from your exact camera angle can meaningfully improve per-class accuracy.

The OpenClaw agent tracks accuracy metrics over time. It logs the total detections, counted crossings, classification confidence scores, and any anomalies (detections without crossings, or crossings without preceding detections). These metrics help you diagnose problems and tune the system without guessing.


Reporting, Anomaly Detection, and Cloud Sync

This is where the OpenClaw agent separates a Pi traffic counter from a bare detection script. GitHub projects count vehicles. The agent turns those counts into actionable information.

Structured reporting:

The agent compiles counts into daily and weekly reports: total vehicles per hour, peak traffic periods, vehicle type breakdown, directional split (if counting both directions), and trend comparisons against previous days. These reports are structured data, not raw logs. They are immediately useful for neighborhood traffic petitions, parking lot planning, or delivery window optimization.
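The hourly rollup behind such a report is easy to sketch, assuming each counted vehicle is logged as a (timestamp, vehicle class, direction) record:

```python
from collections import Counter
from datetime import datetime

def hourly_report(records):
    """Roll per-vehicle log records up into an hourly summary.

    records: iterable of (iso_timestamp, vehicle_class, direction).
    Returns {hour: {"total": n, "by_class": Counter, "by_direction": Counter}}.
    """
    report = {}
    for ts, vclass, direction in records:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:00")
        entry = report.setdefault(
            hour, {"total": 0, "by_class": Counter(), "by_direction": Counter()})
        entry["total"] += 1
        entry["by_class"][vclass] += 1
        entry["by_direction"][direction] += 1
    return report

log = [
    ("2025-06-03T08:05:11", "car", "north"),
    ("2025-06-03T08:17:40", "truck", "south"),
    ("2025-06-03T09:02:03", "car", "north"),
]
report = hourly_report(log)
print(report["2025-06-03 08:00"]["total"])  # 2
```

Peak-hour detection and day-over-day trend comparisons then reduce to sorting and diffing these hourly entries.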

Anomaly detection:

The agent watches for deviations from established patterns. If Tuesday morning traffic is typically 200 vehicles per hour and the count jumps to 400, that is worth flagging. If truck traffic doubles on a residential street during school hours, that is worth knowing. The agent compares current counts against rolling averages and generates alerts when traffic departs from the baseline by a configurable threshold.
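That baseline comparison can be sketched as a rolling mean with a configurable ratio threshold (a simplification of the agent's behavior; this version flags surges only, not drops):

```python
from collections import deque

class BaselineMonitor:
    """Flag hourly counts that surge past a rolling baseline."""
    def __init__(self, window=7, threshold=1.5, min_samples=3):
        self.history = deque(maxlen=window)   # e.g. same hour, last 7 days
        self.threshold = threshold            # 1.5 = flag at 150% of baseline
        self.min_samples = min_samples        # don't alert before a baseline exists

    def check(self, count):
        anomaly = None
        if len(self.history) >= self.min_samples:
            baseline = sum(self.history) / len(self.history)
            if baseline > 0 and count / baseline >= self.threshold:
                anomaly = (f"count {count} is {count / baseline:.1f}x "
                           f"baseline {baseline:.0f}")
        self.history.append(count)            # anomalies still update history
        return anomaly

monitor = BaselineMonitor()
for c in (200, 210, 190):
    monitor.check(c)                          # builds the baseline
print(monitor.check(400))                     # count 400 is 2.0x baseline 200
```

Keeping one monitor per hour-of-week (Tuesday 8 a.m. compared against past Tuesdays at 8 a.m.) avoids flagging ordinary rush-hour traffic as anomalous.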

Cloud sync options:

Local storage works for short-term deployments. The Pi writes reports and thumbnails to the microSD card or an attached SSD, and you pull the data when you visit the station. For ongoing monitoring, cloud sync gives you remote access to results.

Several cloud storage options work with a Pi-based counter:

  • S3-compatible storage with rclone handles bulk uploads and is a solid default for developers
  • Google Drive works for personal projects but API rate limits complicate automated workflows
  • Fast.io adds an intelligence layer that makes traffic data searchable and shareable

Fast.io fits this use case particularly well. The OpenClaw agent uploads reports and detection thumbnails to a Fast.io workspace through the MCP server at /mcp. The free agent tier includes 50 GB of storage, 5,000 monthly credits, and 5 workspaces, with no credit card and no expiration.

Intelligence Mode is what makes the data useful beyond raw files. Enable it on the workspace and uploaded reports are automatically indexed for semantic search. Instead of downloading CSVs and filtering in a spreadsheet, ask questions directly: "what was peak traffic on Main Street last Thursday?" or "show days where truck count exceeded 50." Intelligence Mode returns answers with citations pointing to the specific report files.

For sharing results with city officials, neighborhood associations, or transportation planners, Fast.io's branded sharing creates a clean link to the traffic data archive. Recipients browse reports and charts through a web interface without needing their own account. This is useful when presenting traffic data to support a speed reduction request or a new crosswalk proposal.

If you are running counters at multiple intersections, each Pi uploads to the same Fast.io workspace in location-specific folders. Intelligence Mode indexes across all locations, so queries like "which intersection has the highest evening traffic" work across the entire network.

Field Deployment and Maintenance

A traffic counting station sits outdoors at a roadside, exposed to weather, vibration, and the occasional curious passerby. Plan for reliability from the start.

Enclosure and mounting:

Use an IP65-rated enclosure large enough for the Pi, camera ribbon cable, and any HATs. The camera lens window should be flush-mounted clear acrylic, angled slightly downward so rain runs off instead of pooling. A small visor above the lens prevents direct water and sun glare. Seal all cable entry points with rubber grommets.

Mount the enclosure at 3 to 5 meters height on a pole, fence, or building wall. Higher is better for reducing occlusion, but the Camera Module 3's 12 MP resolution starts to lose detail on smaller vehicles above 6 meters. Use stainless steel hose clamps or U-bolts rated for outdoor use. Check that the mount does not vibrate in wind, as vibration causes frame blur that reduces detection accuracy.

Power options:

Mains power is the simplest solution when available. Run a weatherproof USB-C cable from a nearby outlet. For locations without mains access, a 20W solar panel with a 20,000 mAh USB power bank sustains a Pi 5 through most weather. The Pi 5 draws roughly 5W under load and about 2.5W when idle between processing cycles. Add a UPS HAT for clean shutdown during power interruptions, which prevents SD card corruption.
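Before committing to solar, it is worth running the energy arithmetic for your site. A rough sketch, where the sun-hours, average-draw, and efficiency figures are assumptions to replace with local data:

```python
def solar_margin(panel_w=20.0, sun_hours=5.0, avg_draw_w=3.0,
                 system_efficiency=0.8):
    """Daily energy harvested minus energy consumed, in watt-hours.

    Positive margin means the power bank recharges; negative means it
    drains and the deployment eventually browns out. avg_draw_w assumes
    the Pi averages ~3 W across its load/idle duty cycle; efficiency
    covers charge-controller and conversion losses.
    """
    harvested = panel_w * sun_hours * system_efficiency
    consumed = avg_draw_w * 24
    return harvested - consumed

print(f"{solar_margin():+.0f} Wh/day")               # +8 Wh/day (sunny season)
print(f"{solar_margin(sun_hours=2.5):+.0f} Wh/day")  # -32 Wh/day (winter)
```

The winter case shows why 20W is the minimum, not a comfortable default: at 2.5 sun hours per day the bank drains, so shorter winter days call for a larger panel or a duty-cycled counting schedule.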

Connectivity:

Wi-Fi works when the station is within range of a building's network. For roadside locations without Wi-Fi, a 4G/LTE HAT with a data SIM provides cellular connectivity. Data usage is modest: compressed reports and detection thumbnails consume a few hundred megabytes per month. Batch uploads every 15 to 30 minutes rather than streaming continuously to keep data costs low.

Long-term maintenance:

Clean the camera lens window monthly. Dust, pollen, and road grime accumulate and reduce image quality gradually enough that you might not notice in the data until accuracy drops. Check mounting fasteners quarterly for loosening from wind vibration. Monitor SD card health through the agent's status reports and replace the card annually for high-write deployments, or use a USB SSD from the start.

Professional counter comparison:

For context on what you are replacing, here is how a Pi+OpenClaw station compares to professional equipment:

  • Cost: Under $200 for Pi+OpenClaw vs. $1,500+ for a pneumatic tube counter or $3,000+ for a Miovision Scout camera
  • Accuracy: 85 to 95% for Pi-based detection vs. 95 to 99% for pneumatic tubes (which miss motorcycles and bicycles)
  • Classification: Pi counts by vehicle type. Pneumatic tubes count axles, not vehicle types. Camera-based commercial systems classify but cost 10x more.
  • Reporting: OpenClaw generates automated daily and weekly reports with anomaly detection. Professional counters provide raw data files that need separate analysis software.
  • Deployment duration: Pi stations run indefinitely with power and connectivity. Professional counter rentals typically cover 48 to 72 hours per deployment.
  • Remote access: Real-time cloud access through Fast.io. Most professional counters store data locally until physically retrieved.

The Pi station trades peak accuracy for continuous monitoring, automated reporting, and dramatically lower cost. For applications that need certified traffic data (court cases, highway capacity studies), professional equipment remains the standard. For neighborhood monitoring, parking lot analysis, delivery scheduling, and preliminary traffic studies, the Pi+OpenClaw solution provides better ongoing value.

Frequently Asked Questions

How do I count vehicles with Raspberry Pi?

Connect a Raspberry Pi Camera Module 3 to your Pi 5 or Pi 4, mount it 3 to 5 meters above road level angled 30 to 45 degrees downward, and run a TensorFlow Lite detection model (YOLOv8n or MobileNet-SSD) trained on the COCO dataset. Use a tracker like BYTETracker to follow vehicles across frames and count them when they cross a virtual line drawn across the road. OpenClaw manages this pipeline as an autonomous agent, handling the camera feed, detection, counting, and reporting without manual intervention.

Can Raspberry Pi do real-time object detection?

Yes. A Raspberry Pi 5 runs lightweight detection models like YOLOv8n at 5 to 8 FPS through TensorFlow Lite, which is sufficient for traffic counting on most roads. Adding a Hailo-8L AI HAT boosts performance by offloading inference to dedicated hardware with 13 TOPS of compute, pushing frame rates higher while freeing the CPU for other tasks.

What is the cheapest way to monitor traffic on a road?

A Raspberry Pi with a camera module and OpenClaw agent costs under $200 for a complete traffic counting station. This is cheaper than professional pneumatic tube counters (starting around $1,500) or camera-based commercial systems ($3,000+). The Pi solution also provides vehicle classification and automated reporting, which professional tube counters do not offer.

How accurate is computer vision vehicle counting?

Published research shows Pi-based vehicle detection achieves 85 to 95% accuracy under good conditions, with some implementations exceeding 93% on highway footage. Accuracy depends on camera placement, lighting, weather, and traffic density. Professional pneumatic tube counters achieve 95 to 99% accuracy but cannot classify vehicles by type and miss two-wheeled vehicles entirely.

Does OpenClaw run computer vision models directly?

OpenClaw is an agent framework, not a computer vision engine. It orchestrates the detection pipeline by managing the camera feed, calling a separate TensorFlow Lite or YOLO model for vehicle detection, passing results to a tracker, logging counts, generating reports, and syncing data to cloud storage. Because the heavier LLM reasoning runs in the cloud, a Pi 5 has enough headroom to run both OpenClaw's agent loop and the local detection model.

Can a Raspberry Pi traffic counter work at night?

Yes, with the NoIR Camera Module 3 variant and an IR LED array. The NoIR module removes the infrared cut filter, allowing it to capture usable images in complete darkness using infrared illumination. Detection models work on vehicle shape and size rather than color, so the infrared imagery does not reduce counting accuracy.
