project · 2025-2026
TomTom Traffic Agent (CES 2026 demo)
A production multi-agent system that lets a customer ask traffic-analytics questions in plain language and get answers backed by TomTom’s MOVE traffic data. Demoed by Product Management at CES 2026 Las Vegas to showcase GenAI-powered traffic analytics. Built on Agno’s Team abstraction with five specialised agents (routing, route monitoring, junction analytics, area analysis, traffic volumes), served over the AG-UI v1.0 protocol with persistent chat memory, so any AG-UI compatible client can drive it.
See it work (scripted demo)
Press play. The dialogue is hardcoded, but it mirrors how the actual system handles a multi-step traffic question: “Why is the I-405 corridor slower this Tuesday vs last Tuesday around morning rush?” The Agno Team coordinator delegates across three specialised agents, each backed by its own MCP tool, then composes a grounded answer.
The Team coordinator decides which agents to engage based on intent, and each agent has its own scoped MCP tool surface plus shared geocoding and web-search access. Final synthesis happens once, in the coordinator, after the agent outputs come back: the LLM never makes data-shaped decisions, only narrative-shaped ones.
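The delegate-then-synthesise loop can be sketched in plain Python. This is an illustration only: in the real system Agno's Team coordinator routes with an LLM, so the keyword-based intent matching, agent names, and stubbed findings below are hypothetical stand-ins.

```python
# Illustrative sketch of the coordinator's delegate-then-synthesise loop.
# The keyword matching is a hypothetical stand-in for LLM-based routing.

AGENT_KEYWORDS = {
    "route_monitoring": ["corridor", "slower", "congestion"],
    "junction_analytics": ["junction", "intersection", "turn"],
    "traffic_volumes": ["volume", "daily traffic", "capacity"],
}

def select_agents(question: str) -> list[str]:
    """Pick which specialised agents to engage, based on intent."""
    q = question.lower()
    return [name for name, kws in AGENT_KEYWORDS.items()
            if any(kw in q for kw in kws)]

def coordinate(question: str) -> str:
    """Delegate to each selected agent, then synthesise once."""
    agents = select_agents(question)
    # Each agent would call its own scoped MCP tool here; stubbed for the sketch.
    findings = [f"[{name}] data for: {question}" for name in agents]
    # Synthesis happens once, in the coordinator: the LLM shapes the
    # narrative from agent outputs, never the data itself.
    return " | ".join(findings)

print(coordinate("Why is the I-405 corridor slower this Tuesday vs last Tuesday?"))
```

The point of the structure is that the data-fetching decision (which agent, which tool) is made before synthesis, so the final answer is composed from grounded agent outputs rather than generated freehand.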
Why a multi-agent system, not a single LLM call
Traffic analytics isn’t one question; it’s a workflow:
- “What’s congestion like in this corridor?” → needs route-monitoring data
- “What are the busiest junctions in city X?” → needs junction analytics ranked by volume
- “How does this area compare to last week?” → needs area-analytics + historical archive
Stuffing all of this into one LLM with one prompt produces vague answers and high latency. Splitting it into specialised agents with clear responsibilities produces tight, defensible outputs.
The five agents
| Agent | Core MCP tool(s) | Responsibility |
|---|---|---|
| Routing | tomtom-routing | Real-time route calculation between locations, alternatives, traffic-aware ETAs, vehicle-specific profiles |
| Route Monitoring | tomtom-route-monitoring-details | Strategic route performance tracking, corridor-level congestion, recurring vs incident-driven slowdowns |
| Junction Analytics | tomtom-junction-live-data | Real-time intersection flow, turn-ratio analysis, queue-length tracking, signal-timing optimisation |
| Area Analysis | tomtom-area-analytics-results | Regional traffic patterns, congestion hotspots, network performance over polygons |
| Traffic Volumes | tomtom-traffic-volume-tile, …-segment-details | Average daily traffic, road-capacity assessment, segment-level volume comparison |
Every agent additionally has access to TomTom’s geocoding and POI MCP tools and web search via Tavily, so it can resolve vague place references and cross-reference public context (event listings, news, road-closure announcements) without round-tripping back to the coordinator.
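The tool wiring above can be summarised as a registry: each agent sees its own scoped MCP tools plus the shared set. The registry structure and the shared-tool names (`tomtom-geocoding`, `tomtom-poi-search`, `tavily-web-search`) are illustrative assumptions; the scoped tool names come from the table, except the abbreviated second traffic-volumes tool, which is left out rather than guessed.

```python
# Hypothetical registry wiring each agent to its scoped MCP tools plus the
# shared geocoding/POI and web-search tools (shared names are illustrative).

SHARED_TOOLS = ["tomtom-geocoding", "tomtom-poi-search", "tavily-web-search"]

SCOPED_TOOLS = {
    "routing": ["tomtom-routing"],
    "route_monitoring": ["tomtom-route-monitoring-details"],
    "junction_analytics": ["tomtom-junction-live-data"],
    "area_analysis": ["tomtom-area-analytics-results"],
    # plus the "...-segment-details" tool, abbreviated in the table above
    "traffic_volumes": ["tomtom-traffic-volume-tile"],
}

def tool_surface(agent: str) -> list[str]:
    """Full tool surface an agent sees: scoped MCP tools + shared tools."""
    return SCOPED_TOOLS[agent] + SHARED_TOOLS
```

Keeping shared tools on every agent is what avoids the round-trip back to the coordinator for place resolution or public context.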
How it talks to clients
- Agno Team abstraction for delegation, agent boundaries, and shared memory. Built on AgentOS underneath.
- AG-UI v1.0 protocol over HTTP with Server-Sent Events. Any AG-UI compatible chatbot client can drive the system. The agent emits 80+ event types across the run lifecycle (tool calls, intermediate reasoning, final tokens) so the UI can show progress instead of a black-box spinner.
- Persistent chat memory keyed by thread_id, so multi-turn conversations keep context across questions.
- Default flow for vague queries: a generic “how’s traffic in X?” automatically triggers area analytics + traffic-incident summaries, then surfaces follow-up suggestions for routes and junctions worth drilling into.
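On the wire, an AG-UI client consumes the run as a stream of typed events over SSE. A minimal sketch of the client side, assuming JSON payloads on `data:` lines (the event names and fields here are illustrative; AG-UI defines the exact schema):

```python
import json

# Minimal sketch of a client consuming an AG-UI event stream over SSE.
# Payload shapes are illustrative; the AG-UI spec defines the real schema.

def parse_sse(stream):
    """Yield decoded events from raw SSE lines of the form 'data: {...}'."""
    for line in stream:
        if line.startswith("data: "):
            yield json.loads(line[len("data: "):])

raw = [
    'data: {"type": "TOOL_CALL_START", "toolCallName": "tomtom-routing"}',
    'data: {"type": "TEXT_MESSAGE_CONTENT", "delta": "Average delay is "}',
    'data: {"type": "TEXT_MESSAGE_CONTENT", "delta": "12 minutes."}',
    'data: {"type": "RUN_FINISHED"}',
]

# Tool-call events let the UI show progress; content deltas build the answer.
answer = "".join(e.get("delta", "") for e in parse_sse(raw)
                 if e["type"] == "TEXT_MESSAGE_CONTENT")
print(answer)
```

Because tool calls and intermediate steps arrive as their own events, the UI can render "querying junction analytics…" instead of a spinner.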
Deployment
Containerised with Docker and deployed to Kubernetes via GitHub Actions on every commit to main. Runtime config (TomTom API keys, Azure OpenAI endpoint, Tavily key) is injected via environment variables. NGINX sits in front for the VM-hosted variant used during demos.
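The env-injection pattern looks roughly like the fragment below. All names (deployment, image, secret, keys) are hypothetical placeholders; only the three config values come from the description above.

```yaml
# Illustrative Deployment fragment: runtime config injected as env vars
# sourced from a Secret. Resource and key names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traffic-agent
spec:
  replicas: 1
  selector:
    matchLabels: { app: traffic-agent }
  template:
    metadata:
      labels: { app: traffic-agent }
    spec:
      containers:
        - name: traffic-agent
          image: registry.example.com/traffic-agent:latest
          env:
            - name: TOMTOM_API_KEY
              valueFrom:
                secretKeyRef: { name: traffic-agent-secrets, key: tomtom-api-key }
            - name: AZURE_OPENAI_ENDPOINT
              valueFrom:
                secretKeyRef: { name: traffic-agent-secrets, key: azure-openai-endpoint }
            - name: TAVILY_API_KEY
              valueFrom:
                secretKeyRef: { name: traffic-agent-secrets, key: tavily-api-key }
```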
My contribution
This is a team build inside TomTom. My contributions:
- Agent boundaries and tool wiring: which agent owns which MCP tool, how the Team coordinator delegates, where shared context lives.
- Prompt design: system prompts per agent, tool descriptions tuned to double as routing prompts so the coordinator picks the right delegate without needing a separate router LLM call.
- Guardrails: input validation on every tool call, output filtering, prompt-injection defences (system-prompt isolation, schema-validated tool-call args).
- Demo path: the specific scenarios shown at CES, end-to-end, on real production traffic data.
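One of the guardrails, schema validation of tool-call arguments, can be sketched as below. The schema shape and function are hypothetical illustrations of the idea, not the production code: reject anything the LLM emits that doesn't match the tool's declared argument schema before the MCP tool is ever invoked.

```python
# Hypothetical sketch of one guardrail: validating LLM-emitted tool-call
# arguments against a declared schema before the MCP tool is invoked.

ROUTING_SCHEMA = {
    "origin": str,       # illustrative args for a routing tool
    "destination": str,
}

def validate_tool_args(args: dict, schema: dict) -> dict:
    """Reject extra keys, missing keys, and wrong types before the call."""
    unexpected = set(args) - set(schema)
    if unexpected:
        # Unknown keys are a common prompt-injection vector: drop the call.
        raise ValueError(f"unexpected args: {sorted(unexpected)}")
    for key, typ in schema.items():
        if key not in args:
            raise ValueError(f"missing arg: {key}")
        if not isinstance(args[key], typ):
            raise TypeError(f"{key} must be {typ.__name__}")
    return args
```

Rejecting unknown keys outright, rather than silently dropping them, makes a misbehaving agent visible in the event stream instead of quietly degrading the answer.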
Why this earns a spot in projects
Multi-agent systems are easy to build as toys; production-readiness is what’s hard. This one had to work live, on stage, in front of strategic customers. That meant deterministic-enough behaviour, defensible answers grounded in real telemetry, and observability good enough to debug a flaky agent in real time. Lessons from this build directly informed how I think about every agentic system since, including the MCP servers and the Agent Toolkit.