OpenClaw Technical Architecture
How OpenClaw is built: a single Gateway process, pluggable channel adapters, the ClawHub skills system, persistent memory, and a model-agnostic LLM backend. This page summarizes the components and runtime requirements.
How the System Works
OpenClaw runs as one Gateway process on your machine or server. The Gateway receives messages from your connected chat apps (WhatsApp, Telegram, Discord, etc.), routes them to the AI agent, runs skills and tools, and stores context in the memory system. You configure and control it via the CLI and the Web Control UI. There is no separate service per channel: one Gateway serves all platforms.
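The single-Gateway flow described above can be sketched as follows. This is an illustrative model, not OpenClaw's actual API: the names `ChannelMessage`, `Agent`, and `Gateway` are hypothetical.

```typescript
// Hypothetical sketch: every channel delivers inbound messages to one
// Gateway, which routes them to a single agent. One process, all platforms.
type ChannelMessage = { channel: string; sender: string; text: string };

interface Agent {
  handle(msg: ChannelMessage): string;
}

class Gateway {
  constructor(private agent: Agent) {}

  // One entry point serves every connected platform.
  receive(msg: ChannelMessage): string {
    return this.agent.handle(msg);
  }
}

// A trivial agent that echoes, tagged with the originating channel.
const echoAgent: Agent = {
  handle: (msg) => `[${msg.channel}] reply to ${msg.sender}: ${msg.text}`,
};

const gateway = new Gateway(echoAgent);
const reply = gateway.receive({ channel: "telegram", sender: "alice", text: "hi" });
```

The point of the shape is that adding a platform never adds a process; it only adds another caller of `gateway.receive`.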
Core Components
1. Gateway Daemon
Background service that manages connections, message routing, and agent lifecycle. It binds to a port (default 18789), runs the Web Control UI, and keeps sessions and context in sync. All channels and the agent talk through the Gateway.
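A small sketch of how the default port might be resolved, assuming an environment-variable override; the variable name `OPENCLAW_GATEWAY_PORT` is hypothetical, only the default of 18789 comes from this page.

```typescript
// Default Gateway port, per the docs; override name is an assumption.
const DEFAULT_PORT = 18789;

function gatewayPort(env: Record<string, string | undefined>): number {
  const raw = env["OPENCLAW_GATEWAY_PORT"];
  const parsed = raw ? Number.parseInt(raw, 10) : NaN;
  // Fall back to the default on missing or malformed values.
  return Number.isInteger(parsed) && parsed > 0 ? parsed : DEFAULT_PORT;
}
```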
2. CLI Interface
The openclaw command-line tool handles configuration, onboarding (openclaw onboard), channel login (openclaw channels login), starting the Gateway (openclaw gateway), and diagnostics. See the quick start guide for common commands.
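A typical first-run sequence using the commands named above (output omitted; exact flags may vary by version):

```shell
openclaw onboard           # interactive onboarding and configuration
openclaw channels login    # authenticate a chat platform
openclaw gateway           # start the Gateway daemon
```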
3. Channel Adapters
Pluggable connectors for each messaging platform: WhatsApp, Telegram, Discord, Slack, Signal, iMessage, Teams, and more. Each adapter connects the Gateway to that platform’s API, and one Gateway can run multiple channels at once. Setup is done per channel; see the individual guides for Telegram, WhatsApp, Discord, and the rest.
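The adapter pattern can be sketched like this: each platform implements one small interface, and the Gateway dispatches by channel name. The `ChannelAdapter` interface and `deliver` function are illustrative, not OpenClaw's real plugin API.

```typescript
// Hypothetical adapter shape: one interface per platform, one registry
// in the Gateway, so many channels run in a single process.
interface ChannelAdapter {
  name: string;
  send(recipient: string, text: string): string; // returns a delivery log line
}

const telegram: ChannelAdapter = {
  name: "telegram",
  send: (to, text) => `telegram -> ${to}: ${text}`,
};
const discord: ChannelAdapter = {
  name: "discord",
  send: (to, text) => `discord -> ${to}: ${text}`,
};

// The Gateway's registry: adapters keyed by channel name.
const adapters = new Map<string, ChannelAdapter>();
for (const a of [telegram, discord]) adapters.set(a.name, a);

function deliver(channel: string, to: string, text: string): string {
  const adapter = adapters.get(channel);
  if (!adapter) throw new Error(`no adapter for ${channel}`);
  return adapter.send(to, text);
}
```

Registering a new platform means adding one entry to the registry; the routing code never changes.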
4. Skills / Extensions System (ClawHub)
Community marketplace for add-on capabilities. Skills extend what the agent can do (e.g. send email, manage calendar, call APIs). You install them from ClawHub or build your own. The workflow engine supports TypeScript and YAML for defining automations.
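A minimal sketch of what a TypeScript-defined skill could look like. The `Skill` interface, `invoke` helper, and the `reminder` skill are hypothetical examples, not ClawHub's actual manifest format.

```typescript
// Illustrative skill shape: a skill names itself, describes itself,
// and exposes a run() the agent can call as a tool.
interface Skill {
  name: string;
  description: string;
  run(input: string): string;
}

const reminder: Skill = {
  name: "reminder",
  description: "Stores a reminder for later",
  run: (input) => `reminder saved: ${input}`,
};

// Installed skills, keyed by name, as the agent would look them up.
const skills: Record<string, Skill> = { [reminder.name]: reminder };

function invoke(name: string, input: string): string {
  const skill = skills[name];
  if (!skill) throw new Error(`unknown skill: ${name}`);
  return skill.run(input);
}
```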
5. Memory System
Persistent context storage across all sessions and platforms. Conversations and state are stored so the agent “remembers” you. Storage backends can be SQLite, PostgreSQL, or Redis; choose the backend in the configuration file.
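The key property is that memory is keyed by user, not by channel, so a fact learned on one platform is recalled on another. A sketch under that assumption, backed here by an in-memory Map where a real deployment would use SQLite, PostgreSQL, or Redis:

```typescript
// Illustrative cross-platform memory: facts written via one channel are
// readable from any other, because the key is the user, not the channel.
class MemoryStore {
  private facts = new Map<string, string[]>();

  remember(user: string, fact: string): void {
    const list = this.facts.get(user) ?? [];
    list.push(fact);
    this.facts.set(user, list);
  }

  recall(user: string): string[] {
    return this.facts.get(user) ?? [];
  }
}

const memory = new MemoryStore();
memory.remember("alice", "prefers morning meetings"); // learned on WhatsApp...
const facts = memory.recall("alice");                 // ...recalled from Telegram
```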
6. Model-Agnostic Backend
OpenClaw does not lock you to one LLM. You can use Anthropic Claude, OpenAI GPT, Google Gemini, DeepSeek, or local models via Ollama / LM Studio. Switch providers with a config change; see model provider setup.
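Provider-switching via configuration can be sketched as a shared interface plus a lookup table. The provider names come from this page; the `ModelProvider` interface and the stub responses are illustrative.

```typescript
// Hypothetical model-agnostic backend: every provider satisfies one
// interface, and the active one is chosen by a config string.
interface ModelProvider {
  complete(prompt: string): string;
}

const providers: Record<string, () => ModelProvider> = {
  anthropic: () => ({ complete: (p) => `claude: ${p}` }),
  openai: () => ({ complete: (p) => `gpt: ${p}` }),
  ollama: () => ({ complete: (p) => `local: ${p}` }),
};

function providerFromConfig(name: string): ModelProvider {
  const factory = providers[name];
  if (!factory) throw new Error(`unknown provider: ${name}`);
  return factory();
}

// Switching from a hosted model to a local one is a one-word config change.
const model = providerFromConfig("ollama");
```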
Runtime Requirements
- Node.js: 22+ (required)
- Operating systems: macOS, Linux, Windows (WSL2 recommended on Windows)
- Deployment: Local machine, VPS, cloud (AWS/GCP/Azure), or Docker containers
For production, follow the security overview and best practices (network isolation, API key handling, sandbox options).
Next Steps on the OpenClaw Roadmap
- What is OpenClaw? - Definition and comparison to chatbots
- History - How OpenClaw evolved from Clawd to OpenClaw
- Quick start - Install and run the Gateway
- Configuration file - Where and how to configure
- Homepage - Full roadmap from zero to production