Model Provider Setup

📘 Hub page: OpenClaw is model-agnostic; you choose which LLM powers your agent. This page links to setup guides for each supported provider. New to OpenClaw? Start with the Quick Start Guide and the Configuration File Structure.

Overview

OpenClaw does not lock you into a single AI model. Its model-agnostic backend works with any LLM: cloud APIs (Anthropic Claude, OpenAI GPT, Google Gemini, DeepSeek) or local models via Ollama, LM Studio, or compatible endpoints. You configure one or more model providers in your OpenClaw config, assign a default (or per-agent) model, and your agent gains that model's capabilities (reasoning, long context, coding) alongside OpenClaw's automation (files, browser, cron, messaging channels) and self-hosted control.

Choosing a provider affects cost, privacy, and capability: cloud APIs offer the strongest models and long context, while local models (e.g. Ollama) keep data on your machine and avoid per-token fees. Many users combine both, for example Claude for complex tasks and a local model for high-volume or sensitive workflows. This hub links to step-by-step guides for each supported provider.

Supported Model Providers

Configure one or more providers in your OpenClaw config file. Each provider has its own block (API key or endpoint, model IDs, optional limits). Agents can be assigned a default model or use provider-specific models. The following are supported with dedicated guides on this site:

  • Anthropic: Claude 3.5 Sonnet, Opus, Haiku (long context). Typical use: reasoning, long documents, coding, analysis. Guide: Anthropic Claude Setup.
  • OpenAI: GPT-5.2, GPT-4, GPT-4o, GPT-3.5 Turbo. Typical use: general purpose, fast responses, broad ecosystem. Guide: OpenAI GPT Setup.
  • Google: Gemini Pro, Gemini Flash. Typical use: multimodal, Google ecosystem integration. Guide: Google Gemini Setup.
  • DeepSeek: DeepSeek Chat, DeepSeek Coder. Typical use: cost-effective cloud option, coding. Guide: DeepSeek Setup.
  • Local (Ollama): Llama, Mistral, CodeLlama, etc. Typical use: privacy, no API cost, full data sovereignty. Guide: Local Models (Ollama).

You can also use other OpenAI-compatible or custom endpoints (e.g. LM Studio, LocalAI) by configuring the appropriate provider type; see the official documentation for the latest schema.
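As a rough sketch of what a multi-provider setup could look like, the fragment below shows two provider blocks and a default assignment. The field names here are illustrative assumptions, not the exact OpenClaw schema; see Configuration File Structure for the authoritative layout.

```json
{
  "providers": {
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}",
      "model": "claude-3-5-sonnet-20241022"
    },
    "ollama": {
      "endpoint": "http://localhost:11434",
      "model": "llama3"
    }
  },
  "defaultProvider": "anthropic"
}
```

Note that the API key is referenced via an environment variable rather than written into the file in plain text, which keeps it out of logs and backups.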

Choosing Cloud vs Local

  • Cloud APIs (Claude, GPT, Gemini, DeepSeek): Best for strongest reasoning and long context; you pay per token. Suited for complex tasks, coding, and when you are okay with data leaving your network. Set API spending limits and store keys securely (e.g. environment variables).
  • Local models (Ollama, LM Studio): Data stays on your machine; no per-token cost after hardware. Ideal for privacy-sensitive or high-volume use, or when you prefer not to depend on external APIs. Requires adequate CPU/GPU and disk; see Local Models (Ollama) for requirements.
  • Hybrid: Use a cloud model for demanding tasks and a local model for routine or sensitive workflows by configuring multiple providers and assigning models per agent.
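In config terms, the hybrid pattern above amounts to pointing different agents at different providers. The agent names and fields in this fragment are hypothetical placeholders, not the exact OpenClaw schema:

```json
{
  "agents": {
    "research-agent": { "provider": "anthropic" },
    "inbox-triage": { "provider": "ollama" }
  }
}
```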

API Keys and Security

Model provider credentials (API keys, endpoints) must be kept secret. Follow these practices:

  • Store API keys in environment variables, not in the config file in plain text. Reference them in config (e.g. ANTHROPIC_API_KEY) so the key never appears in logs or backups.
  • Set spending limits in your cloud provider dashboard (Anthropic, OpenAI, etc.) to avoid unexpected bills.
  • Restrict file permissions on the config directory (e.g. chmod 600 on sensitive files). See the security best practices and credential management guides.
  • Use openclaw config edit from a secure environment; avoid committing keys to version control.
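The practices above can be sketched in a few shell commands. The `~/.openclaw` directory and `openclaw.json` file name are assumptions here; adjust both to match your actual install.

```shell
# Keep the key in the environment, not in the config file (value is a placeholder).
export ANTHROPIC_API_KEY="sk-ant-placeholder"

# Restrict permissions on the config directory and file
# (~/.openclaw and openclaw.json are assumed paths; adjust to your install).
mkdir -p ~/.openclaw
touch ~/.openclaw/openclaw.json
chmod 700 ~/.openclaw
chmod 600 ~/.openclaw/openclaw.json

# Confirm the key is present without printing its value.
[ -n "$ANTHROPIC_API_KEY" ] && echo "ANTHROPIC_API_KEY is set"
```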

Quick Reference

  • Edit config: openclaw config edit opens your main config file, where model provider blocks are defined.
  • Check status: openclaw status confirms gateway and provider connectivity.
  • Diagnose issues: openclaw doctor (and openclaw doctor --fix); see the troubleshooting guide and Model & API Errors.

Config file location and structure are described in Configuration File Structure. After adding a provider, assign the model to your agents and optionally tune memory and advanced settings.

Common Issues

  • API key invalid or rate limited: Verify the key in the provider's console; check usage and limits. Use environment variables and watch for typos.
  • Model not found / not allowed: Ensure the model ID matches the provider's current list (e.g. claude-3-5-sonnet-20241022). See each provider guide for valid IDs.
  • Local model (Ollama) not responding: Confirm Ollama is running and the model is pulled. Check the endpoint URL and network (localhost).
  • Billing or quota errors: Set spending limits in the cloud provider dashboard; check for exhausted quota or payment issues.
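For the Ollama case, a quick reachability check from the shell can narrow things down. This assumes the default localhost:11434 endpoint; the llama3 model name is just an example.

```shell
# Probe the default Ollama REST endpoint and print a hint either way.
if curl -sf http://localhost:11434/api/tags > /dev/null 2>&1; then
  echo "Ollama is reachable on localhost:11434"
else
  echo "Ollama not reachable: start it with 'ollama serve', then pull a model, e.g. 'ollama pull llama3'"
fi
```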

For more fixes, see Troubleshooting Hub and Model & API Errors.

Next Steps

Related Resources