Model & API Errors

🔧 Model and API troubleshooting: This page covers errors from your LLM model provider (Anthropic Claude, OpenAI GPT, Google Gemini, DeepSeek, Ollama, etc.). For config file or schema issues (e.g. "model not allowed"), see Configuration Problems. For gateway or channel issues, see the Troubleshooting Guide and Gateway Issues or Channel Errors. To set up or change providers, see Model Provider Setup.

Quick diagnostic

Run openclaw doctor first. It checks gateway health, config, and paths, including hints about model and API connectivity. Then use these commands to narrow down model/API issues:

# Full health check (config, gateway, model provider hints)
openclaw doctor

# Optional: try auto-fixing safe issues
openclaw doctor --fix

# Check status and which config is in use
openclaw status
openclaw status --all

# Validate configuration (model provider blocks, API keys referenced correctly)
openclaw config validate

# View recent logs (look for rate limit, 401, 429, 503, timeout messages)
openclaw logs --tail 50
openclaw logs --follow

See Quick diagnostics and the main Troubleshooting Guide for the full flow. If the agent never gets a response from the model, the cause is often an invalid API key, rate limit, or billing issue, all covered below.

Summary: Model and API errors at a glance

| Symptom / error | Likely cause | What to do |
| --- | --- | --- |
| Rate limit exceeded (429) | Too many requests to the provider in a short time | Add usage limits in config; use a fallback model; throttle or retry. See Rate limit exceeded. |
| Billing / payment errors | Out of credits, card declined, or account suspended | Check provider dashboard (Anthropic, OpenAI, etc.); add payment method or credits. See Billing errors. |
| All models failed | No configured model responded (key invalid, network, or all providers down) | Verify API keys and network; add a second provider as fallback. See All models failed. |
| API key invalid (401 / 403) | Wrong key, expired key, or key not set in config/env | Regenerate key at provider; set in config or env; see Credential management. See API key invalid. |
| Model timeout | Request took too long; provider slow or network issues | Increase timeout in config; check network; try a faster model. See Model timeout. |
| Provider unreachable | Network failure, firewall, or provider outage | Check connectivity and firewall; check provider status page; use fallback provider. See Provider unreachable. |

Rate limit exceeded (429)

Providers (Anthropic, OpenAI, Google, DeepSeek) limit how many requests you can send per minute or per day. When you exceed that limit, you get 429 Too Many Requests or similar. OpenClaw may log "rate limit exceeded" or the provider’s error message.

  • Configure usage limits: In your OpenClaw config, you can set per-provider or per-model limits (e.g. max requests per minute). See Model Provider Setup and the official documentation for your version.
  • Add a fallback model: Configure a second provider (e.g. OpenAI + Anthropic, or cloud + Ollama). When one hits rate limit, OpenClaw can try the next.
  • Reduce concurrency: If multiple agents or channels hit the same API at once, lower the number of concurrent requests or space out cron jobs.
  • Upgrade provider tier: Higher tiers often have higher rate limits; check your provider’s pricing and limits.
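
The throttle-or-retry advice above can be sketched as a generic retry loop with exponential backoff. This is not an OpenClaw feature, just a shell pattern you can wrap around any provider call; `MAX_ATTEMPTS` and `BACKOFF_START` are illustrative knobs, not real settings.

```shell
#!/bin/sh
# Generic sketch: retry a command with exponential backoff when it fails
# (e.g. on a 429). "$@" stands in for whatever command issues the request.
retry_with_backoff() {
  max_attempts=${MAX_ATTEMPTS:-5}
  delay=${BACKOFF_START:-1}          # seconds before the first retry
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0                       # request succeeded
    fi
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))             # back off: 1s, 2s, 4s, 8s, ...
    attempt=$((attempt + 1))
  done
  return 1                           # all attempts exhausted; surface the error
}
```

For example, `retry_with_backoff curl -sf https://api.openai.com/v1/models -H "Authorization: Bearer $OPENAI_API_KEY"` retries a direct provider call up to five times.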

Billing errors

Errors like insufficient credits, payment required, or account suspended mean the provider has stopped serving requests because of billing. OpenClaw does not control billing; you fix this in the provider’s dashboard.

  • Anthropic (Claude): Log in at console.anthropic.com; check usage and payment method. Add credits or update card. See Anthropic Claude Setup.
  • OpenAI (GPT): Log in at platform.openai.com; check Billing → Usage and Payment method. Add payment method or increase limits. See OpenAI GPT Setup.
  • Google (Gemini): Check Google Cloud or AI Studio billing and quotas. See Google Gemini Setup.
  • DeepSeek / others: Use the provider’s dashboard to verify account status and balance.
  • Set spending limits: To avoid surprise bills, set usage or spending limits in the provider dashboard; OpenClaw will then get errors when the limit is hit, which you can handle with a fallback model or alerts. See Credential management for API key best practices.

All models failed

When OpenClaw reports all models failed or the agent never gets a response, every configured provider/model either returned an error or timed out. Common causes:

  • Invalid or missing API keys: All cloud providers need a valid API key. Run openclaw config validate and check that each provider block has the correct key (or env var). See API key invalid and Model Provider Setup.
  • Network or firewall: The machine running OpenClaw must reach the provider’s API (e.g. api.anthropic.com, api.openai.com). Test from the same host: curl -I https://api.anthropic.com (or the provider URL). See Provider unreachable.
  • Provider outage: Check the provider’s status page. If all providers are down, add a local fallback (e.g. Ollama) so the agent can still respond when cloud APIs are unavailable.
  • Model name or config wrong: If the agent requests a model that isn’t in your config or is misspelled, the request fails. See Model not allowed in Configuration Problems.
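
A useful rule when triaging the causes above: any HTTP status at all, even 401 or 429, proves the network path to the provider works, so the remaining suspects are keys, config, or the provider itself. A small illustrative helper makes that explicit (the curl invocation in the comment uses standard curl options):

```shell
#!/bin/sh
# Illustrative helper: interpret the HTTP status code returned by a provider.
# Any real status code means the host can reach the provider.
classify_http() {
  case "$1" in
    000|"") echo "unreachable: no HTTP response (network/DNS/firewall)" ;;
    401|403) echo "reachable, but the API key was rejected (HTTP $1)" ;;
    429)     echo "reachable, but rate limited (HTTP 429)" ;;
    5??)     echo "reachable, but provider-side error (HTTP $1)" ;;
    *)       echo "reachable (HTTP $1)" ;;
  esac
}

# Gather the code from the OpenClaw host, e.g.:
#   code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 https://api.anthropic.com)
#   classify_http "$code"
```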

API key invalid (401 / 403)

401 Unauthorized or 403 Forbidden usually means the API key is wrong, expired, or not sent. OpenClaw reads keys from your config or from environment variables (recommended for security).

  • Verify the key: Copy the key from the provider’s dashboard (Anthropic, OpenAI, etc.) and ensure no extra spaces or line breaks. Regenerate the key if it may have been rotated or revoked.
  • Where the key is set: OpenClaw uses the config file (or env) you’re actually loading. Run openclaw doctor to see which config path is in use. Set the key in that config’s provider block or via the env var documented in the official docs (e.g. ANTHROPIC_API_KEY, OPENAI_API_KEY).
  • Use environment variables: Don’t put raw API keys in config files that might be committed. Use env vars and reference them in config. See Credential management.
  • Scopes and permissions: Some providers require the key to have specific scopes or roles. Check the provider’s documentation and ensure the key has access to the models you’re using.
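
One quick way to verify a key outside OpenClaw is a direct curl call. The endpoints in the comments follow the public OpenAI and Anthropic APIs, but double-check them against your provider’s current API reference; the `check_key_format` helper is illustrative and only catches copy/paste damage such as stray whitespace or line breaks:

```shell
#!/bin/sh
# Illustrative helper: catch common copy/paste damage before blaming the provider.
check_key_format() {
  key=$1
  if [ -z "$key" ]; then
    echo "key is empty (env var not set?)"
    return 1
  fi
  case "$key" in
    *[![:graph:]]*) echo "key contains whitespace or a line break"; return 1 ;;
  esac
  echo "key format looks sane"
}

# Then test the key directly (200 means accepted; 401/403 means rejected):
#   curl -i https://api.openai.com/v1/models \
#     -H "Authorization: Bearer $OPENAI_API_KEY"
#   curl -i https://api.anthropic.com/v1/models \
#     -H "x-api-key: $ANTHROPIC_API_KEY" -H "anthropic-version: 2023-06-01"
```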

Model timeout

If requests to the model time out, the provider took too long to respond. This can be due to long prompts, slow network, or provider load.

  • Increase timeout in config: OpenClaw and provider clients often allow a configurable timeout (e.g. 60s, 120s). Check your model provider config and the official documentation for timeout options.
  • Reduce input size: Very long context or attachments can slow the response. Consider summarizing or chunking if you hit timeouts on large inputs.
  • Try a faster model: Lighter models (e.g. Claude Haiku, GPT-3.5 Turbo, Gemini Flash) usually respond faster. Use them for high-volume or latency-sensitive tasks.
  • Check network: From the host running OpenClaw, test latency to the provider (e.g. ping api.openai.com). Use a stable connection; VPN or corporate firewalls can add delay.
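
To see where the time goes, curl can report per-phase timings; the `-w` variables in the comment are standard curl write-out variables, while `flag_slow` below is an illustrative threshold check, not an OpenClaw command:

```shell
#!/bin/sh
# Measure provider latency from the OpenClaw host, e.g.:
#   curl -s -o /dev/null --max-time 30 \
#     -w 'dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n' \
#     https://api.openai.com

# Illustrative helper: flag a total time that exceeds a threshold (in seconds).
flag_slow() {
  total=$1
  threshold=${2:-10}
  if awk -v t="$total" -v th="$threshold" 'BEGIN { exit !(t > th) }'; then
    echo "slow: ${total}s exceeds ${threshold}s; raise the timeout or try a faster model"
  else
    echo "ok: ${total}s"
  fi
}
```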

Provider unreachable

When the provider’s API is unreachable, you may see connection refused, DNS errors, or timeouts before any HTTP status. Causes include network issues, firewall blocking outbound HTTPS, or provider outage.

  • Test connectivity: From the same machine that runs the OpenClaw gateway, run curl -v https://api.anthropic.com (or api.openai.com, etc.). If this fails, the host cannot reach the provider.
  • Firewall and proxy: Ensure outbound HTTPS (port 443) to the provider’s domains is allowed. If you use a proxy, configure OpenClaw (or Node) to use it if required by your environment.
  • Provider status: Check the provider’s status page (e.g. status.anthropic.com, status.openai.com). If there’s an outage, wait or switch to a fallback provider.
  • Local models as fallback: Ollama or other local endpoints don’t depend on external APIs. Configuring a local model as fallback keeps the agent working when cloud providers are unreachable.
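
The first three checks above boil down to one triage rule: a DNS failure and a blocked connection need different fixes. A sketch of that rule, with the usual commands for gathering the two inputs noted in the comments:

```shell
#!/bin/sh
# Illustrative triage: given the outcome of a DNS check and an HTTPS check
# (each "yes" or "no"), point at the most likely culprit.
#   DNS check:   nslookup api.anthropic.com  (or: getent hosts api.anthropic.com)
#   HTTPS check: curl -sI --max-time 10 https://api.anthropic.com
triage_connectivity() {
  dns_ok=$1
  https_ok=$2
  if [ "$dns_ok" != yes ]; then
    echo "DNS failure: check the resolver settings on this host"
  elif [ "$https_ok" != yes ]; then
    echo "DNS ok, HTTPS blocked: check firewall/proxy rules for outbound port 443"
  else
    echo "provider reachable: suspect a provider outage or an upstream error"
  fi
}
```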

Step-by-step model/API debugging

  1. Run diagnostics: openclaw doctor and openclaw doctor --fix.
  2. Validate config: openclaw config validate - ensure model provider blocks and key references are correct.
  3. Check logs: openclaw logs --tail 100 or openclaw logs --follow - look for 401, 403, 429, 503, timeout, or "all models failed".
  4. Verify API key: Confirm the key in the provider dashboard and in your config/env; test with a simple curl call to the provider if needed.
  5. Test connectivity: From the OpenClaw host, curl the provider’s API URL to rule out network/firewall.
  6. Add fallback: Configure a second model provider so that when one fails (rate limit, billing, outage), the other can respond. See Model Provider Setup.
  7. Restart gateway: After changing config or env vars, openclaw restart (or restart your service) so the gateway picks up the new settings.
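
Step 3's log scan can be wrapped in a small filter; the error signatures are the ones called out in this guide:

```shell
#!/bin/sh
# Filter log lines for the model/API error signatures listed in this guide.
error_lines() {
  grep -Ei '401|403|429|503|timeout|all models failed'
}

# Usage: openclaw logs --tail 100 | error_lines
```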

Getting help

If you're still stuck:

  • Quick diagnostics - Run openclaw doctor and basic checks
  • Troubleshooting Guide - Gateway, channels, config, skills, performance
  • Community Hub - Ask in Discord or community channels
  • GitHub issues - Search existing issues or open a new one (include OpenClaw version, provider name, redacted config snippet, and full error message)

When reporting, include: OpenClaw version, model provider(s) used, output of openclaw doctor (redact paths and keys), the exact error from logs, and a redacted config snippet (no API keys).