OpenAI GPT Setup

📘 Guide Status: This page covers configuring OpenAI’s GPT models as an OpenClaw model provider. For first-time OpenClaw setup, see the Quick Start Guide. For the full config layout, see Configuration File Structure.

Overview

OpenAI provides the GPT family of models, including GPT-5.2 (as featured in ChatGPT), GPT-4o, GPT-4 Turbo, and GPT-3.5 Turbo, used by millions for chat, coding, and automation. OpenClaw is model-agnostic: you can use GPT as the “brain” of your agent by configuring OpenAI as a model provider. Your OpenClaw agent then gets GPT’s capabilities (fast responses, strong general-purpose performance, a broad tooling ecosystem) plus OpenClaw’s automation (files, browser, cron, messaging channels) and self-hosted control. Using the OpenAI API with OpenClaw lets you combine ChatGPT-level quality with autonomous agents that run on your own infrastructure.

This guide covers:

  • Getting an OpenAI API key and where to set it
  • Adding the OpenAI provider block to your OpenClaw config
  • Choosing GPT model IDs (GPT-5.2, GPT-4o, GPT-4o mini, GPT-4 Turbo, GPT-3.5 Turbo) and context limits
  • Cost considerations and optional spending limits
  • OpenAI-specific troubleshooting and best practices
  • Links to agents, memory, and other model providers

Prerequisites

  • A running OpenClaw installation (see the Quick Start Guide)
  • An OpenAI account with API access and billing set up (platform.openai.com)

Step 1: Get Your OpenAI API Key

OpenAI uses API keys for authentication. Never commit keys to the config file; use environment variables or a secrets manager.

  1. Sign in at platform.openai.com (or api.openai.com for API overview).
  2. Go to API keys (or Settings → API keys) and create a new secret key. Copy it immediately; it won’t be shown again.
  3. Set the key in your environment, for example:
    export OPENAI_API_KEY="sk-..."
    On Windows (Command Prompt): set OPENAI_API_KEY=sk-.... For a persistent setup, add the export line to your shell profile, or use a credential-management approach as recommended in our security best practices.
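You can confirm the key is visible to the process that will run the gateway with a short check like the following sketch (the masking logic is purely illustrative; only the variable name OPENAI_API_KEY comes from the steps above):

```python
import os

# Read the key from the environment; never hard-code it in source or config.
key = os.environ.get("OPENAI_API_KEY")

if key:
    # Print only a short prefix so the full secret never lands in logs.
    print(f"OPENAI_API_KEY is set ({key[:6]}...)")
else:
    print("OPENAI_API_KEY is NOT set in this environment")
```

Run this in the same shell (or service environment) that launches OpenClaw; a key set only in another terminal session will not be visible here.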

Step 2: Add the OpenAI Provider in Config

Edit your OpenClaw config with openclaw config edit (or open the file path shown by openclaw doctor). In the model providers section, add a block for OpenAI. Exact keys depend on your OpenClaw version; typical pattern:

  • Provider id: e.g. openai (you’ll reference this in your agent config).
  • API key: Reference the env var (e.g. OPENAI_API_KEY) so the key is not stored in the config file.
  • Models: List of allowed model IDs (see below).
  • Optional: Base URL (if using a proxy or Azure OpenAI), request timeout, or spending caps if your OpenClaw version supports them.
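Purely as an illustration of the fields above, a provider block in a JSON-style config might look like the sketch below. The key names (model_providers, api_key, and the ${VAR} environment-variable reference) are hypothetical; the exact schema depends on your OpenClaw version, so follow Configuration File Structure for the real layout.

```json
{
  "model_providers": {
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "models": ["gpt-4o", "gpt-4o-mini"],
      "base_url": "https://api.openai.com/v1"
    }
  }
}
```

Note that the API key is referenced via an environment variable rather than written into the file, matching the guidance above.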

After editing, save the file. Restart the gateway if required by your version so it picks up the new provider. For the full config layout (gateway, channels, model providers, agents, skills, memory), see Configuration File Structure.

Step 3: Choose a GPT Model

OpenAI offers several GPT models. Use the model ID that matches the model you want (check OpenAI’s model list for the latest names, context lengths, and deprecations). Examples of common IDs:

  • gpt-5.2 (or latest ID): Latest flagship (ChatGPT 5.2); built for professional work, coding, and long-running agents. Use the exact ID from OpenAI’s docs.
  • gpt-4o: Flagship multimodal model; best for complex reasoning, long context, and high-quality responses. Good for demanding agent tasks.
  • gpt-4o-mini: Faster and cheaper than GPT-4o; strong for most everyday agent workloads, coding, and high-volume use.
  • gpt-4-turbo / gpt-4-turbo-preview: High capability with large context; check OpenAI docs for the current ID. Good for long documents and complex analysis.
  • gpt-3.5-turbo: Fast and low-cost; suitable for simple Q&A and for high-volume or latency-sensitive tasks when top-tier reasoning isn’t required.

Model IDs can change (e.g. dated snapshots such as gpt-4-turbo-2024-04-09); always confirm the current ID in OpenAI’s documentation. In OpenClaw, your memory and agent context settings determine how much history is sent with each request; keep that context within your model’s limit to avoid errors and to control API cost. GPT-4o and GPT-4 Turbo support large context windows; GPT-3.5 Turbo is cheaper per token for lighter workloads.
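The context-budget idea above can be sketched as a simple guard: trim the oldest history until the estimated token count fits the model’s window. This is not OpenClaw’s actual mechanism, just an illustration; it uses the rough ~4-characters-per-token heuristic, and the limits are example values. For exact counts, use a real tokenizer such as OpenAI’s tiktoken.

```python
APPROX_CHARS_PER_TOKEN = 4  # rough heuristic for English text

def approx_tokens(text: str) -> int:
    """Cheap token estimate; use a real tokenizer (e.g. tiktoken) for accuracy."""
    return max(1, len(text) // APPROX_CHARS_PER_TOKEN)

def trim_history(messages: list, context_limit: int, reserve_for_reply: int = 1024) -> list:
    """Drop the oldest messages until the estimated total fits the budget,
    leaving room for the model's reply."""
    budget = context_limit - reserve_for_reply
    kept = list(messages)
    while kept and sum(approx_tokens(m) for m in kept) > budget:
        kept.pop(0)  # drop the oldest message first
    return kept
```

Keeping a reserve for the reply matters because the model’s output also counts against the context window.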

Step 4: Assign the Model to an Agent

In the agents section of your config, set the agent’s model to the OpenAI provider and the desired model ID (e.g. gpt-5.2, gpt-4o-mini, or gpt-4o). Only model IDs listed under your OpenAI provider block will be allowed. See Agent customization for system prompts, multiple agents, and skills.
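Again only as an illustration (field names here are hypothetical; your OpenClaw version’s schema governs), the agent entry might reference the provider ID and model like this:

```json
{
  "agents": {
    "assistant": {
      "model": "openai/gpt-4o-mini"
    }
  }
}
```

The provider prefix (openai here) must match the provider ID you chose in Step 2, and the model ID must appear in that provider’s allowed models list.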

Verification

Run openclaw status and ensure the gateway and config are valid. Send a test message to your agent on a connected channel (e.g. Telegram). If the agent responds using GPT, the provider is working. For diagnostics, use openclaw doctor and openclaw logs; see the troubleshooting guide and model and API errors for common failures.
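If the agent does not respond, testing the OpenAI API directly (outside OpenClaw) helps separate key or model problems from OpenClaw config problems. The following standard-library sketch targets OpenAI’s documented chat completions endpoint; the model ID is just an example, and the network call only runs when OPENAI_API_KEY is present:

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(model: str = "gpt-4o-mini", prompt: str = "Say hello") -> urllib.request.Request:
    """Build a chat completions request; the key is read from the environment."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    # Send the request and print the model's reply.
    with urllib.request.urlopen(build_request()) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```

If this direct call succeeds but OpenClaw does not respond, the issue is on the OpenClaw side (provider block, agent assignment, or channel); if it fails, fix the key or model first.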

Common Issues (OpenAI / GPT)

  • Invalid API key / 401: Check that OPENAI_API_KEY is set in the environment the gateway uses, and that the key is correct and active in the OpenAI platform. Regenerate the key if needed.
  • Model not found / invalid model: Use the exact model ID from OpenAI’s documentation; IDs change with new versions. Ensure the model is listed in your provider’s allowed models and is not deprecated.
  • Rate limit (429): OpenAI enforces tier-based rate limits. Reduce request frequency, request a tier upgrade if needed, or choose a faster model (e.g. gpt-4o-mini or gpt-3.5-turbo) for high volume.
  • Insufficient quota / billing error: Add a payment method and ensure you have usage quota. Set usage limits in the OpenAI dashboard to avoid unexpected bills.
  • High cost or slow responses: Use a smaller/faster model (e.g. gpt-4o-mini or gpt-3.5-turbo) where possible; reduce context size; set spending limits in the OpenAI platform.
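For 429s specifically, a common client-side pattern is exponential backoff with jitter between retries. A generic sketch follows; the exception type, retry count, and delays are illustrative (real OpenAI client libraries expose their own rate-limit exception classes):

```python
import random
import time

def with_backoff(call, retry_on=(RuntimeError,), max_retries=5, base_delay=1.0):
    """Invoke call(), retrying on `retry_on` exceptions with exponential
    backoff plus random jitter; re-raise after the final attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except retry_on:
            if attempt == max_retries - 1:
                raise
            # Double the delay each attempt, plus jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
```

Backoff smooths over transient rate limits, but if you hit 429s constantly, the fixes in the table above (lower volume, a faster model, or a tier upgrade) are the real solution.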

For more errors and fixes, see the full troubleshooting hub and model and API errors.

Best Practices

  • Secrets: Keep the API key in environment variables or a secrets manager; do not put it in the config file. See credential management and security best practices.
  • Context and cost: Match your memory and agent context to your model’s context limit; oversized context increases latency and API cost.
  • Updates: Keep OpenClaw and your config in sync with the official docs; check OpenAI’s docs for model ID and pricing changes.
  • Spending: Set usage or spending limits in the OpenAI platform to avoid unexpected bills.

Other Model Providers

You can use multiple providers: e.g. GPT-5.2 or GPT-4o for complex tasks and a local model via Ollama for simple or private ones. See the Model Providers overview and the per-provider guides: Anthropic Claude, Google Gemini, DeepSeek, Local Models (Ollama).

Next Steps

  • Customize your agent (system prompts, skills, multiple agents): Agent customization
  • Review the full config layout: Configuration File Structure
  • Compare or combine providers: Model Providers overview

Related Resources

  • Troubleshooting guide and model and API errors
  • Credential management and security best practices