Google Gemini Setup

📘 Guide Status: This page covers configuring Google’s Gemini models as an OpenClaw model provider. For first-time OpenClaw setup, see the Quick Start Guide. For the full config layout, see Configuration File Structure.

Overview

Google Gemini is Google’s family of large language models, available via the Gemini API. Gemini offers strong multimodal capabilities (text, images, and more), a generous free tier for development, and competitive pricing for production. OpenClaw is model-agnostic: you can use Gemini as the “brain” of your agent by configuring Google as a model provider. Your OpenClaw agent then combines Gemini’s capabilities (reasoning, long context, and optional vision) with OpenClaw’s automation (files, browser, cron, messaging channels) and self-hosted control.

This guide covers:

  • Getting a Gemini API key from Google AI Studio and where to set it
  • Adding the Google provider block to your OpenClaw config
  • Choosing Gemini model IDs (1.5 Pro, 1.5 Flash, 2.0 Flash) and context limits
  • Quota management, free tier limits, and cost considerations
  • Google/Gemini-specific troubleshooting and best practices
  • Links to agents, memory, and other model providers

Prerequisites

A working OpenClaw installation (see the Quick Start Guide) and a Google account for signing in to Google AI Studio.

Step 1: Get Your Gemini API Key

Google provides the Gemini API through Google AI Studio. Use an API key for authentication. Never store the key in the config file or commit it to version control; use environment variables or a secrets manager.

  1. Go to aistudio.google.com and sign in with your Google account.
  2. Open “Get API key” or the API keys section (layout may vary). Create a new API key for the Gemini API.
  3. Copy the key. Set it in your environment. OpenClaw often expects one of these variable names (check your OpenClaw version or official docs):
    export GOOGLE_API_KEY="AIza..."
    or
    export GOOGLE_GENERATIVE_AI_API_KEY="AIza..."
    On Windows (Command Prompt): set GOOGLE_API_KEY=AIza.... For persistent setup, add to your shell profile or use a credential management approach from our security best practices.
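Before wiring the key into OpenClaw, it can help to confirm the variable is actually visible in the shell that will launch the gateway. A minimal check (substitute whichever variable name your config references):

```shell
# Confirm the key is exported in the current shell.
# Prints the key's length, never the key itself.
if [ -z "${GOOGLE_API_KEY:-}" ]; then
  echo "GOOGLE_API_KEY is not set"
else
  echo "GOOGLE_API_KEY is set (${#GOOGLE_API_KEY} chars)"
fi
```

If this prints “not set” but you exported the key in another terminal, remember that environment variables are per-process: the gateway only sees variables set in its own environment (shell profile, service unit, or container).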

Google AI Studio offers a free tier with rate and daily limits, which is useful for trying Gemini with OpenClaw before enabling billing. For higher quotas or production use, enable billing in Google Cloud Console and use the same API key or a key from a project with billing enabled.

Step 2: Add the Google Provider in Config

Edit your OpenClaw config with openclaw config edit (or open the file path shown by openclaw doctor). In the model providers section, add a block for Google. Exact keys depend on your OpenClaw version; typical pattern:

  • Provider id: e.g. google or gemini (you’ll reference this in your agent config).
  • API key: Reference the env var (e.g. GOOGLE_API_KEY or GOOGLE_GENERATIVE_AI_API_KEY) so the key is not stored in the config file.
  • Models: List of allowed Gemini model IDs (see below).
  • Optional: Request timeout, rate limits, or base URL if your OpenClaw version supports them (e.g. for Vertex AI or custom endpoints).

After editing, save the file. Restart the gateway if required by your version so it picks up the new provider. For the full config layout (gateway, channels, model providers, agents, skills, memory), see Configuration File Structure.
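As a concrete sketch, a provider block along these lines is typical. The exact key names (`modelProviders`, `apiKey`, `models`, and so on) are illustrative assumptions and vary by OpenClaw version, so check your version’s config reference rather than copying this verbatim:

```yaml
# Hypothetical provider block -- field names vary by OpenClaw version.
modelProviders:
  google:
    apiKey: ${GOOGLE_API_KEY}     # referenced from the environment, never inline
    models:
      - gemini-1.5-pro
      - gemini-1.5-flash
    timeoutSeconds: 60            # optional, only if your version supports it
```

The important pattern is stable across versions: the key is referenced from the environment, and only the listed model IDs are allowed for agents that use this provider.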

Step 3: Choose a Gemini Model

Google offers several Gemini models. Use the model ID that matches the model you want (check Google’s Gemini model list for the latest names, context lengths, and pricing). Examples of common IDs:

| Model | Typical use |
| --- | --- |
| Gemini 1.5 Pro | Strong reasoning and long context; good for complex agent tasks, analysis, and coding. |
| Gemini 1.5 Flash | Faster and cheaper; good balance of speed and quality for most agent workloads. |
| Gemini 1.5 Flash-8B | Lightweight and low-cost; suitable for simple Q&A and high-volume, low-latency tasks. |
| Gemini 2.0 Flash (or latest) | Newer generation; check Google’s docs for current 2.0 model IDs and capabilities. |

Gemini models support large context windows (e.g. 1M tokens on some variants). In OpenClaw, your memory and agent context settings determine how much history is sent; keep context within your model’s limit to avoid errors and control API cost.

Step 4: Assign the Model to an Agent

In the agents section of your config, set the agent’s model to the Google provider and the desired Gemini model ID (e.g. gemini-1.5-pro, gemini-1.5-flash, or the current ID from Google’s docs). Only model IDs listed under your Google provider block will be allowed. See Agent customization for system prompts, multiple agents, and skills.
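A sketch of what that assignment might look like, assuming the same hypothetical field names as the provider block above (your OpenClaw version’s actual schema may differ):

```yaml
# Hypothetical agent entry -- field names are illustrative.
agents:
  assistant:
    provider: google           # must match the provider id from Step 2
    model: gemini-1.5-flash    # must appear under that provider's allowed models
    systemPrompt: "You are a helpful assistant."
```

If the model ID is not listed under the provider block, expect a validation or “model not allowed” error at startup rather than at request time.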

Verification

Run openclaw status and ensure the gateway and config are valid. Send a test message to your agent on a connected channel (e.g. Telegram). If the agent responds using Gemini, the provider is working. For diagnostics, use openclaw doctor and openclaw logs; see the troubleshooting guide and model and API errors for common failures.

Common Issues (Google / Gemini)

| Issue | Solution |
| --- | --- |
| Invalid API key / 401 / 403 | Check that GOOGLE_API_KEY (or the env var your config uses) is set in the environment the gateway uses, and that the key is correct and active in Google AI Studio. Ensure the Gemini API is enabled for the project. |
| Model not found / invalid model | Use the exact model ID from Google’s documentation; IDs can change with new versions (e.g. gemini-1.5-pro-latest vs a specific version). Ensure the model is listed in your provider’s allowed models. |
| Quota exceeded / 429 / 503 | Google enforces per-minute and daily quotas. On the free tier, limits are lower. Reduce request frequency, use a lighter model (e.g. Flash), or enable billing and higher quotas in Google Cloud. |
| Resource exhausted / billing not enabled | Some models or regions require billing to be enabled on the Google Cloud project. Check Google Cloud Console and the Gemini API quotas. |
| High cost or slow responses | Use a smaller/faster model (e.g. Gemini 1.5 Flash or Flash-8B) where possible; reduce context window size; set budget alerts in Google Cloud. |

For more errors and fixes, see the full troubleshooting hub and model and API errors.

Best Practices

  • Secrets: Keep the API key in environment variables or a secrets manager; do not put it in the config file. See credential management and security best practices.
  • Quota and free tier: Monitor usage in Google AI Studio or Cloud Console. Free tier has limits; for production, enable billing and set budget alerts to avoid surprises.
  • Context and cost: Match your memory and agent context to your model’s context limit; oversized context increases latency and API cost.
  • Updates: Keep OpenClaw and your config in sync with the official docs; check Google’s Gemini docs for model ID and pricing changes.

Other Model Providers

You can use multiple providers: e.g. Gemini for cost-effective or multimodal tasks and Claude for complex reasoning. See the Model Providers overview and the per-provider guides: Anthropic Claude, OpenAI GPT, DeepSeek, Local Models (Ollama).

Next Steps

Related Resources