Documentation Index

Fetch the complete documentation index at: https://docs.openclaw.ai/llms.txt

Use this file to discover all available pages before exploring further.

OpenAI provides developer APIs for GPT models, and Codex is also available as a ChatGPT-plan coding agent through OpenAI’s Codex clients. OpenClaw keeps those surfaces separate so config stays predictable. OpenClaw supports three OpenAI-family routes. The model prefix selects the provider/auth route; a separate runtime setting selects who executes the embedded agent loop:
  • API key — direct OpenAI Platform access with usage-based billing (openai/* models)
  • Codex subscription through PI — ChatGPT/Codex sign-in with subscription access (openai-codex/* models)
  • Codex app-server harness — native Codex app-server execution (openai/* models plus agents.defaults.agentRuntime.id: "codex")
OpenAI explicitly supports subscription OAuth usage in external tools and workflows like OpenClaw. Provider, model, runtime, and channel are separate layers. If those labels are getting mixed together, read Agent runtimes before changing config.

Quick choice

| Goal | Use | Notes |
| --- | --- | --- |
| Direct API-key billing | openai/gpt-5.5 | Set OPENAI_API_KEY or run OpenAI API-key onboarding. |
| GPT-5.5 with ChatGPT/Codex subscription auth | openai-codex/gpt-5.5 | Default PI route for Codex OAuth. Best first choice for subscription setups. |
| GPT-5.5 with native Codex app-server behavior | openai/gpt-5.5 plus agentRuntime.id: "codex" | Forces the Codex app-server harness for that model ref. |
| Image generation or editing | openai/gpt-image-2 | Works with either OPENAI_API_KEY or OpenAI Codex OAuth. |
| Transparent-background images | openai/gpt-image-1.5 | Use outputFormat=png or webp and openai.background=transparent. |

Naming map

The names are similar but not interchangeable:
| Name you see | Layer | Meaning |
| --- | --- | --- |
| openai | Provider prefix | Direct OpenAI Platform API route. |
| openai-codex | Provider prefix | OpenAI Codex OAuth/subscription route through the normal OpenClaw PI runner. |
| codex plugin | Plugin | Bundled OpenClaw plugin that provides native Codex app-server runtime and /codex chat controls. |
| agentRuntime.id: codex | Agent runtime | Force the native Codex app-server harness for embedded turns. |
| /codex ... | Chat command set | Bind/control Codex app-server threads from a conversation. |
| runtime: "acp", agentId: "codex" | ACP session route | Explicit fallback path that runs Codex through ACP/acpx. |
This means a config can intentionally contain both openai-codex/* and the codex plugin. That is valid when you want Codex OAuth through PI and also want native /codex chat controls available. openclaw doctor warns about that combination so you can confirm it is intentional; it does not rewrite it.
GPT-5.5 is available through both direct OpenAI Platform API-key access and subscription/OAuth routes. Use openai/gpt-5.5 for direct OPENAI_API_KEY traffic, openai-codex/gpt-5.5 for Codex OAuth through PI, or openai/gpt-5.5 with agentRuntime.id: "codex" for the native Codex app-server harness.
Enabling the OpenAI plugin, or selecting an openai-codex/* model, does not enable the bundled Codex app-server plugin. OpenClaw enables that plugin only when you explicitly select the native Codex harness with agentRuntime.id: "codex" or use a legacy codex/* model ref. If the bundled codex plugin is enabled but openai-codex/* still resolves through PI, openclaw doctor warns and leaves the route unchanged.
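As a concrete sketch of the intentional combination described above — Codex OAuth through PI with the native chat controls also enabled — the config might look like the following. The plugins.entries.codex.enabled key is an assumption inferred from the plugins.entries.* shape used elsewhere on this page, not a confirmed setting:

```
{
  agents: {
    // Embedded turns run through PI with Codex OAuth...
    defaults: { model: { primary: "openai-codex/gpt-5.5" } },
  },
  // ...while the bundled plugin keeps /codex chat controls available.
  // (Key name assumed; check the plugin reference for the exact toggle.)
  plugins: { entries: { codex: { enabled: true } } },
}
```

With this shape, openclaw doctor would flag the combination for confirmation but leave the route intact.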

OpenClaw feature coverage

| OpenAI capability | OpenClaw surface | Status |
| --- | --- | --- |
| Chat / Responses | openai/<model> model provider | Yes |
| Codex subscription models | openai-codex/<model> with openai-codex OAuth | Yes |
| Codex app-server harness | openai/<model> with agentRuntime.id: codex | Yes |
| Server-side web search | Native OpenAI Responses tool | Yes, when web search is enabled and no provider pinned |
| Images | image_generate | Yes |
| Videos | video_generate | Yes |
| Text-to-speech | messages.tts.provider: "openai" / tts | Yes |
| Batch speech-to-text | tools.media.audio / media understanding | Yes |
| Streaming speech-to-text | Voice Call streaming.provider: "openai" | Yes |
| Realtime voice | Voice Call realtime.provider: "openai" / Control UI Talk | Yes |
| Embeddings | memory embedding provider | Yes |

Memory embeddings

OpenClaw can use OpenAI, or an OpenAI-compatible embedding endpoint, for memory_search indexing and query embeddings:
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        model: "text-embedding-3-small",
      },
    },
  },
}
For OpenAI-compatible endpoints that require asymmetric embedding labels, set queryInputType and documentInputType under memorySearch. OpenClaw forwards those as provider-specific input_type request fields: query embeddings use queryInputType; indexed memory chunks and batch indexing use documentInputType. See the Memory configuration reference for the full example.
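As an illustration, an asymmetric-embedding setup against a hypothetical OpenAI-compatible endpoint might look like this. The baseUrl, model name, and the "search_query"/"search_document" label values are all illustrative — use whatever your endpoint actually defines:

```
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        baseUrl: "https://embeddings.example.com/v1", // hypothetical compatible endpoint
        model: "my-asymmetric-embedder",              // hypothetical model name
        queryInputType: "search_query",               // sent as input_type for query embeddings
        documentInputType: "search_document",         // sent as input_type when indexing chunks
      },
    },
  },
}
```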

Getting started

Choose your preferred auth method and follow the setup steps.
Best for: direct API access and usage-based billing.
1. Get your API key

   Create or copy an API key from the OpenAI Platform dashboard.

2. Run onboarding

   openclaw onboard --auth-choice openai-api-key

   Or pass the key directly:

   openclaw onboard --openai-api-key "$OPENAI_API_KEY"

3. Verify the model is available

   openclaw models list --provider openai

Route summary

| Model ref | Runtime config | Route | Auth |
| --- | --- | --- | --- |
| openai/gpt-5.5 | omitted / agentRuntime.id: "pi" | Direct OpenAI Platform API | OPENAI_API_KEY |
| openai/gpt-5.4-mini | omitted / agentRuntime.id: "pi" | Direct OpenAI Platform API | OPENAI_API_KEY |
| openai/gpt-5.5 | agentRuntime.id: "codex" | Codex app-server harness | Codex app-server |
openai/* is the direct OpenAI API-key route unless you explicitly force the Codex app-server harness. Use openai-codex/* for Codex OAuth through the default PI runner, or use openai/gpt-5.5 with agentRuntime.id: "codex" for native Codex app-server execution.
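The route summary above maps to three mutually exclusive config sketches (pick one; these follow the JSON5 config shape used elsewhere on this page):

```
// 1. Direct OpenAI Platform API (usage-based billing)
{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
}

// 2. Codex OAuth through the default PI runner
{
  agents: { defaults: { model: { primary: "openai-codex/gpt-5.5" } } },
}

// 3. Native Codex app-server harness
{
  agents: {
    defaults: {
      model: { primary: "openai/gpt-5.5" },
      agentRuntime: { id: "codex" },
    },
  },
}
```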

Config example

{
  env: { OPENAI_API_KEY: "sk-..." },
  agents: { defaults: { model: { primary: "openai/gpt-5.5" } } },
}
OpenClaw does not expose openai/gpt-5.3-codex-spark. Live OpenAI API requests reject that model, and the current Codex catalog does not expose it either.

Native Codex app-server auth

The native Codex app-server harness uses openai/* model refs plus agentRuntime.id: "codex", but its auth is still account-based. OpenClaw selects auth in this order:
  1. An explicit OpenClaw openai-codex auth profile bound to the agent.
  2. The app-server’s existing account, such as a local Codex CLI ChatGPT sign-in.
  3. For local stdio app-server launches only, CODEX_API_KEY, then OPENAI_API_KEY, when the app-server reports no account and still requires OpenAI auth.
That means a local ChatGPT/Codex subscription sign-in is not replaced just because the gateway process also has OPENAI_API_KEY for direct OpenAI models or embeddings. Env API-key fallback is only the local stdio no-account path; it is not sent to WebSocket app-server connections. When a subscription-style Codex profile is selected, OpenClaw also keeps CODEX_API_KEY and OPENAI_API_KEY out of the spawned stdio app-server child and sends the selected credentials through the app-server login RPC.
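The selection order above can be summarized as pseudocode (a sketch of the documented behavior, not the actual implementation):

```
auth = explicitOpenclawCodexProfile(agent)      // 1. bound openai-codex auth profile
    ?? appServerAccount()                       // 2. e.g. local Codex CLI ChatGPT sign-in
    ?? (localStdioLaunch && noAccountReported   // 3. local stdio launches only
          ? env.CODEX_API_KEY ?? env.OPENAI_API_KEY
          : null)                               // WebSocket connections never get env keys
```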

Image generation

The bundled openai plugin registers image generation through the image_generate tool. It supports both OpenAI API-key image generation and Codex OAuth image generation through the same openai/gpt-image-2 model ref.
| Capability | OpenAI API key | Codex OAuth |
| --- | --- | --- |
| Model ref | openai/gpt-image-2 | openai/gpt-image-2 |
| Auth | OPENAI_API_KEY | OpenAI Codex OAuth sign-in |
| Transport | OpenAI Images API | Codex Responses backend |
| Max images per request | 4 | 4 |
| Edit mode | Enabled (up to 5 reference images) | Enabled (up to 5 reference images) |
| Size overrides | Supported, including 2K/4K sizes | Supported, including 2K/4K sizes |
| Aspect ratio / resolution | Not forwarded to OpenAI Images API | Mapped to a supported size when safe |
{
  agents: {
    defaults: {
      imageGenerationModel: { primary: "openai/gpt-image-2" },
    },
  },
}
See Image Generation for shared tool parameters, provider selection, and failover behavior.
gpt-image-2 is the default for both OpenAI text-to-image generation and image editing. gpt-image-1.5, gpt-image-1, and gpt-image-1-mini remain usable as explicit model overrides.

Use openai/gpt-image-1.5 for transparent-background PNG/WebP output; the current gpt-image-2 API rejects background: "transparent". For a transparent-background request, agents should call image_generate with model: "openai/gpt-image-1.5", outputFormat: "png" or "webp", and background: "transparent"; the older openai.background provider option is still accepted. OpenClaw also protects the public OpenAI and OpenAI Codex OAuth routes by rewriting default openai/gpt-image-2 transparent requests to gpt-image-1.5; Azure and custom OpenAI-compatible endpoints keep their configured deployment/model names.

The same settings are exposed for headless CLI runs:
openclaw infer image generate \
  --model openai/gpt-image-1.5 \
  --output-format png \
  --background transparent \
  --prompt "A simple red circle sticker on a transparent background" \
  --json
Use the same --output-format and --background flags with openclaw infer image edit when starting from an input file. --openai-background remains available as an OpenAI-specific alias.

For Codex OAuth installs, keep the same openai/gpt-image-2 ref. When an openai-codex OAuth profile is configured, OpenClaw resolves that stored OAuth access token and sends image requests through the Codex Responses backend. It does not first try OPENAI_API_KEY or silently fall back to an API key for that request. Configure models.providers.openai explicitly with an API key, custom base URL, or Azure endpoint when you want the direct OpenAI Images API route instead. If that custom image endpoint is on a trusted LAN/private address, also set browser.ssrfPolicy.dangerouslyAllowPrivateNetwork: true; OpenClaw keeps private/internal OpenAI-compatible image endpoints blocked unless this opt-in is present.

Generate:
/tool image_generate model=openai/gpt-image-2 prompt="A polished launch poster for OpenClaw on macOS" size=3840x2160 count=1
Generate a transparent PNG:
/tool image_generate model=openai/gpt-image-1.5 prompt="A simple red circle sticker on a transparent background" outputFormat=png background=transparent
Edit:
/tool image_generate model=openai/gpt-image-2 prompt="Preserve the object shape, change the material to translucent glass" image=/path/to/reference.png size=1024x1536

Video generation

The bundled openai plugin registers video generation through the video_generate tool.
| Capability | Value |
| --- | --- |
| Default model | openai/sora-2 |
| Modes | Text-to-video, image-to-video, single-video edit |
| Reference inputs | 1 image or 1 video |
| Size overrides | Supported |
| Other overrides | aspectRatio, resolution, audio, watermark are ignored with a tool warning |
{
  agents: {
    defaults: {
      videoGenerationModel: { primary: "openai/sora-2" },
    },
  },
}
See Video Generation for shared tool parameters, provider selection, and failover behavior.

GPT-5 prompt contribution

OpenClaw adds a shared GPT-5 prompt contribution for GPT-5-family runs across providers. It applies by model id, so openai-codex/gpt-5.5, openai/gpt-5.5, openrouter/openai/gpt-5.5, opencode/gpt-5.5, and other compatible GPT-5 refs receive the same overlay. Older GPT-4.x models do not.

The bundled native Codex harness uses the same GPT-5 behavior and heartbeat overlay through Codex app-server developer instructions, so openai/gpt-5.x sessions forced through agentRuntime.id: "codex" keep the same follow-through and proactive heartbeat guidance even though Codex owns the rest of the harness prompt.

The GPT-5 contribution adds a tagged behavior contract for persona persistence, execution safety, tool discipline, output shape, completion checks, and verification. Channel-specific reply and silent-message behavior stays in the shared OpenClaw system prompt and outbound delivery policy. The GPT-5 guidance is always enabled for matching models. The friendly interaction-style layer is separate and configurable.
| Value | Effect |
| --- | --- |
| "friendly" (default) | Enable the friendly interaction-style layer |
| "on" | Alias for "friendly" |
| "off" | Disable only the friendly style layer |
{
  agents: {
    defaults: {
      promptOverlays: {
        gpt5: { personality: "friendly" },
      },
    },
  },
}
Values are case-insensitive at runtime, so "Off" and "off" both disable the friendly style layer.
Legacy plugins.entries.openai.config.personality is still read as a compatibility fallback when the shared agents.defaults.promptOverlays.gpt5.personality setting is not set.

Voice and speech

The bundled openai plugin registers speech synthesis for the messages.tts surface.
| Setting | Config path | Default |
| --- | --- | --- |
| Model | messages.tts.providers.openai.model | gpt-4o-mini-tts |
| Voice | messages.tts.providers.openai.voice | coral |
| Speed | messages.tts.providers.openai.speed | (unset) |
| Instructions | messages.tts.providers.openai.instructions | (unset, gpt-4o-mini-tts only) |
| Format | messages.tts.providers.openai.responseFormat | opus for voice notes, mp3 for files |
| API key | messages.tts.providers.openai.apiKey | Falls back to OPENAI_API_KEY |
| Base URL | messages.tts.providers.openai.baseUrl | https://api.openai.com/v1 |
Available models: gpt-4o-mini-tts, tts-1, tts-1-hd. Available voices: alloy, ash, ballad, cedar, coral, echo, fable, juniper, marin, onyx, nova, sage, shimmer, verse.
{
  messages: {
    tts: {
      providers: {
        openai: { model: "gpt-4o-mini-tts", voice: "coral" },
      },
    },
  },
}
Set OPENAI_TTS_BASE_URL to override the TTS base URL without affecting the chat API endpoint.
The bundled openai plugin registers batch speech-to-text through OpenClaw’s media-understanding transcription surface.
  • Default model: gpt-4o-transcribe
  • Endpoint: OpenAI REST /v1/audio/transcriptions
  • Input path: multipart audio file upload
  • Supported by OpenClaw wherever inbound audio transcription uses tools.media.audio, including Discord voice-channel segments and channel audio attachments
To force OpenAI for inbound audio transcription:
{
  tools: {
    media: {
      audio: {
        models: [
          {
            type: "provider",
            provider: "openai",
            model: "gpt-4o-transcribe",
          },
        ],
      },
    },
  },
}
Language and prompt hints are forwarded to OpenAI when supplied by the shared audio media config or per-call transcription request.
The bundled openai plugin registers realtime transcription for the Voice Call plugin.
| Setting | Config path | Default |
| --- | --- | --- |
| Model | plugins.entries.voice-call.config.streaming.providers.openai.model | gpt-4o-transcribe |
| Language | ...openai.language | (unset) |
| Prompt | ...openai.prompt | (unset) |
| Silence duration | ...openai.silenceDurationMs | 800 |
| VAD threshold | ...openai.vadThreshold | 0.5 |
| API key | ...openai.apiKey | Falls back to OPENAI_API_KEY |
Uses a WebSocket connection to wss://api.openai.com/v1/realtime with G.711 u-law (g711_ulaw / audio/pcmu) audio. This streaming provider is for Voice Call’s realtime transcription path; Discord voice currently records short segments and uses the batch tools.media.audio transcription path instead.
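Using the config paths from the table above, pinning OpenAI for Voice Call streaming transcription might look like this (a sketch; the streaming.provider key follows the surface named in the feature-coverage table):

```
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          streaming: {
            provider: "openai",
            providers: {
              openai: {
                model: "gpt-4o-transcribe",
                silenceDurationMs: 800,   // defaults shown explicitly
                vadThreshold: 0.5,
              },
            },
          },
        },
      },
    },
  },
}
```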
The bundled openai plugin registers realtime voice for the Voice Call plugin.
| Setting | Config path | Default |
| --- | --- | --- |
| Model | plugins.entries.voice-call.config.realtime.providers.openai.model | gpt-realtime-1.5 |
| Voice | ...openai.voice | alloy |
| Temperature | ...openai.temperature | 0.8 |
| VAD threshold | ...openai.vadThreshold | 0.5 |
| Silence duration | ...openai.silenceDurationMs | 500 |
| API key | ...openai.apiKey | Falls back to OPENAI_API_KEY |
Supports Azure OpenAI via azureEndpoint and azureDeployment config keys for backend realtime bridges. Supports bidirectional tool calling. Uses G.711 u-law audio format.
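Using the same config paths, a realtime-voice sketch with the optional Azure keys looks like this (the Azure endpoint and deployment values are illustrative):

```
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          realtime: {
            provider: "openai",
            providers: {
              openai: {
                model: "gpt-realtime-1.5",
                voice: "alloy",
                // Optional Azure backend (illustrative values):
                // azureEndpoint: "https://<your-resource>.openai.azure.com",
                // azureDeployment: "gpt-realtime-prod",
              },
            },
          },
        },
      },
    },
  },
}
```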
Control UI Talk uses OpenAI browser realtime sessions with a Gateway-minted ephemeral client secret and a direct browser WebRTC SDP exchange against the OpenAI Realtime API. Maintainer live verification is available with:

OPENAI_API_KEY=... GEMINI_API_KEY=... node --import tsx scripts/dev/realtime-talk-live-smoke.ts

The OpenAI leg mints a client secret in Node, generates a browser SDP offer with fake microphone media, posts it to OpenAI, and applies the SDP answer without logging secrets.

Azure OpenAI endpoints

The bundled openai provider can target an Azure OpenAI resource for image generation by overriding the base URL. On the image-generation path, OpenClaw detects Azure hostnames on models.providers.openai.baseUrl and switches to Azure’s request shape automatically.
Realtime voice uses a separate configuration path (plugins.entries.voice-call.config.realtime.providers.openai.azureEndpoint) and is not affected by models.providers.openai.baseUrl. See the Realtime voice accordion under Voice and speech for its Azure settings.
Use Azure OpenAI when:
  • You already have an Azure OpenAI subscription, quota, or enterprise agreement
  • You need regional data residency or compliance controls Azure provides
  • You want to keep traffic inside an existing Azure tenancy

Configuration

For Azure image generation through the bundled openai provider, point models.providers.openai.baseUrl at your Azure resource and set apiKey to the Azure OpenAI key (not an OpenAI Platform key):
{
  models: {
    providers: {
      openai: {
        baseUrl: "https://<your-resource>.openai.azure.com",
        apiKey: "<azure-openai-api-key>",
      },
    },
  },
}
OpenClaw recognizes these Azure host suffixes for the Azure image-generation route:
  • *.openai.azure.com
  • *.services.ai.azure.com
  • *.cognitiveservices.azure.com
For image-generation requests on a recognized Azure host, OpenClaw:
  • Sends the api-key header instead of Authorization: Bearer
  • Uses deployment-scoped paths (/openai/deployments/{deployment}/...)
  • Appends ?api-version=... to each request
  • Uses a 600s default request timeout for Azure image-generation calls. Per-call timeoutMs values still override this default.
Other base URLs (public OpenAI, OpenAI-compatible proxies) keep the standard OpenAI image request shape.
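Put together, the Azure differences amount to a different request shape. A rough sketch (request line and auth header only; the placeholder resource, deployment, and key values are illustrative):

```
# Public OpenAI image generation
POST https://api.openai.com/v1/images/generations
Authorization: Bearer $OPENAI_API_KEY

# Recognized Azure host
POST https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/images/generations?api-version=2024-12-01-preview
api-key: <azure-openai-api-key>
```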
Azure routing for the openai provider’s image-generation path requires OpenClaw 2026.4.22 or later. Earlier versions treat any custom openai.baseUrl like the public OpenAI endpoint and will fail against Azure image deployments.

API version

Set AZURE_OPENAI_API_VERSION to pin a specific Azure preview or GA version for the Azure image-generation path:
export AZURE_OPENAI_API_VERSION="2024-12-01-preview"
The default is 2024-12-01-preview when the variable is unset.

Model names are deployment names

Azure OpenAI binds models to deployments. For Azure image-generation requests routed through the bundled openai provider, the model field in OpenClaw must be the Azure deployment name you configured in the Azure portal, not the public OpenAI model id. If you create a deployment called gpt-image-2-prod that serves gpt-image-2:
/tool image_generate model=openai/gpt-image-2-prod prompt="A clean poster" size=1024x1024 count=1
The same deployment-name rule applies to image-generation calls routed through the bundled openai provider.

Regional availability

Azure image generation is currently available only in a subset of regions (for example eastus2, swedencentral, polandcentral, westus3, uaenorth). Check Microsoft’s current region list before creating a deployment, and confirm the specific model is offered in your region.

Parameter differences

Azure OpenAI and public OpenAI do not always accept the same image parameters. Azure may reject options that public OpenAI allows (for example certain background values on gpt-image-2) or expose them only on specific model versions. These differences come from Azure and the underlying model, not OpenClaw. If an Azure request fails with a validation error, check the parameter set supported by your specific deployment and API version in the Azure portal.
Azure OpenAI uses native transport and compat behavior but does not receive OpenClaw’s hidden attribution headers — see the Native vs OpenAI-compatible routes accordion under Advanced configuration.

For chat or Responses traffic on Azure (beyond image generation), use the onboarding flow or a dedicated Azure provider config — openai.baseUrl alone does not pick up the Azure API/auth shape. A separate azure-openai-responses/* provider exists; see the Server-side compaction accordion below.

Advanced configuration

OpenClaw uses WebSocket-first with SSE fallback ("auto") for both openai/* and openai-codex/*. In "auto" mode, OpenClaw:
  • Retries one early WebSocket failure before falling back to SSE
  • After a failure, marks WebSocket as degraded for ~60 seconds and uses SSE during cool-down
  • Attaches stable session and turn identity headers for retries and reconnects
  • Normalizes usage counters (input_tokens / prompt_tokens) across transport variants
| Value | Behavior |
| --- | --- |
| "auto" (default) | WebSocket first, SSE fallback |
| "sse" | Force SSE only |
| "websocket" | Force WebSocket only |
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.5": {
          params: { transport: "auto" },
        },
        "openai-codex/gpt-5.5": {
          params: { transport: "auto" },
        },
      },
    },
  },
}
OpenClaw enables WebSocket warm-up by default for openai/* and openai-codex/* to reduce first-turn latency.
// Disable warm-up
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.5": {
          params: { openaiWsWarmup: false },
        },
      },
    },
  },
}
OpenClaw exposes a shared fast-mode toggle for openai/* and openai-codex/*:
  • Chat/UI: /fast status|on|off
  • Config: agents.defaults.models["<provider>/<model>"].params.fastMode
When enabled, OpenClaw maps fast mode to OpenAI priority processing (service_tier = "priority"). Existing service_tier values are preserved, and fast mode does not rewrite reasoning or text.verbosity.
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.5": { params: { fastMode: true } },
      },
    },
  },
}
Session overrides win over config. Clearing the session override in the Sessions UI returns the session to the configured default.
OpenAI’s API exposes priority processing via service_tier. Set it per model in OpenClaw:
{
  agents: {
    defaults: {
      models: {
        "openai/gpt-5.5": { params: { serviceTier: "priority" } },
      },
    },
  },
}
Supported values: auto, default, flex, priority.
serviceTier is only forwarded to native OpenAI endpoints (api.openai.com) and native Codex endpoints (chatgpt.com/backend-api). If you route either provider through a proxy, OpenClaw leaves service_tier untouched.
For direct OpenAI Responses models (openai/* on api.openai.com), the OpenAI plugin’s Pi-harness stream wrapper auto-enables server-side compaction:
  • Forces store: true (unless model compat sets supportsStore: false)
  • Injects context_management: [{ type: "compaction", compact_threshold: ... }]
  • Default compact_threshold: 70% of contextWindow (or 80000 when unavailable)
This applies to the built-in Pi harness path and to OpenAI provider hooks used by embedded runs. The native Codex app-server harness manages its own context through Codex and is configured separately with agents.defaults.agentRuntime.id.
Useful for compatible endpoints like Azure OpenAI Responses:
{
  agents: {
    defaults: {
      models: {
        "azure-openai-responses/gpt-5.5": {
          params: { responsesServerCompaction: true },
        },
      },
    },
  },
}
responsesServerCompaction only controls context_management injection. Direct OpenAI Responses models still force store: true unless compat sets supportsStore: false.
For GPT-5-family runs on openai/*, OpenClaw can use a stricter embedded execution contract:
{
  agents: {
    defaults: {
      embeddedPi: { executionContract: "strict-agentic" },
    },
  },
}
With strict-agentic, OpenClaw:
  • No longer treats a plan-only turn as successful progress when a tool action is available
  • Retries the turn with an act-now steer
  • Auto-enables update_plan for substantial work
  • Surfaces an explicit blocked state if the model keeps planning without acting
Scoped to OpenAI and Codex GPT-5-family runs only. Other providers and older model families keep default behavior.
OpenClaw treats direct OpenAI, Codex, and Azure OpenAI endpoints differently from generic OpenAI-compatible /v1 proxies.

Native routes (openai/*, Azure OpenAI):
  • Keep reasoning: { effort: "none" } only for models that support the OpenAI none effort
  • Omit disabled reasoning for models or proxies that reject reasoning.effort: "none"
  • Default tool schemas to strict mode
  • Attach hidden attribution headers on verified native hosts only
  • Keep OpenAI-only request shaping (service_tier, store, reasoning-compat, prompt-cache hints)
Proxy/compatible routes:
  • Use looser compat behavior
  • Strip Completions store from non-native openai-completions payloads
  • Accept advanced params.extra_body/params.extraBody pass-through JSON for OpenAI-compatible Completions proxies
  • Accept params.chat_template_kwargs for OpenAI-compatible Completions proxies such as vLLM
  • Do not force strict tool schemas or native-only headers
Azure OpenAI uses native transport and compat behavior but does not receive the hidden attribution headers.
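For an OpenAI-compatible Completions proxy such as vLLM, the pass-through parameters above might be used like this. The model ref and the keys inside extraBody and chat_template_kwargs are illustrative, not confirmed names:

```
{
  agents: {
    defaults: {
      models: {
        // Illustrative model ref for a Completions-style proxy route
        "openai-completions/my-vllm-model": {
          params: {
            extraBody: { top_k: 40 },                       // forwarded verbatim to the proxy
            chat_template_kwargs: { enable_thinking: false }, // e.g. a vLLM template switch
          },
        },
      },
    },
  },
}
```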

Model selection

Choosing providers, model refs, and failover behavior.

Image generation

Shared image tool parameters and provider selection.

Video generation

Shared video tool parameters and provider selection.

OAuth and auth

Auth details and credential reuse rules.