OpenAI
OpenAI provides developer APIs for GPT models. Codex supports ChatGPT sign-in for subscription access or API key sign-in for usage-based access. Codex cloud requires ChatGPT sign-in. OpenAI explicitly supports subscription OAuth usage in external tools/workflows like OpenClaw.

Option A: OpenAI API key (OpenAI Platform)
Best for: direct API access and usage-based billing. Get your API key from the OpenAI dashboard.

CLI setup
Config snippet
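A minimal sketch, assuming OpenClaw picks up the standard OPENAI_API_KEY environment variable for API-key auth and that the config file is JSON shaped after the agents.defaults.models.<provider/model>.params path used throughout this page (the surrounding keys are illustrative):

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        // Direct OpenAI API path (usage-based billing).
        "openai/gpt-5.4": {
          "params": {
            // Documented default: WebSocket-first, then SSE fallback.
            "transport": "auto"
          }
        },
        "openai/gpt-5.4-pro": {
          "params": {}
        }
      }
    }
  }
}
```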
OpenClaw exposes gpt-5.4 and gpt-5.4-pro for direct OpenAI API usage and forwards both through the openai/* Responses path.
OpenClaw does not expose openai/gpt-5.3-codex-spark on the direct OpenAI API path: pi-ai still ships a built-in row for that model, but live OpenAI API requests currently reject it, so OpenClaw intentionally suppresses the stale row and treats Spark as Codex-only.
Option B: OpenAI Codex subscription
Best for: using ChatGPT/Codex subscription access instead of an API key. Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or API key sign-in.

CLI setup (Codex OAuth)
Config snippet (Codex subscription)
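A minimal sketch along the same lines, assuming the same JSON shape; the openai-codex/* ref selects the ChatGPT/Codex OAuth path rather than the API key:

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        // ChatGPT/Codex subscription (OAuth) path.
        "openai-codex/gpt-5.4": {
          "params": {}
        }
      }
    }
  }
}
```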
gpt-5.4 is the current Codex model. OpenClaw maps it to openai-codex/gpt-5.4 for ChatGPT/Codex OAuth usage.
If your Codex account is entitled to Codex Spark, OpenClaw also supports:

openai-codex/gpt-5.3-codex-spark

(not the openai/gpt-5.3-codex-spark API-key path; see Option A above).
OpenClaw also preserves openai-codex/gpt-5.3-codex-spark when pi-ai
discovers it. Treat it as entitlement-dependent and experimental: Codex Spark is
separate from GPT-5.4 /fast, and availability depends on the signed-in Codex /
ChatGPT account.
Transport default
OpenClaw uses pi-ai for model streaming. For both openai/* and openai-codex/*, the default transport is "auto" (WebSocket-first, then SSE fallback).
You can set agents.defaults.models.<provider/model>.params.transport:

- "sse": force SSE
- "websocket": force WebSocket
- "auto": try WebSocket, then fall back to SSE
For openai/* (Responses API), OpenClaw also enables WebSocket warm-up by default (openaiWsWarmup: true) when WebSocket transport is used.
OpenAI WebSocket warm-up
OpenAI docs describe warm-up as optional. OpenClaw enables it by default for openai/* to reduce first-turn latency when using WebSocket transport.
Disable warm-up
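A sketch, assuming openaiWsWarmup is set per model alongside the other params:

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5.4": {
          "params": {
            "transport": "websocket",
            // Skip the warm-up request before the first turn.
            "openaiWsWarmup": false
          }
        }
      }
    }
  }
}
```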
Enable warm-up explicitly
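Same shape, with warm-up forced on:

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5.4": {
          "params": {
            "transport": "websocket",
            // Warm the WebSocket connection before the first turn.
            "openaiWsWarmup": true
          }
        }
      }
    }
  }
}
```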
OpenAI priority processing
OpenAI’s API exposes priority processing via service_tier=priority. In OpenClaw, set agents.defaults.models["openai/<model>"].params.serviceTier to pass that field through on direct openai/* Responses requests. Supported values are auto, default, flex, and priority.
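For example (a sketch; serviceTier is the documented param, the rest of the shape is as assumed above):

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5.4": {
          "params": {
            // Sent as service_tier on direct openai/* Responses requests.
            "serviceTier": "priority"
          }
        }
      }
    }
  }
}
```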
OpenAI fast mode
OpenClaw exposes a shared fast-mode toggle for both openai/* and openai-codex/* sessions:

- Chat/UI: /fast status|on|off
- Config: agents.defaults.models["<provider>/<model>"].params.fastMode
reasoning.effort = "low"when the payload does not already specify reasoningtext.verbosity = "low"when the payload does not already specify verbosityservice_tier = "priority"for directopenai/*Responses calls toapi.openai.com
OpenAI Responses server-side compaction
For direct OpenAI Responses models (openai/* using api: "openai-responses" with baseUrl on api.openai.com), OpenClaw now auto-enables OpenAI server-side compaction payload hints:

- Forces store: true (unless model compat sets supportsStore: false)
- Injects context_management: [{ type: "compaction", compact_threshold: ... }]
The default compact_threshold is 70% of the model contextWindow (or 80000 when the context window is unavailable).
Enable server-side compaction explicitly
Use this when you want to force context_management injection on compatible Responses models (for example Azure OpenAI Responses):
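A sketch, assuming responsesServerCompaction is a per-model param like the others on this page:

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        // For Azure, substitute your Azure OpenAI Responses model ref.
        "openai/gpt-5.4": {
          "params": {
            "responsesServerCompaction": true
          }
        }
      }
    }
  }
}
```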
Enable with a custom threshold
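The threshold key is not named on this page; the sketch below assumes a hypothetical compactThreshold param that overrides the 70%-of-contextWindow default:

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5.4": {
          "params": {
            "responsesServerCompaction": true,
            // Hypothetical key: passed as compact_threshold in context_management.
            "compactThreshold": 120000
          }
        }
      }
    }
  }
}
```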
Disable server-side compaction
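And to turn injection off (store: true is still forced on direct Responses models, as noted below):

```jsonc
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5.4": {
          "params": {
            // Stops context_management injection only.
            "responsesServerCompaction": false
          }
        }
      }
    }
  }
}
```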
responsesServerCompaction only controls context_management injection.
Direct OpenAI Responses models still force store: true unless compat sets
supportsStore: false.
Notes
- Model refs always use provider/model (see /concepts/models).
- Auth details + reuse rules are in /concepts/oauth.