OpenAI
OpenAI provides developer APIs for GPT models. Codex supports ChatGPT sign-in for subscription access or API key sign-in for usage-based access. Codex cloud requires ChatGPT sign-in.

Option A: OpenAI API key (OpenAI Platform)
Best for: direct API access and usage-based billing. Get your API key from the OpenAI dashboard.

CLI setup
Config snippet
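As a hedged sketch of what this snippet could look like: the field placement and model ref below are assumptions (the exact schema is documented elsewhere), and the API key is assumed to be supplied via the OPENAI_API_KEY environment variable rather than stored in the file.

```json5
// Sketch only: field placement and model ref are illustrative assumptions;
// the API key is read from the OPENAI_API_KEY environment variable.
{
  "agents": {
    "defaults": {
      "model": "openai/gpt-5"
    }
  }
}
```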
Option B: OpenAI Code (Codex) subscription
Best for: using ChatGPT/Codex subscription access instead of an API key. Codex cloud requires ChatGPT sign-in, while the Codex CLI supports ChatGPT or API key sign-in.

CLI setup (Codex OAuth)
Config snippet (Codex subscription)
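A minimal sketch for subscription access, assuming the openai-codex/* provider prefix (used later on this page) selects Codex OAuth auth; the exact model ref is illustrative:

```json5
// Sketch only: model ref is illustrative; auth comes from the
// Codex OAuth sign-in above rather than an API key.
{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5-codex"
    }
  }
}
```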
Codex transport default
OpenClaw uses pi-ai for model streaming. For openai-codex/* models you can set
agents.defaults.models.<provider/model>.params.transport to select the transport:

- "auto" (default): try WebSocket first, then fall back to SSE
- "sse": force SSE
- "websocket": force WebSocket
OpenAI Responses server-side compaction
For direct OpenAI Responses models (openai/* using api: "openai-responses" with
baseUrl on api.openai.com), OpenClaw now auto-enables OpenAI server-side
compaction payload hints:
- Forces store: true (unless model compat sets supportsStore: false)
- Injects context_management: [{ type: "compaction", compact_threshold: ... }],
  where compact_threshold is 70% of the model's contextWindow (or 80000 when
  unavailable).
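As a sketch of the resulting Responses request body: for a model with a 200000-token contextWindow, the 70% rule gives a threshold of 140000 (the model name is illustrative and the payload is abbreviated to the injected hints):

```json5
// Abbreviated Responses request body; 140000 = 70% of a 200000-token contextWindow.
{
  "model": "gpt-5",
  "store": true,
  "context_management": [
    { "type": "compaction", "compact_threshold": 140000 }
  ]
}
```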
Enable server-side compaction explicitly
Use this when you want to force context_management injection on compatible
Responses models (for example Azure OpenAI Responses):
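A hedged sketch, assuming responsesServerCompaction is a boolean model param (the provider/model ref and the placement under params are assumptions):

```json5
{
  "agents": {
    "defaults": {
      "models": {
        // Model ref and param placement are assumptions.
        "azure/gpt-5": {
          "params": { "responsesServerCompaction": true }
        }
      }
    }
  }
}
```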
Enable with a custom threshold
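A sketch assuming a threshold override can sit alongside the flag; the compactThreshold field name is an assumption here, not confirmed by this page:

```json5
{
  "agents": {
    "defaults": {
      "models": {
        "azure/gpt-5": {
          "params": {
            "responsesServerCompaction": true,
            // Field name is an assumption; would override the
            // default 70%-of-contextWindow threshold.
            "compactThreshold": 120000
          }
        }
      }
    }
  }
}
```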
Disable server-side compaction
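A sketch of the opt-out, under the same assumption that responsesServerCompaction is a boolean model param:

```json5
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5": {
          // Disables context_management injection; store: true is still
          // forced unless compat sets supportsStore: false.
          "params": { "responsesServerCompaction": false }
        }
      }
    }
  }
}
```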
responsesServerCompaction only controls context_management injection.
Direct OpenAI Responses models still force store: true unless compat sets
supportsStore: false.
Notes
- Model refs always use provider/model (see /concepts/models).
- Auth details + reuse rules are in /concepts/oauth.