This guide walks through building a provider plugin that adds a model provider (LLM) to OpenClaw. By the end you will have a provider with a model catalog, API key auth, and dynamic model resolution.
If you have not built an OpenClaw plugin before, read Getting Started first for the basic package structure and manifest setup.
Walkthrough

Step 1: Package and manifest
Declare providerAuthEnvVars so OpenClaw can detect credentials without loading your plugin runtime. Add providerAuthAliases when a provider variant should reuse another provider id's auth. modelSupport is optional and lets OpenClaw auto-load your provider plugin from shorthand model ids like acme-large before runtime hooks exist. If you publish the provider on ClawHub, the openclaw.compat and openclaw.build fields are required in package.json.

Step 2: Register the provider
A minimal provider needs an id, label, auth, and catalog:

index.ts

That is a working provider. Users can now run openclaw onboard --acme-ai-api-key <key> and select acme-ai/acme-large as their model.

If the upstream provider uses different control tokens than OpenClaw, add a small bidirectional text transform instead of replacing the stream path: input rewrites the final system prompt and text message content before transport; output rewrites assistant text deltas and final text before OpenClaw parses its own control markers or channel delivery.

For bundled providers that only register one text provider with API-key auth plus a single catalog-backed runtime, prefer the narrower defineSingleProviderPluginEntry(...) helper:

buildProvider is the live catalog path used when OpenClaw can resolve real
provider auth. It may perform provider-specific discovery. Use
buildStaticProvider only for offline rows that are safe to show before auth
is configured; it must not require credentials or make network requests.
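As a sketch of that split, with illustrative stand-in signatures (the real hook contract lives in the SDK):

```typescript
// Stand-in types for the live vs static catalog split.
interface CatalogRow { id: string; label: string }
interface AuthContext { apiKey: string; baseUrl: string }

// Live path: may call the provider's model-list endpoint using real auth.
async function buildProvider(auth: AuthContext): Promise<CatalogRow[]> {
  const res = await fetch(`${auth.baseUrl}/v1/models`, {
    headers: { Authorization: `Bearer ${auth.apiKey}` },
  });
  const body = (await res.json()) as { data: Array<{ id: string }> };
  return body.data.map((m) => ({ id: m.id, label: m.id }));
}

// Static path: safe to show before auth exists. No credentials, no network.
function buildStaticProvider(): CatalogRow[] {
  return [{ id: "acme-large", label: "Acme Large" }];
}
```

Keeping the static path free of network calls is what makes it safe for the pre-auth display described above.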
OpenClaw's models list --all display currently executes static catalogs only for bundled provider plugins, with an empty config, empty env, and no agent/workspace paths.

If your auth flow also needs to patch models.providers.*, aliases, and
the agent default model during onboarding, use the preset helpers from
openclaw/plugin-sdk/provider-onboard. The narrowest helpers are
createDefaultModelPresetAppliers(...),
createDefaultModelsPresetAppliers(...), and
createModelCatalogPresetAppliers(...).

When a provider's native endpoint supports streamed usage blocks on the
normal openai-completions transport, prefer the shared catalog helpers in
openclaw/plugin-sdk/provider-catalog-shared instead of hardcoding
provider-id checks. supportsNativeStreamingUsageCompat(...) and
applyProviderNativeStreamingUsageCompat(...) detect support from the
endpoint capability map, so native Moonshot/DashScope-style endpoints still
opt in even when a plugin is using a custom provider id.

Step 3: Add dynamic model resolution
If your provider accepts arbitrary model IDs (like a proxy or router), add resolveDynamicModel:

If resolving requires a network call, use prepareDynamicModel for async warm-up; resolveDynamicModel runs again after it completes.
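A minimal sketch of the two hooks together, using stand-in types and an in-memory cache (the real signatures are SDK-defined):

```typescript
// Stand-in model shape; the real resolved-model type comes from the SDK.
interface ResolvedModel { id: string; contextWindow: number }

const metadataCache = new Map<string, ResolvedModel>();

// Synchronous hook: returns a model once metadata is cached,
// or undefined to signal that async warm-up is still needed.
function resolveDynamicModel(modelId: string): ResolvedModel | undefined {
  return metadataCache.get(modelId);
}

// Async warm-up: fetch/derive metadata; resolveDynamicModel runs again after.
async function prepareDynamicModel(modelId: string): Promise<void> {
  if (metadataCache.has(modelId)) return;
  // A real implementation would query the upstream /models endpoint here.
  metadataCache.set(modelId, { id: modelId, contextWindow: 32_000 });
}
```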
Step 4: Add runtime hooks (as needed)

Most providers only need catalog + resolveDynamicModel. Add hooks incrementally as your provider requires them.

Shared helper builders now cover the most common replay/tool-compat families, so plugins usually do not need to hand-wire each hook one by one.

Available replay families today:

| Family | What it wires in | Bundled examples |
|---|---|---|
| openai-compatible | Shared OpenAI-style replay policy for OpenAI-compatible transports, including tool-call-id sanitation, assistant-first ordering fixes, and generic Gemini-turn validation where the transport needs it | moonshot, ollama, xai, zai |
| anthropic-by-model | Claude-aware replay policy chosen by modelId, so Anthropic-message transports only get Claude-specific thinking-block cleanup when the resolved model is actually a Claude id | amazon-bedrock, anthropic-vertex |
| google-gemini | Native Gemini replay policy plus bootstrap replay sanitation and tagged reasoning-output mode | google, google-gemini-cli |
| passthrough-gemini | Gemini thought-signature sanitation for Gemini models running through OpenAI-compatible proxy transports; does not enable native Gemini replay validation or bootstrap rewrites | openrouter, kilocode, opencode, opencode-go |
| hybrid-anthropic-openai | Hybrid policy for providers that mix Anthropic-message and OpenAI-compatible model surfaces in one plugin; optional Claude-only thinking-block dropping stays scoped to the Anthropic side | minimax |
Available stream families today:

| Family | What it wires in | Bundled examples |
|---|---|---|
| google-thinking | Gemini thinking payload normalization on the shared stream path | google, google-gemini-cli |
| kilocode-thinking | Kilo reasoning wrapper on the shared proxy stream path, with kilo/auto and unsupported proxy reasoning ids skipping injected thinking | kilocode |
| moonshot-thinking | Moonshot binary native-thinking payload mapping from config + /think level | moonshot |
| minimax-fast-mode | MiniMax fast-mode model rewrite on the shared stream path | minimax, minimax-portal |
| openai-responses-defaults | Shared native OpenAI/Codex Responses wrappers: attribution headers, /fast/serviceTier, text verbosity, native Codex web search, reasoning-compat payload shaping, and Responses context management | openai, openai-codex |
| openrouter-thinking | OpenRouter reasoning wrapper for proxy routes, with unsupported-model/auto skips handled centrally | openrouter |
| tool-stream-default-on | Default-on tool_stream wrapper for providers like Z.AI that want tool streaming unless explicitly disabled | zai |
SDK seams powering the family builders
Each family builder is composed from lower-level public helpers exported from the same package, which you can reach for when a provider needs to go off the common pattern:
- openclaw/plugin-sdk/provider-model-shared - ProviderReplayFamily, buildProviderReplayFamilyHooks(...), and the raw replay builders (buildOpenAICompatibleReplayPolicy, buildAnthropicReplayPolicyForModel, buildGoogleGeminiReplayPolicy, buildHybridAnthropicOrOpenAIReplayPolicy). Also exports Gemini replay helpers (sanitizeGoogleGeminiReplayHistory, resolveTaggedReasoningOutputMode) and endpoint/model helpers (resolveProviderEndpoint, normalizeProviderId, normalizeGooglePreviewModelId, normalizeNativeXaiModelId).
- openclaw/plugin-sdk/provider-stream - ProviderStreamFamily, buildProviderStreamFamilyHooks(...), composeProviderStreamWrappers(...), plus the shared OpenAI/Codex wrappers (createOpenAIAttributionHeadersWrapper, createOpenAIFastModeWrapper, createOpenAIServiceTierWrapper, createOpenAIResponsesContextManagementWrapper, createCodexNativeWebSearchWrapper), the DeepSeek V4 OpenAI-compatible wrapper (createDeepSeekV4OpenAICompatibleThinkingWrapper), Anthropic Messages thinking-prefill cleanup (createAnthropicThinkingPrefillPayloadWrapper), and shared proxy/provider wrappers (createOpenRouterWrapper, createToolStreamWrapper, createMinimaxFastModeWrapper).
- openclaw/plugin-sdk/provider-tools - ProviderToolCompatFamily, buildProviderToolCompatFamilyHooks("gemini"), underlying Gemini schema helpers (normalizeGeminiToolSchemas, inspectGeminiToolSchemas), and xAI compat helpers (resolveXaiModelCompatPatch(), applyXaiModelCompat(model)). The bundled xAI plugin uses normalizeResolvedModel + contributeResolvedModelCompat with these to keep xAI rules owned by the provider.
@openclaw/anthropic-provider keeps wrapAnthropicProviderStream, resolveAnthropicBetas, resolveAnthropicFastMode, resolveAnthropicServiceTier, and the lower-level Anthropic wrapper builders in its own public api.ts / contract-api.ts seam because they encode Claude OAuth beta handling and context1m gating. The xAI plugin similarly keeps native xAI Responses shaping in its own wrapStreamFn (/fast aliases, default tool_stream, unsupported strict-tool cleanup, xAI-specific reasoning-payload removal).

The same package-root pattern also backs @openclaw/openai-provider (provider builders, default-model helpers, realtime provider builders) and @openclaw/openrouter-provider (provider builder plus onboarding/config helpers).

- Token exchange
- Custom headers
- Native transport identity
- Usage and billing
For providers that need a token exchange before each inference call:
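A hedged sketch of such an exchange with short-lived token caching; the prepareRuntimeAuth hook name appears in the hook table below, but the signature here is an assumption:

```typescript
// Stand-in shape for a short-lived session credential.
interface RuntimeAuth { token: string; expiresAt: number }

let cached: RuntimeAuth | undefined;

async function exchangeToken(apiKey: string): Promise<RuntimeAuth> {
  // A real provider would POST the long-lived key to its token endpoint.
  return { token: `session-${apiKey}`, expiresAt: Date.now() + 15 * 60_000 };
}

// Runs before each inference call; reuses the token until near expiry.
async function prepareRuntimeAuth(apiKey: string): Promise<string> {
  if (!cached || cached.expiresAt - Date.now() < 60_000) {
    cached = await exchangeToken(apiKey);
  }
  return cached.token;
}
```

Caching with an expiry margin keeps the per-call hook cheap while still refreshing before the upstream token lapses.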
All available provider hooks
OpenClaw calls hooks in this order. Most providers only use 2-3:
Compatibility-only provider fields that OpenClaw no longer calls, such as ProviderPlugin.capabilities and suppressBuiltInModel, are not listed here.

| # | Hook | When to use |
|---|---|---|
| 1 | catalog | Model catalog or base URL defaults |
| 2 | applyConfigDefaults | Provider-owned global defaults during config materialization |
| 3 | normalizeModelId | Legacy/preview model-id alias cleanup before lookup |
| 4 | normalizeTransport | Provider-family api / baseUrl cleanup before generic model assembly |
| 5 | normalizeConfig | Normalize models.providers.<id> config |
| 6 | applyNativeStreamingUsageCompat | Native streaming-usage compat rewrites for config providers |
| 7 | resolveConfigApiKey | Provider-owned env-marker auth resolution |
| 8 | resolveSyntheticAuth | Local/self-hosted or config-backed synthetic auth |
| 9 | shouldDeferSyntheticProfileAuth | Lower synthetic stored-profile placeholders behind env/config auth |
| 10 | resolveDynamicModel | Accept arbitrary upstream model IDs |
| 11 | prepareDynamicModel | Async metadata fetch before resolving |
| 12 | normalizeResolvedModel | Transport rewrites before the runner |
| 13 | contributeResolvedModelCompat | Compat flags for vendor models behind another compatible transport |
| 14 | normalizeToolSchemas | Provider-owned tool-schema cleanup before registration |
| 15 | inspectToolSchemas | Provider-owned tool-schema diagnostics |
| 16 | resolveReasoningOutputMode | Tagged vs native reasoning-output contract |
| 17 | prepareExtraParams | Default request params |
| 18 | createStreamFn | Fully custom StreamFn transport |
| 19 | wrapStreamFn | Custom headers/body wrappers on the normal stream path |
| 20 | resolveTransportTurnState | Native per-turn headers/metadata |
| 21 | resolveWebSocketSessionPolicy | Native WS session headers/cool-down |
| 22 | formatApiKey | Custom runtime token shape |
| 23 | refreshOAuth | Custom OAuth refresh |
| 24 | buildAuthDoctorHint | Auth repair guidance |
| 25 | matchesContextOverflowError | Provider-owned overflow detection |
| 26 | classifyFailoverReason | Provider-owned rate-limit/overload classification |
| 27 | isCacheTtlEligible | Prompt cache TTL gating |
| 28 | buildMissingAuthMessage | Custom missing-auth hint |
| 29 | augmentModelCatalog | Synthetic forward-compat rows |
| 30 | resolveThinkingProfile | Model-specific /think option set |
| 31 | isBinaryThinking | Binary thinking on/off compatibility |
| 32 | supportsXHighThinking | xhigh reasoning support compatibility |
| 33 | resolveDefaultThinkingLevel | Default /think policy compatibility |
| 34 | isModernModelRef | Live/smoke model matching |
| 35 | prepareRuntimeAuth | Token exchange before inference |
| 36 | resolveUsageAuth | Custom usage credential parsing |
| 37 | fetchUsageSnapshot | Custom usage endpoint |
| 38 | createEmbeddingProvider | Provider-owned embedding adapter for memory/search |
| 39 | buildReplayPolicy | Custom transcript replay/compaction policy |
| 40 | sanitizeReplayHistory | Provider-specific replay rewrites after generic cleanup |
| 41 | validateReplayTurns | Strict replay-turn validation before the embedded runner |
| 42 | onModelSelected | Post-selection callback (e.g. telemetry) |
Runtime fallback notes:

- normalizeConfig checks the matched provider first, then other hook-capable provider plugins until one actually changes the config. If no provider hook rewrites a supported Google-family config entry, the bundled Google config normalizer still applies.
- resolveConfigApiKey uses the provider hook when exposed. The bundled amazon-bedrock path also has a built-in AWS env-marker resolver here, even though Bedrock runtime auth itself still uses the AWS SDK default chain.
- resolveSystemPromptContribution lets a provider inject cache-aware system-prompt guidance for a model family. Prefer it over before_prompt_build when the behavior belongs to one provider/model family and should preserve the stable/dynamic cache split.
Step 5: Add extra capabilities (optional)
A provider plugin can register speech, realtime transcription, realtime voice, media understanding, image generation, video generation, web fetch, and web search alongside text inference. OpenClaw classifies this as a hybrid-capability plugin, the recommended pattern for company plugins (one plugin per vendor). See Internals: Capability Ownership.

Register each capability inside register(api) alongside your existing api.registerProvider(...) call. Pick only the tabs you need:

- Speech (TTS)
- Realtime transcription
- Realtime voice
- Media understanding
- Image and video generation
- Web fetch and search
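As an illustrative sketch only: registerSpeechProvider below is a hypothetical name standing in for whatever capability registration call your SDK version actually exports; the point is that extra capabilities register inside the same register(api) entry as the text provider:

```typescript
// All shapes below are stand-ins, including the registerSpeechProvider name.
interface SpeechRequest { text: string; voice?: string }
interface SpeechResult { audio: Uint8Array; mimeType: string }

interface PluginApi {
  registerProvider(p: { id: string; label: string }): void;
  registerSpeechProvider?(p: {
    id: string;
    synthesize(req: SpeechRequest): Promise<SpeechResult>;
  }): void;
}

export function register(api: PluginApi): void {
  // Text inference provider, as in Step 2.
  api.registerProvider({ id: "acme-ai", label: "Acme AI" });
  // Extra capability registered in the same entry point.
  api.registerSpeechProvider?.({
    id: "acme-ai",
    async synthesize(_req) {
      // A real plugin would call the vendor TTS endpoint here.
      return { audio: new Uint8Array(), mimeType: "audio/mpeg" };
    },
  });
}
```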
Use assertOkOrThrowProviderError(...) for provider HTTP failures so plugins share capped error-body reads, JSON error parsing, and request-id suffixes.
Step 6: Test
src/provider.test.ts
Publish to ClawHub
Provider plugins publish the same way as any other external code plugin: clawhub package publish.
File structure
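A typical layout, assembled from the files this guide mentions; any names beyond those are illustrative:

```
acme-ai-provider/
├── package.json        # manifest: providerAuthEnvVars, openclaw.compat, openclaw.build
├── index.ts            # register(api) entry: provider id, auth, catalog
└── src/
    └── provider.test.ts
```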
Catalog order reference
catalog.order controls when your catalog merges relative to built-in
providers:
| Order | When | Use case |
|---|---|---|
| simple | First pass | Plain API-key providers |
| profile | After simple | Providers gated on auth profiles |
| paired | After profile | Synthesize multiple related entries |
| late | Last pass | Override existing providers (wins on collision) |
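A small sketch of setting the pass; the surrounding catalog object shape is an assumption, and only the order values come from the table above:

```typescript
// Stand-in catalog object; only the order values are documented above.
type CatalogOrder = "simple" | "profile" | "paired" | "late";

const catalog: { order: CatalogOrder; models: Array<{ id: string }> } = {
  // A plain API-key provider merges in the first pass.
  order: "simple",
  models: [{ id: "acme-large" }],
};
```

Switching order to "late" would instead merge last and win on id collisions with built-in providers.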
Next steps
- Channel Plugins - if your plugin also provides a channel
- SDK Runtime - api.runtime helpers (TTS, search, subagent)
- SDK Overview - full subpath import reference
- Plugin Internals - hook details and bundled examples