This guide walks through building a provider plugin that adds a model provider (LLM) to OpenClaw. By the end you will have a provider with a model catalog, API key auth, and dynamic model resolution.
If you have not built any OpenClaw plugin before, read Getting Started first for the basic package structure and manifest setup.
Provider plugins add models to OpenClaw’s normal inference loop. If the model must run through a native agent daemon that owns threads, compaction, or tool events, pair the provider with an agent harness instead of putting daemon protocol details in core.

Walkthrough

Step 1: Package and manifest

package.json
{
  "name": "@myorg/openclaw-acme-ai",
  "version": "1.0.0",
  "type": "module",
  "openclaw": {
    "extensions": ["./index.ts"],
    "providers": ["acme-ai"],
    "compat": {
      "pluginApi": ">=2026.3.24-beta.2",
      "minGatewayVersion": "2026.3.24-beta.2"
    },
    "build": {
      "openclawVersion": "2026.3.24-beta.2",
      "pluginSdkVersion": "2026.3.24-beta.2"
    }
  }
}
The manifest declares providerAuthEnvVars so OpenClaw can detect credentials without loading your plugin runtime. Add providerAuthAliases when a provider variant should reuse another provider id’s auth. modelSupport is optional and lets OpenClaw auto-load your provider plugin from shorthand model ids like acme-large before runtime hooks exist. If you publish the provider on ClawHub, those openclaw.compat and openclaw.build fields are required in package.json.
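The exact manifest schema is not shown in this guide, so the fragment below is only an illustrative guess at how the auth-metadata fields named above might appear in openclaw.plugin.json; treat every shape here as hypothetical:

```json
{
  "providerAuthEnvVars": { "acme-ai": ["ACME_AI_API_KEY"] },
  "providerAuthAliases": { "acme-ai-eu": "acme-ai" },
  "modelSupport": { "acme-ai": ["acme-large", "acme-small"] }
}
```

The point of these fields is that OpenClaw can read them without loading your plugin runtime, so credential detection and shorthand model-id resolution stay cheap.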
Step 2: Register the provider

A minimal provider needs an id, label, auth, and catalog:
index.ts
import { definePluginEntry } from "openclaw/plugin-sdk/plugin-entry";
import { createProviderApiKeyAuthMethod } from "openclaw/plugin-sdk/provider-auth";

export default definePluginEntry({
  id: "acme-ai",
  name: "Acme AI",
  description: "Acme AI model provider",
  register(api) {
    api.registerProvider({
      id: "acme-ai",
      label: "Acme AI",
      docsPath: "/providers/acme-ai",
      envVars: ["ACME_AI_API_KEY"],

      auth: [
        createProviderApiKeyAuthMethod({
          providerId: "acme-ai",
          methodId: "api-key",
          label: "Acme AI API key",
          hint: "API key from your Acme AI dashboard",
          optionKey: "acmeAiApiKey",
          flagName: "--acme-ai-api-key",
          envVar: "ACME_AI_API_KEY",
          promptMessage: "Enter your Acme AI API key",
          defaultModel: "acme-ai/acme-large",
        }),
      ],

      catalog: {
        order: "simple",
        run: async (ctx) => {
          const apiKey =
            ctx.resolveProviderApiKey("acme-ai").apiKey;
          if (!apiKey) return null;
          return {
            provider: {
              baseUrl: "https://api.acme-ai.com/v1",
              apiKey,
              api: "openai-completions",
              models: [
                {
                  id: "acme-large",
                  name: "Acme Large",
                  reasoning: true,
                  input: ["text", "image"],
                  cost: { input: 3, output: 15, cacheRead: 0.3, cacheWrite: 3.75 },
                  contextWindow: 200000,
                  maxTokens: 32768,
                },
                {
                  id: "acme-small",
                  name: "Acme Small",
                  reasoning: false,
                  input: ["text"],
                  cost: { input: 1, output: 5, cacheRead: 0.1, cacheWrite: 1.25 },
                  contextWindow: 128000,
                  maxTokens: 8192,
                },
              ],
            },
          };
        },
      },
    });
  },
});
That is a working provider. Users can now run openclaw onboard --acme-ai-api-key <key> and select acme-ai/acme-large as their model.

If the upstream provider uses different control tokens than OpenClaw, add a small bidirectional text transform instead of replacing the stream path:
api.registerTextTransforms({
  input: [
    { from: /red basket/g, to: "blue basket" },
    { from: /paper ticket/g, to: "digital ticket" },
    { from: /left shelf/g, to: "right shelf" },
  ],
  output: [
    { from: /blue basket/g, to: "red basket" },
    { from: /digital ticket/g, to: "paper ticket" },
    { from: /right shelf/g, to: "left shelf" },
  ],
});
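As a sketch of how such bidirectional rules behave, the snippet below applies each rule in order with a plain sequential regex rewrite; this is an illustrative model of the round trip, not OpenClaw's actual transform engine:

```typescript
type TextRule = { from: RegExp; to: string };

// Apply each rule in order; mirrors the input/output rule shape above.
function applyTextTransforms(text: string, rules: TextRule[]): string {
  return rules.reduce((acc, rule) => acc.replace(rule.from, rule.to), text);
}

const inputRules: TextRule[] = [{ from: /red basket/g, to: "blue basket" }];
const outputRules: TextRule[] = [{ from: /blue basket/g, to: "red basket" }];

// Round trip: what goes out rewritten comes back in the original vocabulary.
const sent = applyTextTransforms("put it in the red basket", inputRules);
const received = applyTextTransforms(sent, outputRules);
```

Because every input rule has a mirrored output rule, the provider only ever sees its own vocabulary while OpenClaw's control markers stay intact on the way back.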
input rewrites the final system prompt and text message content before transport. output rewrites assistant text deltas and final text before OpenClaw parses its own control markers or channel delivery.

For bundled providers that only register one text provider with API-key auth plus a single catalog-backed runtime, prefer the narrower defineSingleProviderPluginEntry(...) helper:
import { defineSingleProviderPluginEntry } from "openclaw/plugin-sdk/provider-entry";

export default defineSingleProviderPluginEntry({
  id: "acme-ai",
  name: "Acme AI",
  description: "Acme AI model provider",
  provider: {
    label: "Acme AI",
    docsPath: "/providers/acme-ai",
    auth: [
      {
        methodId: "api-key",
        label: "Acme AI API key",
        hint: "API key from your Acme AI dashboard",
        optionKey: "acmeAiApiKey",
        flagName: "--acme-ai-api-key",
        envVar: "ACME_AI_API_KEY",
        promptMessage: "Enter your Acme AI API key",
        defaultModel: "acme-ai/acme-large",
      },
    ],
    catalog: {
      buildProvider: () => ({
        api: "openai-completions",
        baseUrl: "https://api.acme-ai.com/v1",
        models: [{ id: "acme-large", name: "Acme Large" }],
      }),
      buildStaticProvider: () => ({
        api: "openai-completions",
        baseUrl: "https://api.acme-ai.com/v1",
        models: [{ id: "acme-large", name: "Acme Large" }],
      }),
    },
  },
});
buildProvider is the live catalog path used when OpenClaw can resolve real provider auth. It may perform provider-specific discovery. Use buildStaticProvider only for offline rows that are safe to show before auth is configured; it must not require credentials or make network requests. OpenClaw’s models list --all display currently executes static catalogs only for bundled provider plugins, with an empty config, empty env, and no agent/workspace paths.

If your auth flow also needs to patch models.providers.*, aliases, and the agent default model during onboarding, use the preset helpers from openclaw/plugin-sdk/provider-onboard. The narrowest helpers are createDefaultModelPresetAppliers(...), createDefaultModelsPresetAppliers(...), and createModelCatalogPresetAppliers(...).

When a provider’s native endpoint supports streamed usage blocks on the normal openai-completions transport, prefer the shared catalog helpers in openclaw/plugin-sdk/provider-catalog-shared instead of hardcoding provider-id checks. supportsNativeStreamingUsageCompat(...) and applyProviderNativeStreamingUsageCompat(...) detect support from the endpoint capability map, so native Moonshot/DashScope-style endpoints still opt in even when a plugin is using a custom provider id.
Step 3: Add dynamic model resolution

If your provider accepts arbitrary model IDs (like a proxy or router), add resolveDynamicModel:
api.registerProvider({
  // ... id, label, auth, catalog from above

  resolveDynamicModel: (ctx) => ({
    id: ctx.modelId,
    name: ctx.modelId,
    provider: "acme-ai",
    api: "openai-completions",
    baseUrl: "https://api.acme-ai.com/v1",
    reasoning: false,
    input: ["text"],
    cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
    contextWindow: 128000,
    maxTokens: 8192,
  }),
});
If resolving requires a network call, use prepareDynamicModel for async warm-up; resolveDynamicModel runs again after it completes.
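One way this pairing can work is to cache async metadata during warm-up so the resolver stays synchronous. In this sketch, fetchModelMetadata is a hypothetical stand-in for a real provider API call, and the hook signatures are simplified:

```typescript
// Hypothetical sketch: cache async model metadata so resolveDynamicModel
// can stay synchronous. fetchModelMetadata stands in for a network call.
type ModelMeta = { contextWindow: number; maxTokens: number };

const metadataCache = new Map<string, ModelMeta>();

async function fetchModelMetadata(modelId: string): Promise<ModelMeta> {
  // Placeholder for a request to the provider's model-info endpoint.
  return { contextWindow: 128000, maxTokens: 8192 };
}

const providerHooks = {
  prepareDynamicModel: async (ctx: { modelId: string }) => {
    if (!metadataCache.has(ctx.modelId)) {
      metadataCache.set(ctx.modelId, await fetchModelMetadata(ctx.modelId));
    }
  },
  resolveDynamicModel: (ctx: { modelId: string }) => {
    const meta = metadataCache.get(ctx.modelId);
    return {
      id: ctx.modelId,
      contextWindow: meta?.contextWindow ?? 128000,
      maxTokens: meta?.maxTokens ?? 8192,
    };
  },
};
```

The fallback values in resolveDynamicModel keep the resolver safe even if the warm-up never ran for a given model id.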
Step 4: Add runtime hooks (as needed)

Most providers only need catalog + resolveDynamicModel. Add hooks incrementally as your provider requires them.

Shared helper builders now cover the most common replay/tool-compat families, so plugins usually do not need to hand-wire each hook one by one:
import { buildProviderReplayFamilyHooks } from "openclaw/plugin-sdk/provider-model-shared";
import { buildProviderStreamFamilyHooks } from "openclaw/plugin-sdk/provider-stream";
import { buildProviderToolCompatFamilyHooks } from "openclaw/plugin-sdk/provider-tools";

const GOOGLE_FAMILY_HOOKS = {
  ...buildProviderReplayFamilyHooks({ family: "google-gemini" }),
  ...buildProviderStreamFamilyHooks("google-thinking"),
  ...buildProviderToolCompatFamilyHooks("gemini"),
};

api.registerProvider({
  id: "acme-gemini-compatible",
  // ...
  ...GOOGLE_FAMILY_HOOKS,
});
Available replay families today:
| Family | What it wires in | Bundled examples |
| --- | --- | --- |
| openai-compatible | Shared OpenAI-style replay policy for OpenAI-compatible transports, including tool-call-id sanitation, assistant-first ordering fixes, and generic Gemini-turn validation where the transport needs it | moonshot, ollama, xai, zai |
| anthropic-by-model | Claude-aware replay policy chosen by modelId, so Anthropic-message transports only get Claude-specific thinking-block cleanup when the resolved model is actually a Claude id | amazon-bedrock, anthropic-vertex |
| google-gemini | Native Gemini replay policy plus bootstrap replay sanitation and tagged reasoning-output mode | google, google-gemini-cli |
| passthrough-gemini | Gemini thought-signature sanitation for Gemini models running through OpenAI-compatible proxy transports; does not enable native Gemini replay validation or bootstrap rewrites | openrouter, kilocode, opencode, opencode-go |
| hybrid-anthropic-openai | Hybrid policy for providers that mix Anthropic-message and OpenAI-compatible model surfaces in one plugin; optional Claude-only thinking-block dropping stays scoped to the Anthropic side | minimax |
Available stream families today:
| Family | What it wires in | Bundled examples |
| --- | --- | --- |
| google-thinking | Gemini thinking payload normalization on the shared stream path | google, google-gemini-cli |
| kilocode-thinking | Kilo reasoning wrapper on the shared proxy stream path, with kilo/auto and unsupported proxy reasoning ids skipping injected thinking | kilocode |
| moonshot-thinking | Moonshot binary native-thinking payload mapping from config + /think level | moonshot |
| minimax-fast-mode | MiniMax fast-mode model rewrite on the shared stream path | minimax, minimax-portal |
| openai-responses-defaults | Shared native OpenAI/Codex Responses wrappers: attribution headers, /fast/serviceTier, text verbosity, native Codex web search, reasoning-compat payload shaping, and Responses context management | openai, openai-codex |
| openrouter-thinking | OpenRouter reasoning wrapper for proxy routes, with unsupported-model/auto skips handled centrally | openrouter |
| tool-stream-default-on | Default-on tool_stream wrapper for providers like Z.AI that want tool streaming unless explicitly disabled | zai |
Each family builder is composed from lower-level public helpers exported from the same package, which you can reach for when a provider needs to go off the common pattern:
  • openclaw/plugin-sdk/provider-model-shared - ProviderReplayFamily, buildProviderReplayFamilyHooks(...), and the raw replay builders (buildOpenAICompatibleReplayPolicy, buildAnthropicReplayPolicyForModel, buildGoogleGeminiReplayPolicy, buildHybridAnthropicOrOpenAIReplayPolicy). Also exports Gemini replay helpers (sanitizeGoogleGeminiReplayHistory, resolveTaggedReasoningOutputMode) and endpoint/model helpers (resolveProviderEndpoint, normalizeProviderId, normalizeGooglePreviewModelId, normalizeNativeXaiModelId).
  • openclaw/plugin-sdk/provider-stream - ProviderStreamFamily, buildProviderStreamFamilyHooks(...), composeProviderStreamWrappers(...), plus the shared OpenAI/Codex wrappers (createOpenAIAttributionHeadersWrapper, createOpenAIFastModeWrapper, createOpenAIServiceTierWrapper, createOpenAIResponsesContextManagementWrapper, createCodexNativeWebSearchWrapper), DeepSeek V4 OpenAI-compatible wrapper (createDeepSeekV4OpenAICompatibleThinkingWrapper), Anthropic Messages thinking prefill cleanup (createAnthropicThinkingPrefillPayloadWrapper), and shared proxy/provider wrappers (createOpenRouterWrapper, createToolStreamWrapper, createMinimaxFastModeWrapper).
  • openclaw/plugin-sdk/provider-tools - ProviderToolCompatFamily, buildProviderToolCompatFamilyHooks("gemini"), underlying Gemini schema helpers (normalizeGeminiToolSchemas, inspectGeminiToolSchemas), and xAI compat helpers (resolveXaiModelCompatPatch(), applyXaiModelCompat(model)). The bundled xAI plugin uses normalizeResolvedModel + contributeResolvedModelCompat with these to keep xAI rules owned by the provider.
Some stream helpers stay provider-local on purpose. @openclaw/anthropic-provider keeps wrapAnthropicProviderStream, resolveAnthropicBetas, resolveAnthropicFastMode, resolveAnthropicServiceTier, and the lower-level Anthropic wrapper builders in its own public api.ts / contract-api.ts seam because they encode Claude OAuth beta handling and context1m gating. The xAI plugin similarly keeps native xAI Responses shaping in its own wrapStreamFn (/fast aliases, default tool_stream, unsupported strict-tool cleanup, xAI-specific reasoning-payload removal).

The same package-root pattern also backs @openclaw/openai-provider (provider builders, default-model helpers, realtime provider builders) and @openclaw/openrouter-provider (provider builder plus onboarding/config helpers).
For providers that need a token exchange before each inference call:
prepareRuntimeAuth: async (ctx) => {
  const exchanged = await exchangeToken(ctx.apiKey);
  return {
    apiKey: exchanged.token,
    baseUrl: exchanged.baseUrl,
    expiresAt: exchanged.expiresAt,
  };
},
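Since the returned auth carries an expiresAt, a natural refinement is to cache the exchanged token until shortly before expiry. The sketch below assumes a hypothetical exchangeToken helper like the one named above; the cache shape and refresh margin are illustrative:

```typescript
// Hypothetical cache around a token exchange; refresh 60s before expiry.
type Exchanged = { token: string; baseUrl: string; expiresAt: number };

let cached: Exchanged | null = null;

async function exchangeToken(apiKey: string): Promise<Exchanged> {
  // Placeholder for the real exchange-endpoint call.
  return {
    token: `runtime-${apiKey}`,
    baseUrl: "https://api.acme-ai.com/v1",
    expiresAt: Date.now() + 3600_000,
  };
}

async function getRuntimeAuth(apiKey: string): Promise<Exchanged> {
  if (!cached || cached.expiresAt - Date.now() < 60_000) {
    cached = await exchangeToken(apiKey);
  }
  return cached;
}
```

With this in place, prepareRuntimeAuth can call getRuntimeAuth(ctx.apiKey) on every inference without hitting the exchange endpoint each time.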
OpenClaw calls hooks in the order below; most providers only use two or three. Compatibility-only provider fields that OpenClaw no longer calls, such as ProviderPlugin.capabilities and suppressBuiltInModel, are not listed here.
| # | Hook | When to use |
| --- | --- | --- |
| 1 | catalog | Model catalog or base URL defaults |
| 2 | applyConfigDefaults | Provider-owned global defaults during config materialization |
| 3 | normalizeModelId | Legacy/preview model-id alias cleanup before lookup |
| 4 | normalizeTransport | Provider-family api / baseUrl cleanup before generic model assembly |
| 5 | normalizeConfig | Normalize models.providers.<id> config |
| 6 | applyNativeStreamingUsageCompat | Native streaming-usage compat rewrites for config providers |
| 7 | resolveConfigApiKey | Provider-owned env-marker auth resolution |
| 8 | resolveSyntheticAuth | Local/self-hosted or config-backed synthetic auth |
| 9 | shouldDeferSyntheticProfileAuth | Lower synthetic stored-profile placeholders behind env/config auth |
| 10 | resolveDynamicModel | Accept arbitrary upstream model IDs |
| 11 | prepareDynamicModel | Async metadata fetch before resolving |
| 12 | normalizeResolvedModel | Transport rewrites before the runner |
| 13 | contributeResolvedModelCompat | Compat flags for vendor models behind another compatible transport |
| 14 | normalizeToolSchemas | Provider-owned tool-schema cleanup before registration |
| 15 | inspectToolSchemas | Provider-owned tool-schema diagnostics |
| 16 | resolveReasoningOutputMode | Tagged vs native reasoning-output contract |
| 17 | prepareExtraParams | Default request params |
| 18 | createStreamFn | Fully custom StreamFn transport |
| 19 | wrapStreamFn | Custom headers/body wrappers on the normal stream path |
| 20 | resolveTransportTurnState | Native per-turn headers/metadata |
| 21 | resolveWebSocketSessionPolicy | Native WS session headers/cool-down |
| 22 | formatApiKey | Custom runtime token shape |
| 23 | refreshOAuth | Custom OAuth refresh |
| 24 | buildAuthDoctorHint | Auth repair guidance |
| 25 | matchesContextOverflowError | Provider-owned overflow detection |
| 26 | classifyFailoverReason | Provider-owned rate-limit/overload classification |
| 27 | isCacheTtlEligible | Prompt cache TTL gating |
| 28 | buildMissingAuthMessage | Custom missing-auth hint |
| 29 | augmentModelCatalog | Synthetic forward-compat rows |
| 30 | resolveThinkingProfile | Model-specific /think option set |
| 31 | isBinaryThinking | Binary thinking on/off compatibility |
| 32 | supportsXHighThinking | xhigh reasoning support compatibility |
| 33 | resolveDefaultThinkingLevel | Default /think policy compatibility |
| 34 | isModernModelRef | Live/smoke model matching |
| 35 | prepareRuntimeAuth | Token exchange before inference |
| 36 | resolveUsageAuth | Custom usage credential parsing |
| 37 | fetchUsageSnapshot | Custom usage endpoint |
| 38 | createEmbeddingProvider | Provider-owned embedding adapter for memory/search |
| 39 | buildReplayPolicy | Custom transcript replay/compaction policy |
| 40 | sanitizeReplayHistory | Provider-specific replay rewrites after generic cleanup |
| 41 | validateReplayTurns | Strict replay-turn validation before the embedded runner |
| 42 | onModelSelected | Post-selection callback (e.g. telemetry) |
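To make the error-classification hooks concrete, here is a hedged sketch of matchesContextOverflowError and classifyFailoverReason keyed off upstream error messages and status codes; the hook signatures and return values are simplified assumptions, not the documented contract:

```typescript
// Illustrative sketch of provider-owned error classification.
const errorHooks = {
  // Detect context-window overflow from typical upstream error text.
  matchesContextOverflowError: (err: { message: string }): boolean =>
    /context length|maximum context|too many tokens/i.test(err.message),

  // Map HTTP-ish failures onto failover reasons; undefined defers to
  // OpenClaw's generic classification.
  classifyFailoverReason: (err: { status?: number; message: string }) => {
    if (err.status === 429) return "rate-limit";
    if (err.status !== undefined && err.status >= 500) return "overloaded";
    return undefined;
  },
};
```

Keeping these predicates provider-owned means new upstream error formats can be handled in the plugin without touching core failover logic.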
Runtime fallback notes:
  • normalizeConfig checks the matched provider first, then other hook-capable provider plugins until one actually changes the config. If no provider hook rewrites a supported Google-family config entry, the bundled Google config normalizer still applies.
  • resolveConfigApiKey uses the provider hook when exposed. The bundled amazon-bedrock path also has a built-in AWS env-marker resolver here, even though Bedrock runtime auth itself still uses the AWS SDK default chain.
  • resolveSystemPromptContribution lets a provider inject cache-aware system-prompt guidance for a model family. Prefer it over before_prompt_build when the behavior belongs to one provider/model family and should preserve the stable/dynamic cache split.
For detailed descriptions and real-world examples, see Internals: Provider Runtime Hooks.
Step 5: Add extra capabilities (optional)

A provider plugin can register speech, realtime transcription, realtime voice, media understanding, image generation, video generation, web fetch, and web search alongside text inference. OpenClaw classifies this as a hybrid-capability plugin, the recommended pattern for company plugins (one plugin per vendor). See Internals: Capability Ownership.

Register each capability inside register(api) alongside your existing api.registerProvider(...) call. Pick only the capabilities you need:
import {
  assertOkOrThrowProviderError,
  postJsonRequest,
} from "openclaw/plugin-sdk/provider-http";

api.registerSpeechProvider({
  id: "acme-ai",
  label: "Acme Speech",
  isConfigured: ({ config }) => Boolean(config.messages?.tts),
  synthesize: async (req) => {
    const { response, release } = await postJsonRequest({
      url: "https://api.example.com/v1/speech",
      headers: new Headers({ "Content-Type": "application/json" }),
      body: { text: req.text },
      timeoutMs: req.timeoutMs,
      fetchFn: fetch,
      auditContext: "acme speech",
    });
    try {
      await assertOkOrThrowProviderError(response, "Acme Speech API error");
      return {
        audioBuffer: Buffer.from(await response.arrayBuffer()),
        outputFormat: "mp3",
        fileExtension: ".mp3",
        voiceCompatible: false,
      };
    } finally {
      await release();
    }
  },
});
Use assertOkOrThrowProviderError(...) for provider HTTP failures so plugins share capped error-body reads, JSON error parsing, and request-id suffixes.
Step 6: Test

src/provider.test.ts
import { describe, it, expect } from "vitest";
// Export your provider config object from index.ts or a dedicated file
import { acmeProvider } from "./provider.js";

describe("acme-ai provider", () => {
  it("resolves dynamic models", () => {
    const model = acmeProvider.resolveDynamicModel!({
      modelId: "acme-beta-v3",
    } as any);
    expect(model.id).toBe("acme-beta-v3");
    expect(model.provider).toBe("acme-ai");
  });

  it("returns catalog when key is available", async () => {
    const result = await acmeProvider.catalog!.run({
      resolveProviderApiKey: () => ({ apiKey: "test-key" }),
    } as any);
    expect(result?.provider?.models).toHaveLength(2);
  });

  it("returns null catalog when no key", async () => {
    const result = await acmeProvider.catalog!.run({
      resolveProviderApiKey: () => ({ apiKey: undefined }),
    } as any);
    expect(result).toBeNull();
  });
});

Publish to ClawHub

Provider plugins publish the same way as any other external code plugin:
clawhub package publish your-org/your-plugin --dry-run
clawhub package publish your-org/your-plugin
Do not use the legacy skill-only publish alias here; plugin packages should use clawhub package publish.

File structure

<bundled-plugin-root>/acme-ai/
├── package.json              # openclaw.providers metadata
├── openclaw.plugin.json      # Manifest with provider auth metadata
├── index.ts                  # definePluginEntry + registerProvider
└── src/
    ├── provider.test.ts      # Tests
    └── usage.ts              # Usage endpoint (optional)

Catalog order reference

catalog.order controls when your catalog merges relative to built-in providers:
| Order | When | Use case |
| --- | --- | --- |
| simple | First pass | Plain API-key providers |
| profile | After simple | Providers gated on auth profiles |
| paired | After profile | Synthesize multiple related entries |
| late | Last pass | Override existing providers (wins on collision) |
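For instance, a proxy plugin that should shadow an existing provider's entry could register a late-pass catalog; this sketch reuses the catalog shape from Step 2, and the proxy URL is illustrative:

```typescript
// Hypothetical "late" catalog: merges last, so its rows win on id
// collision with earlier catalog passes.
const overrideCatalog = {
  order: "late" as const,
  run: async () => ({
    provider: {
      baseUrl: "https://proxy.internal.example/v1", // illustrative proxy URL
      api: "openai-completions",
      models: [{ id: "acme-large", name: "Acme Large (proxied)" }],
    },
  }),
};
```

Because "late" runs last, this entry replaces any earlier acme-large row rather than sitting alongside it.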

Next steps