

Anthropic builds the Claude model family. OpenClaw supports two auth routes:
  • API key — direct Anthropic API access with usage-based billing (anthropic/* models)
  • Claude CLI — reuse an existing Claude CLI login on the same host
Anthropic staff told us OpenClaw-style Claude CLI usage is allowed again, so OpenClaw treats Claude CLI reuse and claude -p usage as sanctioned unless Anthropic publishes a new policy. For long-lived gateway hosts, Anthropic API keys remain the clearest and most predictable production path. See Anthropic's current public docs for details.

Getting started

Best for: standard API access and usage-based billing.
1. Get your API key

   Create an API key in the Anthropic Console.

2. Run onboarding

   openclaw onboard
   # choose: Anthropic API key

   Or pass the key directly:

   openclaw onboard --anthropic-api-key "$ANTHROPIC_API_KEY"

3. Verify the model is available

   openclaw models list --provider anthropic

Config example

{
  env: { ANTHROPIC_API_KEY: "sk-ant-..." },
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}

Thinking defaults (Claude 4.6)

Claude 4.6 models default to adaptive thinking in OpenClaw when no explicit thinking level is set. Override per-message with /think:<level> or in model params:
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": {
          params: { thinking: "adaptive" },
        },
      },
    },
  },
}
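For a one-off override, the per-message form takes a level after the colon. The level name below is illustrative; use whichever levels your OpenClaw build accepts ("adaptive" is the only level named in this guide):

```
/think:adaptive Summarize the open incidents in this channel
```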
See Anthropic's related docs on extended thinking for more detail.

Prompt caching

OpenClaw supports Anthropic’s prompt caching feature for API-key auth.
Value              Cache duration  Description
"short" (default)  5 minutes       Applied automatically for API-key auth
"long"             1 hour          Extended cache
"none"             No caching      Disable prompt caching
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": {
          params: { cacheRetention: "long" },
        },
      },
    },
  },
}
Use model-level params as your baseline, then override specific agents via agents.list[].params:
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-6" },
      models: {
        "anthropic/claude-opus-4-6": {
          params: { cacheRetention: "long" },
        },
      },
    },
    list: [
      { id: "research", default: true },
      { id: "alerts", params: { cacheRetention: "none" } },
    ],
  },
}
Config merge order:
  1. agents.defaults.models["provider/model"].params
  2. agents.list[].params (matching id, overrides by key)
This lets one agent keep a long-lived cache while another agent on the same model disables caching for bursty/low-reuse traffic.
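The merge order above amounts to a shallow, key-wise override. This is a minimal sketch of that behavior, not OpenClaw's actual implementation; the function name is invented for illustration:

```python
def resolve_params(model_defaults: dict, agent_params: dict) -> dict:
    """Hypothetical sketch of the documented merge order:
    1. agents.defaults.models["provider/model"].params is the baseline
    2. agents.list[].params overrides it key by key
    """
    merged = dict(model_defaults)   # start from the model-level baseline
    merged.update(agent_params)     # agent-level keys win on conflict
    return merged

baseline = {"cacheRetention": "long"}
print(resolve_params(baseline, {}))                          # → {'cacheRetention': 'long'}
print(resolve_params(baseline, {"cacheRetention": "none"}))  # → {'cacheRetention': 'none'}
```

So the "research" agent (no params) inherits the long-lived cache, while "alerts" overrides only the keys it sets.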
  • Anthropic Claude models on Bedrock (amazon-bedrock/*anthropic.claude*) accept cacheRetention pass-through when configured.
  • Non-Anthropic Bedrock models are forced to cacheRetention: "none" at runtime.
  • API-key smart defaults also seed cacheRetention: "short" for Claude-on-Bedrock refs when no explicit value is set.
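Under those rules, a Bedrock pass-through config might look like the sketch below. The model ref is a hypothetical placeholder matching the amazon-bedrock/*anthropic.claude* pattern; substitute the Bedrock model ID you actually use:

```
{
  agents: {
    defaults: {
      models: {
        // Hypothetical ref — any amazon-bedrock/*anthropic.claude* model
        // accepts cacheRetention pass-through per the notes above
        "amazon-bedrock/anthropic.claude-example": {
          params: { cacheRetention: "short" },
        },
      },
    },
  },
}
```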

Advanced configuration

OpenClaw’s shared /fast toggle supports direct Anthropic traffic (API-key and OAuth to api.anthropic.com).
Command    Maps to
/fast on   service_tier: "auto"
/fast off  service_tier: "standard_only"
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-sonnet-4-6": {
          params: { fastMode: true },
        },
      },
    },
  },
}
  • Only injected for direct api.anthropic.com requests. Proxy routes leave service_tier untouched.
  • Explicit serviceTier or service_tier params override /fast when both are set.
  • On accounts without Priority Tier capacity, service_tier: "auto" may resolve to standard.
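Because explicit serviceTier params win over the toggle, pinning a tier in config effectively disables /fast for that model. A sketch, assuming serviceTier takes the same values the table above maps to:

```
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-sonnet-4-6": {
          // Explicit serviceTier overrides the /fast toggle when both are set
          params: { serviceTier: "standard_only" },
        },
      },
    },
  },
}
```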
The bundled Anthropic plugin registers image and PDF understanding. OpenClaw auto-resolves media capabilities from the configured Anthropic auth — no additional config is needed.
Property         Value
Default model    claude-opus-4-6
Supported input  Images, PDF documents
When an image or PDF is attached to a conversation, OpenClaw automatically routes it through the Anthropic media understanding provider.
Anthropic’s 1M context window is beta-gated. Enable it per model:
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-6": {
          params: { context1m: true },
        },
      },
    },
  },
}
OpenClaw maps this to anthropic-beta: context-1m-2025-08-07 on requests. The same params.context1m: true setting also applies to the Claude CLI backend (claude-cli/*) for eligible Opus and Sonnet models, expanding the runtime context window for those CLI sessions to match the direct-API behavior.
Requires long-context access on your Anthropic credential. Legacy token auth (sk-ant-oat-*) is rejected for 1M context requests — OpenClaw logs a warning and falls back to the standard context window.
anthropic/claude-opus-4.7 and its claude-cli variant have a 1M context window by default — no params.context1m: true needed.

Troubleshooting

Anthropic token auth expires and can be revoked. For new setups, use an Anthropic API key instead.
Anthropic auth is per agent — new agents do not inherit the main agent’s keys. Re-run onboarding for that agent (or configure an API key on the gateway host), then verify with openclaw models status.
Run openclaw models status to see which auth profile is active. Re-run onboarding, or configure an API key for that profile path.
Check openclaw models status --json for auth.unusableProfiles. Anthropic rate-limit cooldowns can be model-scoped, so a sibling Anthropic model may still be usable. Add another Anthropic profile or wait for cooldown.
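Only the auth.unusableProfiles path is documented here; the entry fields in this sketch (profile, reason) are illustrative assumptions about the payload shape:

```python
import json

# Hypothetical `openclaw models status --json` payload — only the
# auth.unusableProfiles path is documented; entry fields are assumptions.
raw = """
{
  "auth": {
    "unusableProfiles": [
      {"profile": "anthropic:default", "reason": "rate-limit cooldown"}
    ]
  }
}
"""
status = json.loads(raw)
for entry in status["auth"].get("unusableProfiles", []):
    # A sibling Anthropic model may still be usable if the cooldown is model-scoped
    print(f'{entry["profile"]}: {entry["reason"]}')
```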
More help: Troubleshooting and FAQ.

Model selection

Choosing providers, model refs, and failover behavior.

CLI backends

Claude CLI backend setup and runtime details.

Prompt caching

How prompt caching works across providers.

OAuth and auth

Auth details and credential reuse rules.