Cerebras provides high-speed OpenAI-compatible inference.
| Property | Value |
| --- | --- |
| Provider | `cerebras` |
| Auth | `CEREBRAS_API_KEY` |
| API | OpenAI-compatible |
| Base URL | `https://api.cerebras.ai/v1` |

Getting Started

1. Get an API key: create one in the Cerebras Cloud Console.
2. Run onboarding: `openclaw onboard --auth-choice cerebras-api-key`
3. Verify models are available: `openclaw models list --provider cerebras`

Non-Interactive Setup

```shell
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice cerebras-api-key \
  --cerebras-api-key "$CEREBRAS_API_KEY"
```
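Because the provider is OpenAI-compatible, requests follow the standard chat-completions shape against the base URL above. A minimal stdlib sketch of what such a request looks like; the helper name and prompt are illustrative, not part of OpenClaw:

```python
import json
import os
import urllib.request

BASE_URL = "https://api.cerebras.ai/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    # Hypothetical helper: assembles an OpenAI-style chat-completions
    # request with the bearer key taken from the environment.
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('CEREBRAS_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("zai-glm-4.7", "Hello")
print(req.full_url)  # https://api.cerebras.ai/v1/chat/completions
```

Sending the request (e.g. with `urllib.request.urlopen`) requires a valid `CEREBRAS_API_KEY`; the sketch only builds it.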

Built-In Catalog

OpenClaw ships a static Cerebras catalog for the public OpenAI-compatible endpoint:
| Model ref | Name | Notes |
| --- | --- | --- |
| `cerebras/zai-glm-4.7` | Z.ai GLM 4.7 | Default model; preview reasoning model |
| `cerebras/gpt-oss-120b` | GPT OSS 120B | Production reasoning model |
| `cerebras/qwen-3-235b-a22b-instruct-2507` | Qwen 3 235B Instruct | Preview non-reasoning model |
| `cerebras/llama3.1-8b` | Llama 3.1 8B | Production speed-focused model |
Cerebras marks `zai-glm-4.7` and `qwen-3-235b-a22b-instruct-2507` as preview models, and both `llama3.1-8b` and `qwen-3-235b-a22b-instruct-2507` are documented for deprecation on May 27, 2026. Check Cerebras' supported-models page before relying on them in production.
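The `cerebras/` prefix in the refs above is the provider name; the part after the slash is the bare model `id` used in manual config (see the `models` array below). A trivial sketch of that mapping, with a hypothetical helper name:

```python
def split_model_ref(ref: str) -> tuple[str, str]:
    # "cerebras/zai-glm-4.7" -> ("cerebras", "zai-glm-4.7")
    provider, _, model_id = ref.partition("/")
    return provider, model_id

print(split_model_ref("cerebras/zai-glm-4.7"))  # ('cerebras', 'zai-glm-4.7')
```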

Manual Config

The bundled plugin usually means you only need the API key. Use an explicit `models.providers.cerebras` config when you want to override model metadata:
```json5
{
  env: { CEREBRAS_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "cerebras/zai-glm-4.7" },
    },
  },
  models: {
    mode: "merge",
    providers: {
      cerebras: {
        baseUrl: "https://api.cerebras.ai/v1",
        apiKey: "${CEREBRAS_API_KEY}",
        api: "openai-completions",
        models: [
          { id: "zai-glm-4.7", name: "Z.ai GLM 4.7" },
          { id: "gpt-oss-120b", name: "GPT OSS 120B" },
        ],
      },
    },
  },
}
```
If the Gateway runs as a daemon (launchd/systemd), make sure `CEREBRAS_API_KEY` is available to that process, for example in `~/.openclaw/.env` or through `env.shellEnv`.
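For example, a one-line entry in `~/.openclaw/.env` (the key value is a placeholder):

```
CEREBRAS_API_KEY=sk-...
```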