
SGLang can serve open-source models via an OpenAI-compatible HTTP API, and OpenClaw connects to it through the openai-completions API. OpenClaw can also auto-discover available models from SGLang when you opt in by setting SGLANG_API_KEY (any value works if your server does not enforce auth) and you do not define an explicit models.providers.sglang entry. OpenClaw treats sglang as a local OpenAI-compatible provider with streamed usage accounting, so status and context token counts can update from stream_options.include_usage responses.
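With stream_options.include_usage, an OpenAI-compatible server ends the stream with a chunk whose usage field carries token counts while choices is empty. A minimal sketch of reading that final chunk (chunk shapes follow the OpenAI streaming format; the function name is illustrative, not OpenClaw's internal code):

```python
def usage_from_stream(chunks):
    """Return (prompt_tokens, completion_tokens) from an OpenAI-style
    chat-completions stream. With stream_options.include_usage, the
    final chunk carries a `usage` object and an empty `choices` list."""
    usage = None
    for chunk in chunks:
        if chunk.get("usage"):
            usage = chunk["usage"]
    if usage is None:
        return None
    return usage["prompt_tokens"], usage["completion_tokens"]

# Simulated stream: two content deltas, then the usage-bearing final chunk.
stream = [
    {"choices": [{"delta": {"content": "Hello"}}], "usage": None},
    {"choices": [{"delta": {"content": "!"}}], "usage": None},
    {"choices": [], "usage": {"prompt_tokens": 12, "completion_tokens": 2}},
]
print(usage_from_stream(stream))  # (12, 2)
```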

Getting started

1. Start SGLang

Launch SGLang with an OpenAI-compatible server. Your base URL should expose /v1 endpoints (for example /v1/models, /v1/chat/completions). SGLang commonly runs on:
  • http://127.0.0.1:30000/v1
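One common way to launch such a server is SGLang's built-in launcher (the model path below is only an example; pick a model that fits your hardware):

```shell
# Launch SGLang's OpenAI-compatible server on the default port.
# The --model-path value is an example; substitute your own model.
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --host 127.0.0.1 \
  --port 30000
```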

2. Set an API key

Any value works if no auth is configured on your server:
export SGLANG_API_KEY="sglang-local"

3. Run onboarding or set a model directly

openclaw onboard
Or configure the model manually:
{
  agents: {
    defaults: {
      model: { primary: "sglang/your-model-id" },
    },
  },
}

Model discovery (implicit provider)

When SGLANG_API_KEY is set (or an auth profile exists) and you do not define models.providers.sglang, OpenClaw will query:
  • GET http://127.0.0.1:30000/v1/models
and convert the returned IDs into model entries.
If you set models.providers.sglang explicitly, auto-discovery is skipped and you must define models manually.
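Conceptually, discovery maps each id in the standard OpenAI-style listing to a model entry. A minimal sketch of that conversion, with illustrative defaults (OpenClaw's actual entry fields and defaults may differ):

```python
def models_from_listing(listing):
    """Convert an OpenAI-style GET /v1/models response into entries
    like those under models.providers.sglang.models.
    The contextWindow/maxTokens defaults here are illustrative,
    not what OpenClaw actually assigns."""
    return [
        {
            "id": item["id"],
            "name": item["id"],
            "contextWindow": 128000,
            "maxTokens": 8192,
        }
        for item in listing.get("data", [])
    ]

# Shape of a typical /v1/models response body.
listing = {"object": "list", "data": [{"id": "your-model-id", "object": "model"}]}
print(models_from_listing(listing))
```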

Explicit configuration (manual models)

Use explicit config when:
  • SGLang runs on a different host/port.
  • You want to pin contextWindow/maxTokens values.
  • Your server requires a real API key (or you want to control headers).
{
  models: {
    providers: {
      sglang: {
        baseUrl: "http://127.0.0.1:30000/v1",
        apiKey: "${SGLANG_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "your-model-id",
            name: "Local SGLang Model",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
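The "${SGLANG_API_KEY}" value above is an environment-variable placeholder. As a rough illustration of that kind of substitution (OpenClaw's own expansion mechanics are not documented here; this only sketches the idea with the standard library):

```python
import os
import string

def expand_env(value):
    """Expand ${VAR} placeholders in a config string from the
    environment, leaving unknown variables untouched.
    (Illustrative only; OpenClaw's substitution may differ.)"""
    return string.Template(value).safe_substitute(os.environ)

os.environ["SGLANG_API_KEY"] = "sglang-local"
print(expand_env("${SGLANG_API_KEY}"))  # sglang-local
```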

Advanced configuration

SGLang is treated as a proxy-style OpenAI-compatible /v1 backend, not a native OpenAI endpoint.
Compared with a native OpenAI endpoint, requests to SGLang behave as follows:
  • OpenAI-only request shaping: not applied
  • service_tier, Responses store, and prompt-cache hints: not sent
  • Reasoning-compat payload shaping: not applied
  • Hidden attribution headers (originator, version, User-Agent): not injected on custom SGLang base URLs

Troubleshooting

Server not reachable
Verify the server is running and responding:
curl http://127.0.0.1:30000/v1/models

Auth errors
If requests fail with auth errors, set a real SGLANG_API_KEY that matches your server configuration, or configure the provider explicitly under models.providers.sglang. If you run SGLang without authentication, any non-empty value for SGLANG_API_KEY is sufficient to opt in to model discovery.
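If the models endpoint responds but requests still fail, a direct chat-completion call can help isolate server-side problems (the model id below is a placeholder for one returned by /v1/models):

```shell
# Minimal chat-completion request against the SGLang server.
curl http://127.0.0.1:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $SGLANG_API_KEY" \
  -d '{
    "model": "your-model-id",
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 8
  }'
```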

Model selection

Choosing providers, model refs, and failover behavior.

Configuration reference

Full config schema including provider entries.