SGLang can serve open-source models via an OpenAI-compatible HTTP API. OpenClaw can connect to SGLang using the openai-completions API.
OpenClaw can also auto-discover available models from SGLang when you opt in with `SGLANG_API_KEY` (any value works if your server does not enforce auth) and you do not define an explicit `models.providers.sglang` entry.
OpenClaw treats sglang as a local OpenAI-compatible provider that supports streamed usage accounting, so status/context token counts can update from `stream_options.include_usage` responses.
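Because usage accounting is streamed, requests can opt into per-chunk usage reporting via the standard OpenAI `stream_options` field. A minimal request-body sketch (the model name is illustrative):

```json
{
  "model": "Qwen/Qwen2.5-7B-Instruct",
  "messages": [{ "role": "user", "content": "Hello" }],
  "stream": true,
  "stream_options": { "include_usage": true }
}
```

With `include_usage` set, the final streamed chunk carries a `usage` object with prompt and completion token counts.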
## Getting started

### Start SGLang

Launch SGLang with an OpenAI-compatible server. Your base URL should expose `/v1` endpoints (for example `/v1/models`, `/v1/chat/completions`). SGLang commonly runs on `http://127.0.0.1:30000/v1`.
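As a sketch, SGLang's OpenAI-compatible server can be started with `sglang.launch_server`; the model path below is only an example, substitute your own:

```shell
# Launch SGLang's OpenAI-compatible server on port 30000
# (model path is an example; any supported model works)
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --host 127.0.0.1 \
  --port 30000
```

Once running, the base URL for OpenClaw is `http://127.0.0.1:30000/v1`.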
## Model discovery (implicit provider)

When `SGLANG_API_KEY` is set (or an auth profile exists) and you do not define `models.providers.sglang`, OpenClaw will query:

`GET http://127.0.0.1:30000/v1/models`
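The endpoint returns an OpenAI-style model list; each entry's `id` becomes a discoverable model. An illustrative response shape (the ids and `owned_by` value will match whatever your server loaded):

```json
{
  "object": "list",
  "data": [
    {
      "id": "Qwen/Qwen2.5-7B-Instruct",
      "object": "model",
      "owned_by": "sglang"
    }
  ]
}
```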
If you set `models.providers.sglang` explicitly, auto-discovery is skipped and you must define models manually.

## Explicit configuration (manual models)

Use explicit config when:

- SGLang runs on a different host/port.
- You want to pin `contextWindow`/`maxTokens` values.
- Your server requires a real API key (or you want to control headers).
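A minimal sketch of an explicit provider entry. The exact key names (`baseUrl`, `api`, `apiKey`, `models`) are assumptions about OpenClaw's config schema; check the configuration reference before copying:

```json
{
  "models": {
    "providers": {
      "sglang": {
        "baseUrl": "http://127.0.0.1:30000/v1",
        "api": "openai-completions",
        "apiKey": "${SGLANG_API_KEY}",
        "models": [
          {
            "id": "Qwen/Qwen2.5-7B-Instruct",
            "contextWindow": 32768,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
```

Pinning `contextWindow` and `maxTokens` here overrides whatever discovery would have reported.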
## Advanced configuration

### Proxy-style behavior

SGLang is treated as a proxy-style OpenAI-compatible `/v1` backend, not a native OpenAI endpoint.

| Behavior | SGLang |
|---|---|
| OpenAI-only request shaping | Not applied |
| `service_tier`, Responses store, prompt-cache hints | Not sent |
| Reasoning-compat payload shaping | Not applied |
| Hidden attribution headers (originator, version, User-Agent) | Not injected on custom SGLang base URLs |
## Troubleshooting

### Server not reachable

Verify the server is running and responding, for example with `curl http://127.0.0.1:30000/v1/models`.

### Auth errors

If requests fail with auth errors, set a real `SGLANG_API_KEY` that matches your server configuration, or configure the provider explicitly under `models.providers.sglang`.

## Related
- Model selection: choosing providers, model refs, and failover behavior.
- Configuration reference: full config schema including provider entries.