# SGLang
SGLang can serve open-source models via an OpenAI-compatible HTTP API. OpenClaw connects to SGLang using the `openai-completions` API.
OpenClaw can also auto-discover available models from SGLang when you opt
in by setting `SGLANG_API_KEY` (any value works if your server does not enforce auth)
and you do not define an explicit `models.providers.sglang` entry.
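Because the surface is plain OpenAI-style HTTP, you can exercise it directly with `curl`. A minimal sketch, assuming a server on the default port; the model id and the `sk-local` placeholder key are examples, not values OpenClaw requires:

```shell
# JSON body for an OpenAI-style chat completion (model id is an example):
body='{"model":"meta-llama/Llama-3.1-8B-Instruct","messages":[{"role":"user","content":"Hello"}]}'
# Send it to SGLang (requires a running server; see Quick start below):
curl -s http://127.0.0.1:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${SGLANG_API_KEY:-sk-local}" \
  -d "$body" || echo "request failed (is the SGLang server running?)"
```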
## Quick start
- Start SGLang with an OpenAI-compatible server. OpenClaw talks to the standard `/v1` endpoints (for example `/v1/models`, `/v1/chat/completions`). SGLang commonly runs on `http://127.0.0.1:30000/v1`.
- Opt in by setting `SGLANG_API_KEY` (any value works if no auth is configured).
- Run onboarding and choose SGLang, or set a model directly.
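The steps above can be sketched end to end. The `sglang.launch_server` invocation follows SGLang's documented CLI; the OpenClaw steps are left as comments because the exact subcommand names depend on your OpenClaw version:

```shell
# 1) Start SGLang's OpenAI-compatible server (model path is an example):
#    python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --port 30000
# 2) Opt in to auto-discovery; any value works if the server enforces no auth:
export SGLANG_API_KEY=sk-local
# 3) Run OpenClaw onboarding and choose SGLang, or set a model directly
#    (consult your OpenClaw help output for the exact subcommand).
echo "SGLANG_API_KEY=${SGLANG_API_KEY}"
```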
## Model discovery (implicit provider)
When `SGLANG_API_KEY` is set (or an auth profile exists) and you do not
define `models.providers.sglang`, OpenClaw queries
`GET http://127.0.0.1:30000/v1/models` to discover available models.
If you define `models.providers.sglang` explicitly, auto-discovery is skipped and
you must define models manually.
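The discovery call returns an OpenAI-style model list. A sketch of the shape, using a hard-coded sample payload rather than a live server (the model id is an example):

```shell
# Typical shape of a GET /v1/models response (sample, not a live call):
resp='{"object":"list","data":[{"id":"meta-llama/Llama-3.1-8B-Instruct","object":"model","owned_by":"sglang"}]}'
# Extract the model ids, as a discovery client would:
python3 -c 'import json,sys; [print(m["id"]) for m in json.load(sys.stdin)["data"]]' <<<"$resp"
# → meta-llama/Llama-3.1-8B-Instruct
```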
## Explicit configuration (manual models)
Use explicit config when:
- SGLang runs on a different host/port.
- You want to pin `contextWindow`/`maxTokens` values.
- Your server requires a real API key (or you want to control headers).
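A minimal sketch of an explicit entry. The `models.providers.sglang` path and the `contextWindow`/`maxTokens` names come from this page; the other field names (`baseUrl`, `apiKey`) and all values are assumptions — check your OpenClaw config reference for the exact schema:

```json
{
  "models": {
    "providers": {
      "sglang": {
        "baseUrl": "http://127.0.0.1:30000/v1",
        "apiKey": "sk-local",
        "models": [
          {
            "id": "meta-llama/Llama-3.1-8B-Instruct",
            "contextWindow": 131072,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```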
## Troubleshooting
- Check that the server is reachable (for example, by requesting `/v1/models`).
- If requests fail with auth errors, set a real
`SGLANG_API_KEY` that matches your server configuration, or configure the provider explicitly under `models.providers.sglang`.
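Both checks can be combined into one probe. A sketch, assuming the default base URL; `-f` makes `curl` fail on HTTP errors, so auth rejections also show up as "NOT reachable":

```shell
BASE_URL="http://127.0.0.1:30000/v1"
if curl -fsS "$BASE_URL/models" -H "Authorization: Bearer ${SGLANG_API_KEY:-sk-local}" >/dev/null; then
  echo "SGLang reachable at $BASE_URL"
else
  echo "SGLang NOT reachable at $BASE_URL (check the server and your key)"
fi
```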