`models.providers.<id>.localService` lets OpenClaw start a provider-owned local
model server on demand. It is provider-level config: when the selected model
belongs to that provider, OpenClaw probes the service, starts the process if the
endpoint is down, waits for readiness, then sends the model request.
Use it for local servers that are expensive to keep running all day, or for
manual setups where model selection should be enough to bring the backend up.
How it works
- A model request resolves to a configured provider.
- If that provider has `localService`, OpenClaw probes `healthUrl`.
- If the probe succeeds, OpenClaw uses the existing server.
- If the probe fails, OpenClaw starts `command` with `args`.
- OpenClaw polls readiness until `readyTimeoutMs` expires.
- The model request is sent through the normal provider transport.
- If OpenClaw started the process and `idleStopMs` is positive, the process is stopped after the last in-flight request has been idle for that long.
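The probe-start-poll flow above can be sketched in Python. This is a minimal illustration, not OpenClaw's implementation; `ensure_local_service`, `probe`, and `start` are hypothetical names standing in for the real internals:

```python
import time

def ensure_local_service(probe, start, ready_timeout_ms=120_000, poll_ms=250):
    """Probe the health URL; if it is down, start the server and poll
    until it answers or the deadline expires.

    probe: () -> bool, True when the health endpoint responds.
    start: () -> None, launches the configured command with its args.
    Returns "reused" if the server was already up, "started" if this
    call launched it. Raises TimeoutError past readyTimeoutMs.
    """
    if probe():
        return "reused"          # existing server: use it without adopting it
    start()                      # spawn `command` with `args`
    deadline = time.monotonic() + ready_timeout_ms / 1000
    while time.monotonic() < deadline:
        if probe():
            return "started"     # ready: the model request can now be sent
        time.sleep(poll_ms / 1000)
    raise TimeoutError("local service not ready before readyTimeoutMs")
```

With a live endpoint the start callable is never invoked, which matches the reuse behavior described in the operational notes.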
Config shape
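A sketch of the shape, assuming a JSON config file; the provider id, paths, port, and values are illustrative, and the field meanings are documented below:

```json
{
  "models": {
    "providers": {
      "myprovider": {
        "baseUrl": "http://127.0.0.1:8000/v1",
        "localService": {
          "command": "/usr/local/bin/my-model-server",
          "args": ["--port", "8000"],
          "cwd": "/var/lib/my-model-server",
          "env": { "MODEL_DIR": "/models" },
          "healthUrl": "http://127.0.0.1:8000/v1/models",
          "readyTimeoutMs": 120000,
          "idleStopMs": 600000
        }
      }
    }
  }
}
```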
Fields
- `command`: absolute executable path. Shell lookup is not used.
- `args`: process arguments. No shell expansion, pipes, globbing, or quoting rules are applied.
- `cwd`: optional working directory for the process.
- `env`: optional environment variables merged over the OpenClaw process environment.
- `healthUrl`: readiness URL. If omitted, OpenClaw appends `/models` to `baseUrl`, so `http://127.0.0.1:8000/v1` becomes `http://127.0.0.1:8000/v1/models`.
- `readyTimeoutMs`: startup readiness deadline. Default: `120000`.
- `idleStopMs`: idle shutdown delay for OpenClaw-started processes. `0` or omitted keeps the process alive until OpenClaw exits.
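The `healthUrl` default described above can be sketched as a one-line derivation (hypothetical helper name; the trailing-slash trim is an assumption about edge-case handling):

```python
def default_health_url(base_url: str) -> str:
    # When healthUrl is omitted, append /models to baseUrl.
    # Trimming a trailing slash is assumed, not documented.
    return base_url.rstrip("/") + "/models"

print(default_health_url("http://127.0.0.1:8000/v1"))
# → http://127.0.0.1:8000/v1/models
```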
Inferrs example
Inferrs is a custom OpenAI-compatible `/v1` backend, so the same local service
API works with the inferrs provider entry.
Set `command` to the result of `which inferrs` on the machine running
OpenClaw.
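A sketch of a matching provider entry, assuming a JSON config; the install path, subcommand, and port are assumptions for illustration, not documented inferrs flags:

```json
{
  "models": {
    "providers": {
      "inferrs": {
        "baseUrl": "http://127.0.0.1:8080/v1",
        "localService": {
          "command": "/usr/local/bin/inferrs",
          "args": ["serve", "--port", "8080"],
          "idleStopMs": 300000
        }
      }
    }
  }
}
```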
ds4 example
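No snippet survives in this section. As a hedged placeholder, assuming ds4 is another locally started OpenAI-compatible server (binary path, flags, port, and timeout are all assumptions):

```json
{
  "models": {
    "providers": {
      "ds4": {
        "baseUrl": "http://127.0.0.1:9000/v1",
        "localService": {
          "command": "/usr/local/bin/ds4",
          "args": ["--port", "9000"],
          "readyTimeoutMs": 180000
        }
      }
    }
  }
}
```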
Operational notes
- One OpenClaw process manages the child it started. Another OpenClaw process that sees the same health URL already live will reuse it without adopting it.
- Startup is serialized per provider command and argument set, so concurrent requests do not spawn duplicate servers for the same config.
- Active streaming responses hold a lease; idle shutdown waits until response body handling is complete.
- Use `timeoutSeconds` on slow local providers so cold starts and long generations do not hit the default model request timeout.
- Use an explicit `healthUrl` if your server exposes readiness somewhere other than `/v1/models`.
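The last two notes combined, as a provider-entry fragment; this assumes `timeoutSeconds` sits at the provider level alongside `localService`, and the `/healthz` path and values are illustrative:

```json
{
  "timeoutSeconds": 600,
  "baseUrl": "http://127.0.0.1:8000/v1",
  "localService": {
    "command": "/usr/local/bin/my-model-server",
    "args": ["--port", "8000"],
    "healthUrl": "http://127.0.0.1:8000/healthz"
  }
}
```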
Related
- Local models: local model setup, provider choices, and safety guidance.
- Inferrs: run OpenClaw through the inferrs OpenAI-compatible local server.