ds4 serves DeepSeek V4 Flash from a local Metal backend with an OpenAI-compatible `/v1` API. OpenClaw connects to ds4 through the generic `openai-completions` provider family.
ds4 is not a bundled OpenClaw provider plugin. Configure it under `models.providers.ds4`, then select `ds4/deepseek-v4-flash`.
- Provider id: `ds4`
- Plugin: none
- API: OpenAI-compatible Chat Completions (`openai-completions`)
- Suggested base URL: `http://127.0.0.1:18000/v1`
- Model id: `deepseek-v4-flash`
- Tool calls: supported through OpenAI-style `tools` and `tool_calls`
- Reasoning: DeepSeek-style `thinking` and `reasoning_effort`
## Requirements

- macOS with Metal support.
- A working ds4 checkout with `ds4-server` and the DeepSeek V4 Flash GGUF file.
- Enough memory for the context you choose. Larger `--ctx` values allocate more KV memory when the server starts.
## Quickstart

Add the OpenClaw provider config from Full config, then run a one-shot model check (see Test).
## Full config

Use this config when ds4 is already running on `127.0.0.1:18000`. Keep `contextWindow` aligned with the `ds4-server --ctx` value. Keep `maxTokens` aligned with `--tokens` unless you intentionally want OpenClaw to request less output than the server default.
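A sketch of the provider entry, using only the identifiers named on this page (`models.providers.ds4`, `baseUrl`, `contextWindow`, `maxTokens`, the `openai-completions` API family). The exact nesting and the `api` and `id` key names are assumptions, and the numbers are placeholders to match to your `--ctx` and `--tokens` values:

```json
{
  "models": {
    "providers": {
      "ds4": {
        "api": "openai-completions",
        "baseUrl": "http://127.0.0.1:18000/v1",
        "models": [
          {
            "id": "deepseek-v4-flash",
            "contextWindow": 131072,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```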
## On-demand startup

OpenClaw can start ds4 only when a `ds4/...` model is selected. Add `localService` to the same provider entry:
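A minimal sketch of the addition inside the `models.providers.ds4` entry. The `command` value is a placeholder absolute path; only the two `localService` fields named on this page are shown (see Local model services for the rest):

```json
{
  "localService": {
    "command": "/absolute/path/to/ds4-server",
    "readyTimeoutMs": 300000
  }
}
```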
`command` must be an absolute executable path. Shell lookup and `~` expansion are not used. See Local model services for every `localService` field.
## Think Max

ds4 applies Think Max only when both conditions are true:

- `ds4-server` starts with `--ctx 393216` or higher.
- The request uses `reasoning_effort: "max"` or the equivalent ds4 effort field.
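Assuming the standard OpenAI Chat Completions request shape, a request body that satisfies the second condition might look like the following. Whether `reasoning_effort` is accepted at the top level depends on the ds4 build:

```json
{
  "model": "deepseek-v4-flash",
  "messages": [
    { "role": "user", "content": "Plan a three-step refactor of this module." }
  ],
  "reasoning_effort": "max"
}
```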
## Test

Start with a direct HTTP check. Then run an OpenClaw turn that exercises a tool call and confirm in the result:

- `executionTrace.winnerProvider` is `ds4`
- `executionTrace.winnerModel` is `deepseek-v4-flash`
- `toolSummary.calls` is at least `1`
- `finalAssistantVisibleText` starts with `tool-ok`
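The direct HTTP check can be a plain Chat Completions request, assuming the suggested base URL and model id from this page. It requires a running `ds4-server`:

```shell
# Direct HTTP check against the ds4 OpenAI-compatible endpoint.
# Assumes ds4 is listening on the suggested base URL.
curl -s http://127.0.0.1:18000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v4-flash",
    "messages": [{"role": "user", "content": "Say ok."}]
  }'
```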
## Troubleshooting

### `curl /v1/models` cannot connect
ds4 is not running or not bound to the host and port in `baseUrl`. Start `ds4-server`, then retry:
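A retry probe, assuming the suggested base URL; a successful response lists `deepseek-v4-flash`:

```shell
# Confirm the server answers on the configured baseUrl.
curl -s http://127.0.0.1:18000/v1/models
```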
### 500 prompt exceeds context
The configured `--ctx` is too small for the OpenClaw turn. Raise `ds4-server --ctx`, then update `models.providers.ds4.models[].contextWindow` to match. Full agent turns with tools need substantially more context than a direct one-message curl request.
### Think Max does not activate
ds4 only uses Think Max when `--ctx` is at least 393216 and the request asks for `reasoning_effort: "max"`. Smaller contexts fall back to high reasoning.
### The first request is slow
ds4 has a cold Metal residency and model warmup phase. Use `localService.readyTimeoutMs: 300000` when OpenClaw starts the server on demand.

## Related
- Local model services: Start local model servers on demand before model requests.
- Local models: Choose and operate local model backends.
- Model providers: Configure provider refs, auth, and failover.
- DeepSeek: Native DeepSeek provider behavior and thinking controls.