Experimental features in OpenClaw are opt-in preview surfaces. They sit behind explicit flags because they still need real-world mileage before they earn a stable default or a long-lived public contract. Treat them differently from normal config:
- Keep them off by default unless the related doc tells you to try one.
- Expect shape and behavior to change faster than stable config.
- Prefer the stable path first when one already exists.
- If you are rolling OpenClaw out broadly, test experimental flags in a smaller environment before baking them into a shared baseline.
## Currently documented flags
| Surface | Key | Use it when | More |
|---|---|---|---|
| Local model runtime | agents.defaults.experimental.localModelLean | A smaller or stricter local backend chokes on OpenClaw’s full default tool surface | Local Models |
| Memory search | agents.defaults.memorySearch.experimental.sessionMemory | You want memory_search to index prior session transcripts and accept the extra storage/indexing cost | Memory configuration reference |
| Structured planning tool | tools.experimental.planTool | You want the structured update_plan tool exposed for multi-step work tracking in compatible runtimes and UIs | Gateway configuration reference |
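The dotted keys in the table map onto nested config blocks. A minimal sketch, assuming a JSON-style config file; the exact filename and top-level layout depend on your install, so check the linked references for each flag before copying this shape:

```json
{
  "agents": {
    "defaults": {
      "experimental": { "localModelLean": true },
      "memorySearch": {
        "experimental": { "sessionMemory": true }
      }
    }
  },
  "tools": {
    "experimental": { "planTool": true }
  }
}
```

You would rarely turn on all three at once; each flag is independent and should be enabled only when the "Use it when" condition in the table applies.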
## Local model lean mode
`agents.defaults.experimental.localModelLean: true` is a pressure-release valve for weaker local-model setups. When it is on, OpenClaw drops three default tools (`browser`, `cron`, and `message`) from the agent's tool surface for every turn. Nothing else changes.
### Why these three tools
These three tools have the largest descriptions and the most parameter shapes in the default OpenClaw runtime. On a small-context or stricter OpenAI-compatible backend, that is the difference between:

- Tool schemas fitting cleanly in the prompt vs. crowding out conversation history.
- The model picking the right tool vs. emitting malformed tool calls because there are too many similar-looking schemas.
- The Chat Completions adapter staying inside the server’s structured-output limits vs. tripping a 400 on tool-call payload size.
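The first point is simple arithmetic. The sketch below illustrates it with made-up token counts (these are placeholders, not OpenClaw's real schema sizes): every token a tool schema occupies in the prompt is a token unavailable for conversation history.

```python
# Illustrative only: why large tool schemas squeeze conversation history
# on a small-context backend. All token counts are invented placeholders.

def history_budget(context_window: int, schema_tokens: dict, reply_reserve: int) -> int:
    """Tokens left for conversation history after tool schemas and reply reserve."""
    return context_window - sum(schema_tokens.values()) - reply_reserve

# Hypothetical per-tool schema sizes, in tokens.
full = {"browser": 1800, "cron": 900, "message": 700,
        "read": 300, "write": 300, "edit": 400, "exec": 350}

# Lean mode drops the three largest schemas.
lean = {k: v for k, v in full.items() if k not in ("browser", "cron", "message")}

ctx = 8192       # a small local-model context window
reserve = 1024   # space reserved for the model's reply

print(history_budget(ctx, full, reserve))   # budget with the full tool surface
print(history_budget(ctx, lean, reserve))   # budget with lean mode on
```

With these numbers, lean mode more than doubles the room left for history; on a backend that also enforces structured-output size limits, the effect on reliability is larger still.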
Everything else stays: `read`, `write`, `edit`, `exec`, `apply_patch`, web search/fetch (when configured), memory, and session/agent tools remain available.
### When to turn it on
Enable lean mode when you have already proved the model can talk to the Gateway but full agent turns misbehave. The typical signal chain is:

- `openclaw infer model run --gateway --model <ref> --prompt "Reply with exactly: pong"` succeeds.
- A normal agent turn fails with malformed tool calls, oversized prompts, or the model ignoring its tools.
- Toggling `localModelLean: true` clears the failure.
### When to leave it off
If your backend handles the full default runtime cleanly, leave this off. Lean mode is a workaround, not a default. It exists because some local stacks need a smaller tool surface to behave; hosted models and well-resourced local rigs do not. Lean mode also does not replace `tools.profile`, `tools.allow`/`tools.deny`, or the model `compat.supportsTools: false` escape hatch. If you need a permanently narrower tool surface for a specific agent, prefer those stable knobs over the experimental flag.
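For a deliberate, permanent narrowing, the stable knobs look roughly like this. A sketch assuming `tools.deny` accepts a list of tool names to exclude; the exact shape is documented in the Gateway configuration reference, so verify it there before relying on this:

```json
{
  "tools": {
    "deny": ["browser", "cron", "message"]
  }
}
```

Unlike the experimental flag, a deny list states exactly which tools you are removing and will not change meaning when lean mode's behavior evolves.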
### Enable
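A minimal sketch of turning the flag on, assuming a JSON-style config file (the exact filename and location depend on your install):

```json
{
  "agents": {
    "defaults": {
      "experimental": { "localModelLean": true }
    }
  }
}
```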
To verify: `browser`, `cron`, and `message` should be absent from the agent's tool list when lean mode is on.