Quick start and first-run setup
I am stuck, fastest way to get unstuck
- Claude Code: https://www.anthropic.com/claude-code/
- OpenAI Codex: https://openai.com/codex/
For the hackable (git) install, the relevant installer flag is `--install-method git`.

Tip: ask the agent to plan and supervise the fix (step-by-step), then execute only the necessary commands. That keeps changes small and easier to audit.

If you discover a real bug or fix, please file a GitHub issue or send a PR:

- https://github.com/openclaw/openclaw/issues
- https://github.com/openclaw/openclaw/pulls

Start with these commands (share outputs when asking for help):

- `openclaw status`: quick snapshot of gateway/agent health + basic config.
- `openclaw models status`: checks provider auth + model availability.
- `openclaw doctor`: validates and repairs common config/state issues.

Quick debug loop (first 60 seconds if something is broken): `openclaw status --all`, `openclaw logs --follow`, `openclaw gateway status`, `openclaw health --verbose`.

Install docs: Install, Installer flags, Updating.
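As a minimal sketch, the first-60-seconds debug loop can be run in one pass, using only commands listed in this FAQ:

```shell
# Quick snapshot of gateway/agent health + basic config
openclaw status --all

# Check provider auth + model availability
openclaw models status

# Validate and repair common config/state issues
openclaw doctor

# Gateway-specific state and verbose health output
openclaw gateway status
openclaw health --verbose

# Stream logs while you reproduce the problem
openclaw logs --follow
```

Share the output of these commands when asking for help.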
Heartbeat keeps skipping. What do the skip reasons mean?
- `quiet-hours`: outside the configured active-hours window
- `empty-heartbeat-file`: `HEARTBEAT.md` exists but only contains blank/header-only scaffolding
- `no-tasks-due`: `HEARTBEAT.md` task mode is active but none of the task intervals are due yet
- `alerts-disabled`: all heartbeat visibility is disabled (`showOk`, `showAlerts`, and `useIndicator` are all off)
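A sketch of how the visibility flags behind `alerts-disabled` might sit in config — only `showOk`, `showAlerts`, and `useIndicator` come from the skip reasons above; the surrounding `heartbeat` key path is an assumption, so check your schema:

```json
{
  "heartbeat": {
    "showOk": true,
    "showAlerts": true,
    "useIndicator": true
  }
}
```

If all three are `false`, every heartbeat is skipped with `alerts-disabled`.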
Recommended way to install and set up OpenClaw
Run the onboarding wizard: `pnpm openclaw onboard`.

How do I open the dashboard after onboarding?

Open http://127.0.0.1:18789/ in your browser on the gateway host.
How do I authenticate the dashboard on localhost vs remote?
- Open http://127.0.0.1:18789/.
- If it asks for shared-secret auth, paste the configured token or password into Control UI settings.
- Token source: `gateway.auth.token` (or `OPENCLAW_GATEWAY_TOKEN`).
- Password source: `gateway.auth.password` (or `OPENCLAW_GATEWAY_PASSWORD`).
- If no shared secret is configured yet, generate a token with `openclaw doctor --generate-gateway-token`.
- Tailscale Serve (recommended): keep bind loopback, run `openclaw gateway --tailscale serve`, open `https://<magicdns>/`. If `gateway.auth.allowTailscale` is `true`, identity headers satisfy Control UI/WebSocket auth (no pasted shared secret; assumes a trusted gateway host). HTTP APIs still require shared-secret auth unless you deliberately use private-ingress `none` or trusted-proxy HTTP auth. Bad concurrent Serve auth attempts from the same client are serialized before the failed-auth limiter records them, so the second bad retry can already show "retry later".
- Tailnet bind: run `openclaw gateway --bind tailnet --token "<token>"` (or configure password auth), open `http://<tailscale-ip>:18789/`, then paste the matching shared secret in dashboard settings.
- Identity-aware reverse proxy: keep the Gateway behind a non-loopback trusted proxy, configure `gateway.auth.mode: "trusted-proxy"`, then open the proxy URL.
- SSH tunnel: `ssh -N -L 18789:127.0.0.1:18789 user@host`, then open `http://127.0.0.1:18789/`. Shared-secret auth still applies over the tunnel; paste the configured token or password if prompted.
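The remote options above, as commands (hostnames and the token are placeholders):

```shell
# Option A: Tailscale Serve (keep the Gateway bound to loopback)
openclaw gateway --tailscale serve
# then open https://<magicdns>/

# Option B: bind to the tailnet with shared-secret auth
openclaw gateway --bind tailnet --token "<token>"
# then open http://<tailscale-ip>:18789/ and paste the token in dashboard settings

# Option C: plain SSH tunnel from your workstation to the gateway host
ssh -N -L 18789:127.0.0.1:18789 user@host
# then open http://127.0.0.1:18789/ locally
```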
Why are there two exec approval configs for chat approvals?
- `approvals.exec`: forwards approval prompts to chat destinations.
- `channels.<channel>.execApprovals`: makes that channel act as a native approval client for exec approvals.
- If the chat already supports commands and replies, same-chat `/approve` works through the shared path.
- If a supported native channel can infer approvers safely, OpenClaw now auto-enables DM-first native approvals when `channels.<channel>.execApprovals.enabled` is unset or `"auto"`.
- When native approval cards/buttons are available, that native UI is the primary path; the agent should only include a manual `/approve` command if the tool result says chat approvals are unavailable or manual approval is the only path.
- Use `approvals.exec` only when prompts must also be forwarded to other chats or explicit ops rooms.
- Use `channels.<channel>.execApprovals.target: "channel"` or `"both"` only when you explicitly want approval prompts posted back into the originating room/topic.
- Plugin approvals are separate again: they use same-chat `/approve` by default, optional `approvals.plugin` forwarding, and only some native channels keep plugin-approval-native handling on top.
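A sketch combining the two knobs for one channel — the key names `approvals.exec`, `channels.<channel>.execApprovals.enabled`, and `.target` come from the bullets above; the surrounding shape and the `"ops-room"` value are assumptions for illustration:

```json
{
  "approvals": {
    "exec": {
      "forward": ["ops-room"]
    }
  },
  "channels": {
    "telegram": {
      "execApprovals": {
        "enabled": "auto",
        "target": "channel"
      }
    }
  }
}
```

With `enabled: "auto"`, supported channels auto-enable DM-first native approvals; set `target` only when you deliberately want prompts back in the originating room.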
What runtime do I need?
Node >= 22 with pnpm is recommended. Bun is not recommended for the Gateway.

Does it run on Raspberry Pi?

Yes — see the Raspberry Pi tips below.
Any tips for Raspberry Pi installs?
- Use a 64-bit OS and keep Node >= 22.
- Prefer the hackable (git) install so you can see logs and update fast.
- Start without channels/skills, then add them one by one.
- If you hit weird binary issues, it is usually an ARM compatibility problem.
It is stuck on wake up my friend / onboarding will not hatch. What now?
- Restart the Gateway:
- Check status + auth:
- If it still hangs, run:
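The commands for those steps are elided on this page; a plausible sequence, using commands that appear elsewhere in this FAQ — the exact `gateway restart` subcommand is an assumption, so check `openclaw gateway --help`:

```shell
# Restart the Gateway (subcommand name is an assumption)
openclaw gateway restart

# Check status + auth
openclaw status
openclaw models status

# If it still hangs, validate and repair config/state
openclaw doctor
```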
Can I migrate my setup to a new machine (Mac mini) without redoing onboarding?
- Install OpenClaw on the new machine.
- Copy `$OPENCLAW_STATE_DIR` (default: `~/.openclaw`) from the old machine.
- Copy your workspace (default: `~/.openclaw/workspace`).
- Run `openclaw doctor` and restart the Gateway service.
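Assuming plain file copies are sufficient (as the steps state), the migration can be sketched from the new machine over SSH:

```shell
# Copy the state dir (default ~/.openclaw) from the old machine;
# the default workspace (~/.openclaw/workspace) lives inside it,
# so copy it separately only if you relocated it
scp -r olduser@oldhost:~/.openclaw ~/

# Validate the copied config/state, then restart the Gateway service
openclaw doctor
```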
Sessions and agent state live under `~/.openclaw/` (for example `~/.openclaw/agents/<agentId>/sessions/`).

Related: Migrating, Where things live on disk, Agent workspace, Doctor, Remote mode.
Where do I see what is new in the latest version?
Cannot access docs.openclaw.ai (SSL error)
This is typically caused by reaching docs.openclaw.ai through a network filter such as Xfinity Advanced Security. Disable it or allowlist docs.openclaw.ai, then retry.
Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status

If you still can’t reach the site, the docs are mirrored on GitHub:
https://github.com/openclaw/openclaw/tree/main/docs
Difference between stable and beta
- `latest` = stable
- `beta` = early build for testing

Betas are promoted to `latest`. Maintainers can also publish straight to `latest` when needed. That’s why beta and stable can point at the same version after promotion.

See what changed:
https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md

For install one-liners and the difference between beta and dev, see the accordion below.
How do I install the beta version and what is the difference between beta and dev?
Beta is published under the npm dist-tag `beta` (may match `latest` after promotion). Dev is the moving head of `main` (git); when published, it uses the npm dist-tag `dev`.

One-liners (macOS/Linux):
How do I try the latest bits?
- Dev channel (git checkout): tracks the `main` branch and updates from source.
- Hackable install (from the installer site):
How long does install and onboarding usually take?
- Install: 2-5 minutes
- Onboarding: 5-15 minutes depending on how many channels/models you configure
Installer stuck? How do I get more feedback?
Windows install says git not found or openclaw not recognized
- Install Git for Windows and make sure `git` is on your PATH.
- Close and reopen PowerShell, then re-run the installer.

If `openclaw` is not recognized, your npm global bin folder is not on PATH:

- Check the npm global prefix path.
- Add that directory to your user PATH (no `\bin` suffix needed on Windows; on most systems it is `%AppData%\npm`).
- Close and reopen PowerShell after updating PATH.
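The path check above is conventionally done with npm itself (standard npm commands, not OpenClaw-specific):

```shell
# Print the npm global prefix; on Windows the global bin dir is this
# path itself (usually %AppData%\npm), on Unix it is <prefix>/bin
npm config get prefix

# List globally installed packages to confirm openclaw is among them
npm ls -g --depth=0
```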
Windows exec output shows garbled Chinese text - what should I do?
Symptoms:

- `system.run`/exec output renders Chinese as mojibake
- The same command looks fine in another terminal profile
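Mojibake like this usually means the console code page and the program's output encoding disagree. A common Windows mitigation (a general assumption, not an OpenClaw-documented fix) is forcing the console to UTF-8 in the shell profile that exec uses:

```shell
# Windows cmd/PowerShell: switch the console code page to UTF-8
# before running the command whose output is garbled
chcp 65001
```

If the other (working) terminal profile already uses UTF-8, matching its code page in the exec shell is usually enough.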
The docs did not answer my question - how do I get a better answer?
How do I install OpenClaw on Linux?
- Linux quick path + service install: Linux.
- Full walkthrough: Getting Started.
- Installer + updates: Install & updates.
How do I install OpenClaw on a VPS?
Where are the cloud/VPS install guides?
- VPS hosting (all providers in one place)
- Fly.io
- Hetzner
- exe.dev
Can I ask OpenClaw to update itself?
What does onboarding actually do?
`openclaw onboard` is the recommended setup path. In local mode it walks you through:

- Model/auth setup (provider OAuth, API keys, Anthropic setup-token, plus local model options such as LM Studio)
- Workspace location + bootstrap files
- Gateway settings (bind/port/auth/tailscale)
- Channels (WhatsApp, Telegram, Discord, Mattermost, Signal, iMessage, plus bundled channel plugins like QQ Bot)
- Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
- Health checks and skills selection
Do I need a Claude or OpenAI subscription to run this?
- Anthropic API key: normal Anthropic API billing
- Claude CLI / Claude subscription auth in OpenClaw: Anthropic staff told us this usage is allowed again, and OpenClaw is treating `claude -p` usage as sanctioned for this integration unless Anthropic publishes a new policy
Can I use Claude Max subscription without an API key?
Yes. OpenClaw treats `claude -p` usage as sanctioned for this integration unless Anthropic publishes a new policy. If you want the most predictable server-side setup, use an Anthropic API key instead.

Do you support Claude subscription auth (Claude Pro or Max)?
Do you support Claude subscription auth (Claude Pro or Max)?
Yes. OpenClaw treats `claude -p` usage as sanctioned for this integration unless Anthropic publishes a new policy. Anthropic setup-token is still available as a supported OpenClaw token path, but OpenClaw now prefers Claude CLI reuse and `claude -p` when available. For production or multi-user workloads, Anthropic API key auth is still the safer, more predictable choice. If you want other subscription-style hosted options in OpenClaw, see OpenAI, Qwen / Model Cloud, MiniMax, and GLM Models.
Why am I seeing HTTP 429 rate_limit_error from Anthropic?
If the error says "Extra usage is required for long context requests", the request is trying to use Anthropic’s 1M context beta (`context1m: true`). That only works when your credential is eligible for long-context billing (API key billing or the OpenClaw Claude-login path with Extra Usage enabled).

Tip: set a fallback model so OpenClaw can keep replying while a provider is rate-limited.

See Models, OAuth, and /gateway/troubleshooting#anthropic-429-extra-usage-required-for-long-context.
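A sketch of the fallback-model tip — the `model`/`fallback` key names and the Anthropic model ref are assumptions (only `openai/gpt-5.4` appears elsewhere on this page); check the Models docs for the real schema:

```json
{
  "model": "anthropic/claude-opus",
  "fallback": ["openai/gpt-5.4"]
}
```

With a fallback configured, a 429 from the primary provider degrades to the fallback model instead of a failed reply.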
Is AWS Bedrock supported?
Yes — discovery can register the `amazon-bedrock` provider; otherwise you can explicitly enable `plugins.entries.amazon-bedrock.config.discovery.enabled` or add a manual provider entry. See Amazon Bedrock and Model providers. If you prefer a managed key flow, an OpenAI-compatible proxy in front of Bedrock is still a valid option.

How does Codex auth work?
How does Codex auth work?
Use `openai-codex/gpt-5.5` for Codex OAuth through the default PI runner. Use `openai/gpt-5.4` for current direct OpenAI API-key access. GPT-5.5 direct API-key access is supported once OpenAI enables it on the public API; today GPT-5.5 uses subscription/OAuth via `openai-codex/gpt-5.5`, or native Codex app-server runs with `openai/gpt-5.5` and `embeddedHarness.runtime: "codex"`. See Model providers and Onboarding (CLI).

Why does OpenClaw still mention openai-codex?
Why does OpenClaw still mention openai-codex?
`openai-codex` is the provider and auth-profile id for ChatGPT/Codex OAuth. It is also the explicit PI model prefix for Codex OAuth:

- `openai/gpt-5.4` = current direct OpenAI API-key route in PI
- `openai/gpt-5.5` = future direct API-key route once OpenAI enables GPT-5.5 on the API
- `openai-codex/gpt-5.5` = Codex OAuth route in PI
- `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` = native Codex app-server route
- `openai-codex:...` = auth profile id, not a model ref
Direct API-key auth uses `OPENAI_API_KEY`. If you want ChatGPT/Codex subscription auth, sign in with `openclaw models auth login --provider openai-codex` and use `openai-codex/*` model refs for PI runs.

Why can Codex OAuth limits differ from ChatGPT web?
Why can Codex OAuth limits differ from ChatGPT web?
Codex OAuth entitlements are reported by `openclaw models status`, but OpenClaw does not invent or normalize ChatGPT-web entitlements into direct API access. If you want the direct OpenAI Platform billing/limit path, use `openai/*` with an API key.

Do you support OpenAI subscription auth (Codex OAuth)?
Do you support OpenAI subscription auth (Codex OAuth)?
How do I set up Gemini CLI OAuth?
Gemini CLI OAuth is provided by the `google` plugin, configured in `openclaw.json`.

Steps:

- Install Gemini CLI locally so `gemini` is on `PATH`
  - Homebrew: `brew install gemini-cli`
  - npm: `npm install -g @google/gemini-cli`
- Enable the plugin: `openclaw plugins enable google`
- Login: `openclaw models auth login --provider google-gemini-cli --set-default`
- Default model after login: `google-gemini-cli/gemini-3-flash-preview`
- If requests fail, set `GOOGLE_CLOUD_PROJECT` or `GOOGLE_CLOUD_PROJECT_ID` on the gateway host
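The steps above as commands (all taken from the list; the project id is a placeholder):

```shell
# 1) Install Gemini CLI so `gemini` is on PATH
brew install gemini-cli              # Homebrew
# or: npm install -g @google/gemini-cli

# 2) Enable the plugin
openclaw plugins enable google

# 3) Log in and set as default provider
openclaw models auth login --provider google-gemini-cli --set-default

# 4) If requests fail, set the project on the gateway host
export GOOGLE_CLOUD_PROJECT=<your-project-id>
```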
Is a local model OK for casual chats?
How do I keep hosted model traffic in a specific region?
Pick a region-scoped provider and keep `models.mode: "merge"` so fallbacks stay available while respecting the regioned provider you select.

Do I have to buy a Mac Mini to install this?
Do I have to buy a Mac Mini to install this?
Do I need a Mac mini for iMessage support?
- Run the Gateway on Linux/VPS, and run the BlueBubbles server on any Mac signed into Messages.
- Run everything on the Mac if you want the simplest single-machine setup.
If I buy a Mac mini to run OpenClaw, can I connect it to my MacBook Pro?
Yes. Pair the MacBook Pro to the Gateway as a node so the agent can `system.run` on that device.

Common pattern:

- Gateway on the Mac mini (always-on).
- MacBook Pro runs the macOS app or a node host and pairs to the Gateway.
- Use `openclaw nodes status` / `openclaw nodes list` to see it.
Can I use Bun?

Bun is not recommended for the Gateway; use Node (>= 22) with pnpm.
Telegram: what goes in allowFrom?
`channels.telegram.allowFrom` is the human sender’s Telegram user ID (numeric). It is not the bot username.

Setup asks for numeric user IDs only. If you already have legacy @username entries in config, `openclaw doctor --fix` can try to resolve them.

Safer (no third-party bot):

- DM your bot, then run `openclaw logs --follow` and read `from.id`.
- DM your bot, then call `https://api.telegram.org/bot<bot_token>/getUpdates` and read `message.from.id`.

Third-party option:

- DM @userinfobot or @getidsbot.
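The getUpdates route above, as a curl call against the standard Telegram Bot API (replace `<bot_token>`; you must have DM'd the bot recently):

```shell
# Fetch recent updates for your bot and look for message.from.id
curl -s "https://api.telegram.org/bot<bot_token>/getUpdates"

# With jq installed, extract the numeric sender IDs directly
curl -s "https://api.telegram.org/bot<bot_token>/getUpdates" \
  | jq '.result[].message.from.id'
```

Put the numeric ID (not the @username) into `channels.telegram.allowFrom`.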
Can multiple people use one WhatsApp number with different OpenClaw instances?
Yes. Route each person’s DM (`kind: "direct"`, sender E.164 like `+15551234567`) to a different `agentId`, so each person gets their own workspace and session store. Replies still come from the same WhatsApp account, and DM access control (`channels.whatsapp.dmPolicy` / `channels.whatsapp.allowFrom`) is global per WhatsApp account. See Multi-Agent Routing and WhatsApp.
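A sketch of per-sender routing — only `kind: "direct"`, the E.164 sender form, `agentId`, and the `channels.whatsapp.dmPolicy` / `allowFrom` keys come from this answer; the `routing` array shape and values are assumptions, so check Multi-Agent Routing for the real schema:

```json
{
  "routing": [
    { "channel": "whatsapp", "kind": "direct", "from": "+15551234567", "agentId": "alice" },
    { "channel": "whatsapp", "kind": "direct", "from": "+15557654321", "agentId": "bob" }
  ],
  "channels": {
    "whatsapp": {
      "dmPolicy": "allowlist",
      "allowFrom": ["+15551234567", "+15557654321"]
    }
  }
}
```

Note the access-control keys stay global per WhatsApp account even when routing splits senders across agents.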
Can I run a "fast chat" agent and an "Opus for coding" agent?
Does Homebrew work on Linux?
Yes. The Gateway service PATH includes `/home/linuxbrew/.linuxbrew/bin` (or your brew prefix) so brew-installed tools resolve in non-login shells. Recent builds also prepend common user bin dirs on Linux systemd services (for example `~/.local/bin`, `~/.npm-global/bin`, `~/.local/share/pnpm`, `~/.bun/bin`) and honor `PNPM_HOME`, `NPM_CONFIG_PREFIX`, `BUN_INSTALL`, `VOLTA_HOME`, `ASDF_DATA_DIR`, `NVM_DIR`, and `FNM_DIR` when set.
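If a brew-installed tool still does not resolve in your own shell, the standard Homebrew fix is loading its environment (`brew shellenv` is a real brew command; `ffmpeg` is just an example tool):

```shell
# Put Homebrew's bin dirs on PATH for the current shell
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

# Verify the tool now resolves
command -v ffmpeg
```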
Difference between the hackable git install and npm install
- Hackable (git) install: full source checkout, editable, best for contributors. You run builds locally and can patch code/docs.
- npm install: global CLI install, no repo, best for “just run it.” Updates come from npm dist-tags.
Can I switch between npm and git installs later?
Yes. Your state dir (`~/.openclaw`) and workspace (`~/.openclaw/workspace`) stay untouched. From npm to git: re-run the installer with `--install-method git` (add `--repair` in automation). Backup tips: see Backup strategy.

Should I run the Gateway on my laptop or a VPS?
Should I run the Gateway on my laptop or a VPS?
Laptop:

- Pros: no server cost, direct access to local files, live browser window.
- Cons: sleep/network drops = disconnects, OS updates/reboots interrupt, must stay awake.

VPS:

- Pros: always-on, stable network, no laptop sleep issues, easier to keep running.
- Cons: often run headless (use screenshots), remote file access only, you must SSH for updates.
How important is it to run OpenClaw on a dedicated machine?
- Dedicated host (VPS/Mac mini/Pi): always-on, fewer sleep/reboot interruptions, cleaner permissions, easier to keep running.
- Shared laptop/desktop: totally fine for testing and active use, but expect pauses when the machine sleeps or updates.
What are the minimum VPS requirements and recommended OS?
- Absolute minimum: 1 vCPU, 1GB RAM, ~500MB disk.
- Recommended: 1-2 vCPU, 2GB RAM or more for headroom (logs, media, multiple channels). Node tools and browser automation can be resource hungry.
Can I run OpenClaw in a VM and what are the requirements?
- Absolute minimum: 1 vCPU, 1GB RAM.
- Recommended: 2GB RAM or more if you run multiple channels, browser automation, or media tools.
- OS: Ubuntu LTS or another modern Debian/Ubuntu.
Related
- FAQ — the main FAQ (models, sessions, gateway, security, more)
- Install overview
- Getting started
- Troubleshooting