# OpenClaw on Kubernetes
A minimal starting point for running OpenClaw on Kubernetes — not a production-ready deployment. It covers the core resources and is meant to be adapted to your environment.

## Why not Helm?
OpenClaw is a single container with some config files. The interesting customization is in agent content (markdown files, skills, config overrides), not infrastructure templating. Kustomize handles overlays without the overhead of a Helm chart. If your deployment grows more complex, a Helm chart can be layered on top of these manifests.

## What you need
- A running Kubernetes cluster (AKS, EKS, GKE, k3s, kind, OpenShift, etc.)
- `kubectl` connected to your cluster
- An API key for at least one model provider
## Quick start
`./scripts/k8s/deploy.sh --show-token` prints the token after deploy.
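As a sketch, the quick start amounts to exporting a provider key and running the deploy script. The variable name below is an assumption — use whichever provider key you have:

```shell
# Export a provider key first (ANTHROPIC_API_KEY is an assumed example;
# any supported provider's key variable works).
export ANTHROPIC_API_KEY="sk-..."

# Deploy and print the gateway token for local testing.
./scripts/k8s/deploy.sh --show-token
```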
## Local testing with Kind
If you don’t have a cluster, create one locally with Kind, then run `./scripts/k8s/deploy.sh`.
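A minimal sketch using the standard `kind` CLI (the cluster name is arbitrary):

```shell
# Create a local single-node cluster; kubectl's context switches to it automatically.
kind create cluster --name openclaw

# Then deploy as usual.
./scripts/k8s/deploy.sh
```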
## Step by step
### 1) Deploy
Option A — API key in environment (one step). Add `--show-token` to either command if you want the token printed to stdout for local testing.
### 2) Access the gateway
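The included setup targets `kubectl port-forward` (see the architecture notes below). A sketch, assuming the namespace and deployment are both named `openclaw`:

```shell
# Forward the gateway's loopback port to your machine.
kubectl -n openclaw port-forward deploy/openclaw 18789:18789

# Then open http://127.0.0.1:18789 in a browser.
```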
## What gets deployed
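To inspect what landed in the cluster after a deploy (namespace name assumed to be `openclaw`):

```shell
# Everything lives in a single namespace, so one query covers it.
kubectl -n openclaw get deployments,configmaps,secrets,pods
```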
## Customization
### Agent instructions
Edit the `AGENTS.md` entry in `scripts/k8s/manifests/configmap.yaml` and redeploy:
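Since the manifests are plain Kustomize input, a redeploy can be as simple as re-applying them — or just re-run the deploy script. The `-k` path below assumes a `kustomization.yaml` sits alongside the manifests, and the deployment name is assumed:

```shell
kubectl apply -k scripts/k8s/manifests

# Restart the pod so it picks up the updated ConfigMap.
kubectl -n openclaw rollout restart deploy/openclaw
```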
### Gateway config
Edit `openclaw.json` in `scripts/k8s/manifests/configmap.yaml`. See Gateway configuration for the full reference.
### Add providers
Re-run with additional keys exported:

### Custom namespace
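One way to sketch a namespace override is a small Kustomize overlay that wraps the manifests without editing them. The paths and namespace below are assumptions, not part of the repo:

```yaml
# overlays/my-namespace/kustomization.yaml (hypothetical overlay)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-openclaw          # overrides the namespace on all resources
resources:
  - ../../scripts/k8s/manifests
```

Apply it with `kubectl apply -k overlays/my-namespace`.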
### Custom image
Edit the `image` field in `scripts/k8s/manifests/deployment.yaml`:
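For orientation, the relevant fragment of a Deployment looks like this — the container name and image reference are placeholders, not the project's actual values:

```yaml
spec:
  template:
    spec:
      containers:
        - name: openclaw                      # container name assumed
          image: ghcr.io/example/openclaw:v1  # hypothetical image reference
```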
### Expose beyond port-forward
The default manifests bind the gateway to loopback inside the pod. That works with `kubectl port-forward`, but it does not work with a Kubernetes Service or Ingress path that needs to reach the pod IP.
If you want to expose the gateway through an Ingress or load balancer:
- Change the gateway bind in `scripts/k8s/manifests/configmap.yaml` from loopback to a non-loopback bind that matches your deployment model
- Keep gateway auth enabled and use a proper TLS-terminated entrypoint
- Configure the Control UI for remote access using the supported web security model (for example HTTPS/Tailscale Serve and explicit allowed origins when needed)
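If you do go down that path, a minimal ClusterIP Service to sit behind an Ingress might look like the sketch below. Names, labels, and namespace are assumptions; the target port matches the gateway port used elsewhere in this document and requires the non-loopback bind described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openclaw-gateway   # assumed name
  namespace: openclaw      # assumed namespace
spec:
  selector:
    app: openclaw          # must match the Deployment's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 18789    # gateway port; reachable only with a non-loopback bind
```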
## Re-deploy
## Teardown
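Since everything lives in a single namespace (per the architecture notes), teardown can be a single delete — the namespace name below is an assumption:

```shell
# Removes the Deployment, ConfigMap, and Secret along with the namespace.
kubectl delete namespace openclaw
```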
## Architecture notes
- The gateway binds to loopback inside the pod by default, so the included setup is for `kubectl port-forward`
- No cluster-scoped resources — everything lives in a single namespace
- Security: `readOnlyRootFilesystem`, `drop: ALL` capabilities, non-root user (UID 1000)
- The default config keeps the Control UI on the safer local-access path: loopback bind plus `kubectl port-forward` to `http://127.0.0.1:18789`
- If you move beyond localhost access, use the supported remote model: HTTPS/Tailscale plus the appropriate gateway bind and Control UI origin settings
- Secrets are generated in a temp directory and applied directly to the cluster — no secret material is written to the repo checkout