Frequently asked questions (FAQ)
Quick answers plus deeper hands-on troubleshooting (local development, VPS, multi-agent, OAuth/API keys, model failover). For runtime diagnostics, see Troubleshooting. For the full configuration reference, see Configuration.
Contents
- Quick start and first-run setup
- I'm stuck - what's the fastest way to get unstuck?
- What’s the recommended way to install and set up OpenClaw?
- How do I open the dashboard after onboarding?
- How do I authenticate the dashboard (token) on localhost vs remote?
- What runtime do I need?
- Does it run on Raspberry Pi?
- Any tips for Raspberry Pi installs?
- It is stuck on “wake up my friend” / onboarding will not hatch. What now?
- Can I migrate my setup to a new machine (Mac mini) without redoing onboarding?
- Where do I see what’s new in the latest version?
- I can’t access docs.openclaw.ai (SSL error). What now?
- What’s the difference between stable and beta?
- How do I install the beta version, and what’s the difference between beta and dev?
- How do I try the latest bits?
- How long does install and onboarding usually take?
- Installer stuck? How do I get more feedback?
- Windows install says git not found or openclaw not recognized
- The docs didn’t answer my question - how do I get a better answer?
- How do I install OpenClaw on Linux?
- How do I install OpenClaw on a VPS?
- Where are the cloud/VPS install guides?
- Can I ask OpenClaw to update itself?
- What does the onboarding wizard actually do?
- Do I need a Claude or OpenAI subscription to run this?
- Can I use a Claude Max subscription without an API key?
- How does Anthropic “setup-token” auth work?
- Where do I find an Anthropic setup-token?
- Do you support Claude subscription auth (Claude Code OAuth)?
- Why am I seeing HTTP 429: rate_limit_error from Anthropic?
- Is AWS Bedrock supported?
- How does Codex auth work?
- Do you support OpenAI subscription auth (Codex OAuth)?
- How do I set up Gemini CLI OAuth?
- Is a local model OK for casual chats?
- How do I keep hosted model traffic in a specific region?
- Do I have to buy a Mac Mini to install this?
- Do I need a Mac mini for iMessage support?
- If I buy a Mac mini to run OpenClaw, can I connect it to my MacBook Pro?
- Can I use Bun?
- Telegram: what goes in allowFrom?
- Can multiple people use one WhatsApp number with different OpenClaw instances?
- Can I run a “fast chat” agent and an “Opus for coding” agent?
- Does Homebrew work on Linux?
- What’s the difference between the hackable (git) install and npm install?
- Can I switch between npm and git installs later?
- Should I run the Gateway on my laptop or a VPS?
- How important is it to run OpenClaw on a dedicated machine?
- What are the minimum VPS requirements and recommended OS?
- Can I run OpenClaw in a VM, and what are the requirements?
- What is OpenClaw?
- Skills and automation
- How do I customize skills without keeping the repo dirty?
- Can I load skills from a custom folder?
- How can I use different models for different tasks?
- The bot freezes while doing heavy work. How do I offload that?
- Cron or reminders do not fire. What should I check?
- How do I install skills on Linux?
- Can OpenClaw run tasks on a schedule or continuously in the background?
- Can I run Apple/macOS-only skills from Linux?
- Do you have a Notion or HeyGen integration?
- How do I install the Chrome extension for browser takeover?
- Sandboxing and memory
- Where things live on disk
- Is all data used with OpenClaw saved locally?
- Where does OpenClaw store its data?
- Where should AGENTS.md / SOUL.md / USER.md / MEMORY.md live?
- What’s the recommended backup strategy?
- How do I completely uninstall OpenClaw?
- Can agents work outside the workspace?
- I’m in remote mode - where is the session store?
- Config basics
- What format is the config? Where is it?
- I set gateway.bind: "lan" (or "tailnet") and now nothing listens / the UI says unauthorized
- Why do I need a token on localhost now?
- Do I have to restart after changing config?
- How do I enable web search (and web fetch)?
- config.apply wiped my config. How do I recover and avoid this?
- How do I run a central Gateway with specialized workers across devices?
- Can the OpenClaw browser run headless?
- How do I use Brave for browser control?
- Remote gateways + nodes
- How do commands propagate between Telegram, the gateway, and nodes?
- How can my agent access my computer if the Gateway is hosted remotely?
- Tailscale is connected but I get no replies. What now?
- Can two OpenClaw instances talk to each other (local + VPS)?
- Do I need separate VPSes for multiple agents?
- Is there a benefit to using a node on my personal laptop instead of SSH from a VPS?
- Do nodes run a gateway service?
- Is there an API / RPC way to apply config?
- What’s a minimal “sane” config for a first install?
- How do I set up Tailscale on a VPS and connect from my Mac?
- How do I connect a Mac node to a remote Gateway (Tailscale Serve)?
- Should I install on a second laptop or just add a node?
- Env vars and .env loading
- Sessions & multiple chats
- How do I start a fresh conversation?
- Do sessions reset automatically if I never send /new?
- Is there a way to make a team of OpenClaw instances (one CEO, many agents)?
- Why did context get truncated mid-task? How do I prevent it?
- How do I completely reset OpenClaw but keep it installed?
- I’m getting “context too large” errors - how do I reset or compact?
- Why am I seeing “LLM request rejected: messages.N.content.X.tool_use.input: Field required”?
- Why am I getting heartbeat messages every 30 minutes?
- Do I need to add a “bot account” to a WhatsApp group?
- How do I get the JID of a WhatsApp group?
- Why doesn’t OpenClaw reply in a group?
- Do groups/threads share context with DMs?
- How many workspaces and agents can I create?
- Can I run multiple bots or chats at the same time (Slack), and how should I set that up?
- Models: defaults, selection, aliases, switching
- What is the “default model”?
- What model do you recommend?
- How do I switch models without wiping my config?
- Can I use self-hosted models (llama.cpp, vLLM, Ollama)?
- What do OpenClaw, Flawd, and Krill use for models?
- How do I switch models on the fly (without restarting)?
- Can I use GPT 5.2 for daily tasks and Codex 5.2 for coding?
- Why do I see “Model … is not allowed” and then no reply?
- Why do I see “Unknown model: minimax/MiniMax-M2.1”?
- Can I use MiniMax as my default and OpenAI for complex tasks?
- Are opus / sonnet / gpt built‑in shortcuts?
- How do I define/override model shortcuts (aliases)?
- How do I add models from other providers like OpenRouter or Z.AI?
- Model failover and “All models failed”
- Auth profiles: what they are and how to manage them
- Gateway: ports, “already running”, and remote mode
- What port does the Gateway use?
- Why does openclaw gateway status say Runtime: running but RPC probe: failed?
- Why does openclaw gateway status show Config (cli) and Config (service) as different?
- What does “another gateway instance is already listening” mean?
- How do I run OpenClaw in remote mode (client connects to a Gateway elsewhere)?
- The Control UI says “unauthorized” (or keeps reconnecting). What now?
- I set gateway.bind: "tailnet" but it can’t bind / nothing listens
- Can I run multiple Gateways on the same host?
- What does “invalid handshake” / code 1008 mean?
- Logging and debugging
- Where are logs?
- How do I start/stop/restart the Gateway service?
- I closed my terminal on Windows - how do I restart OpenClaw?
- The Gateway is up but replies never arrive. What should I check?
- “Disconnected from gateway: no reason” - what now?
- Telegram setMyCommands fails with network errors. What should I check?
- TUI shows no output. What should I check?
- How do I completely stop then start the Gateway?
- ELI5: openclaw gateway restart vs openclaw gateway
- What’s the fastest way to get more details when something fails?
- Media & attachments
- Security and access control
- Is it safe to expose OpenClaw to inbound DMs?
- Is prompt injection only a concern for public bots?
- Should my bot have its own email, GitHub account, or phone number?
- Can I give it autonomy over my text messages, and is that safe?
- Can I use cheaper models for personal assistant tasks?
- I ran /start in Telegram but didn’t get a pairing code
- WhatsApp: will it message my contacts? How does pairing work?
- Chat commands, aborting tasks, and “it won’t stop”
First 60 seconds if something’s broken
- Quick status (first check)
Local quick summary: OS + updates, gateway/service reachability, agents/sessions, provider config + runtime issues (when the gateway is reachable).
- Pasteable report (safe to share)
Read-only diagnostics + a log tail (tokens redacted).
- Daemon + port status
Shows supervisor run state vs RPC reachability, the probe target URL, and the config the service is likely using.
- Deep probes
Runs gateway health checks + provider probes (requires a reachable gateway). See Health.
- Follow live logs
If RPC is unavailable, fall back to these; file logs and service logs are separate. See Logging and Troubleshooting.
- Run doctor (repair)
Repairs/migrates config and state + runs health checks. See Doctor.
- Gateway snapshot
Asks a running gateway for a full snapshot (WS only). See Health.
Quick start and first-run setup
I'm stuck - what's the fastest way to get unstuck?
Use a local AI agent that can see your machine. This beats asking in Discord, because most “stuck” states come from local config or environment issues that a remote helper cannot inspect directly.
- Claude Code: https://www.anthropic.com/claude-code/
- OpenAI Codex: https://openai.com/codex/
To switch back to the stable build, re-run the installer with --install-method git.
Tip: have the agent plan and supervise the fix (step by step), then run only the necessary commands. That keeps changes smaller and easier to audit.
If you find a real bug or a fix, please open a GitHub issue or PR:
https://github.com/openclaw/openclaw/issues
https://github.com/openclaw/openclaw/pulls
Start with these commands (share the output when asking for help):
- openclaw status: quick snapshot of gateway/agent health + basic config.
- openclaw models status: checks provider auth + model availability.
- openclaw doctor: validates and repairs common config/state issues.
For more detail: openclaw status --all, openclaw logs --follow, openclaw gateway status, openclaw health --verbose.
Quick debug loop: First 60 seconds if something’s broken.
Install docs: Install, Installer flags, Updating.
What's the recommended way to install and set up OpenClaw?
The repo recommends running from source and using the onboarding wizard: pnpm openclaw onboard.
How do I open the dashboard after onboarding?
The wizard now opens your browser with a tokenized dashboard URL right after onboarding and also prints the full link (with token) in the summary. Keep that tab open; if it didn’t launch, copy/paste the printed URL on the same machine. Tokens stay local to your host - nothing is fetched from the browser.
How do I authenticate the dashboard (token) on localhost vs remote?
Localhost (same machine):
- Open http://127.0.0.1:18789/.
- If it asks for auth, run openclaw dashboard and use the tokenized link (?token=...).
- The token is the same value as gateway.auth.token (or OPENCLAW_GATEWAY_TOKEN) and is stored by the UI after first load.
Remote (Gateway elsewhere):
- Tailscale Serve (recommended): keep bind loopback, run openclaw gateway --tailscale serve, open https://<magicdns>/. If gateway.auth.allowTailscale is true, identity headers satisfy auth (no token).
- Tailnet bind: run openclaw gateway --bind tailnet --token "<token>", open http://<tailscale-ip>:18789/, paste the token in dashboard settings.
- SSH tunnel: ssh -N -L 18789:127.0.0.1:18789 user@host, then open http://127.0.0.1:18789/?token=... from openclaw dashboard.
What runtime do I need?
Node >= 22 is required. pnpm is recommended. Bun is not recommended for the Gateway.
Does it run on Raspberry Pi?
Yes. The Gateway is lightweight - docs list 512MB-1GB RAM, 1 core, and about 500MB disk as enough for personal use, and note that a Raspberry Pi 4 can run it. If you want extra headroom (logs, media, other services), 2GB is recommended, but it’s not a hard minimum. Tip: a small Pi/VPS can host the Gateway, and you can pair nodes on your laptop/phone for local screen/camera/canvas or command execution. See Nodes.
Any tips for Raspberry Pi installs?
Short version: it works, but expect rough edges.
- Use a 64-bit OS and keep Node >= 22.
- Prefer the hackable (git) install so you can see logs and update fast.
- Start without channels/skills, then add them one by one.
- If you hit weird binary issues, it is usually an ARM compatibility problem.
It is stuck on “wake up my friend” / onboarding will not hatch. What now?
That screen depends on the Gateway being reachable and authenticated. The TUI also sends “Wake up, my friend!” automatically on first hatch. If you see that line with no reply and tokens stay at 0, the agent never ran.
- Restart the Gateway: openclaw gateway restart
- Check status + auth: openclaw gateway status
- If it still hangs, run: openclaw doctor
Can I migrate my setup to a new machine (Mac mini) without redoing onboarding?
Yes. Copy the state directory and workspace, then run Doctor once. This keeps your bot “exactly the same” (memory, session history, auth, and channel state) as long as you copy both locations:
- Install OpenClaw on the new machine.
- Copy $OPENCLAW_STATE_DIR (default: ~/.openclaw) from the old machine.
- Copy your workspace (default: ~/.openclaw/workspace).
- Run openclaw doctor and restart the Gateway service.
Session history and per-agent state live under ~/.openclaw/ (for example ~/.openclaw/agents/<agentId>/sessions/).
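The copy itself can be as simple as archiving the state directory. A minimal sketch, assuming the default locations (the archive name and how you transfer it are up to you):

```shell
# On the OLD machine: archive the state dir (the workspace lives inside it by default).
STATE_DIR="${OPENCLAW_STATE_DIR:-$HOME/.openclaw}"
mkdir -p "$STATE_DIR"   # no-op on a real install; keeps this sketch runnable anywhere
tar -czf openclaw-state.tar.gz -C "$(dirname "$STATE_DIR")" "$(basename "$STATE_DIR")"
# Transfer openclaw-state.tar.gz to the new machine, then there:
#   tar -xzf openclaw-state.tar.gz -C "$HOME"
#   openclaw doctor
```

If you relocated your workspace outside the state dir, archive that folder the same way.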
Related: Migrating, Where things live on disk,
Agent workspace, Doctor,
Remote mode.
Where do I see what's new in the latest version?
Check the GitHub changelog: https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md
Newest entries are at the top. If the top section is marked Unreleased, the next dated section is the latest shipped version. Entries are grouped by Highlights, Changes, and Fixes (plus docs/other sections when needed).
I can't access docs.openclaw.ai (SSL error). What now?
Some Comcast/Xfinity connections incorrectly block docs.openclaw.ai via Xfinity
Advanced Security. Disable it or allowlist docs.openclaw.ai, then retry. More
detail: Troubleshooting.
Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status.
If you still can’t reach the site, the docs are mirrored on GitHub:
https://github.com/openclaw/openclaw/tree/main/docs
What's the difference between stable and beta?
Stable and beta are npm dist-tags, not separate lines of code:
- latest = stable
- beta = early build for testing
When a beta build is promoted, the same version is tagged latest. That's why beta and stable can point at the same version.
See what changed: https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md
How do I install the beta version, and what's the difference between beta and dev?
Beta is the npm dist-tag beta (which may match latest). Dev is the moving head of main (git); when published, it uses the npm dist-tag dev.
One-liners for macOS/Linux are listed in Install & updates.
How long does install and onboarding usually take?
Rough guide:
- Install: 2-5 minutes
- Onboarding: 5-15 minutes depending on how many channels/models you configure
How do I try the latest bits?
Two options:
- Dev channel (git checkout): tracks the main branch and updates from source.
- Hackable install (from the installer site): full source checkout you can edit and update quickly.
Installer stuck? How do I get more feedback?
Re-run the installer with verbose output.
Windows install says git not found or openclaw not recognized
Two common Windows issues:
1) npm error spawn git / git not found
- Install Git for Windows and make sure git is on your PATH.
- Close and reopen PowerShell, then re-run the installer.
2) openclaw is not recognized
- Your npm global bin folder is not on PATH.
- Check the path: npm config get prefix
- Ensure <prefix>\bin is on PATH (on most systems it is %AppData%\npm).
- Close and reopen PowerShell after updating PATH.
The docs didn't answer my question - how do I get a better answer?
Use the hackable (git) install so you have the full source and docs locally, then ask your bot (or Claude/Codex) from that folder so it can read the repo and answer precisely.
How do I install OpenClaw on Linux?
Short answer: follow the Linux guide, then run the onboarding wizard.
- Linux quick path + service install: Linux.
- Full walkthrough: Getting Started.
- Installer + updates: Install & updates.
How do I install OpenClaw on a VPS?
Any Linux VPS works. Install on the server, then use SSH/Tailscale to reach the Gateway. Guides: exe.dev, Hetzner, Fly.io. Remote access: Gateway remote.
Where are the cloud/VPS install guides?
We keep a hosting hub with the common providers. Pick one and follow the guide:
- VPS hosting (all providers in one place)
- Fly.io
- Hetzner
- exe.dev
Can I ask OpenClaw to update itself?
Short answer: possible, but not recommended. The update flow can restart the Gateway (which drops the active session), may need a clean git checkout, and can prompt for confirmation. It is safer to run updates from a shell as the operator, using the CLI.
What does the onboarding wizard actually do?
openclaw onboard is the recommended setup path. In local mode it walks you through:
- Model/auth setup (Anthropic setup-token recommended for Claude subscriptions, OpenAI Codex OAuth supported, API keys optional, LM Studio local models supported)
- Workspace location + bootstrap files
- Gateway settings (bind/port/auth/tailscale)
- Providers (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
- Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
- Health checks and skills selection
Do I need a Claude or OpenAI subscription to run this?
No. You can run OpenClaw with API keys (Anthropic/OpenAI/others) or with local-only models so your data stays on your device. Subscriptions (Claude Pro/Max or OpenAI Codex) are optional ways to authenticate those providers. Docs: Anthropic, OpenAI, Local models, Models.
Can I use a Claude Max subscription without an API key?
Yes. You can authenticate with a setup-token instead of an API key. This is the subscription path. Claude Pro/Max subscriptions do not include an API key, so this is the correct approach for subscription accounts. Important: verify with Anthropic that this usage is allowed under their subscription policy and terms. If you want the most explicit, supported path, use an Anthropic API key.
How does Anthropic setup-token auth work?
claude setup-token generates a token string via the Claude Code CLI (it is not available in the web console). You can run it on any machine. Choose Anthropic token (paste setup-token) in the wizard or paste it with openclaw models auth paste-token --provider anthropic. The token is stored as an auth profile for the anthropic provider and used like an API key (no auto-refresh). More detail: OAuth.
Where do I find an Anthropic setup-token?
It is not in the Anthropic Console. The setup-token is generated by the Claude Code CLI on any machine: openclaw models auth setup-token --provider anthropic. If you ran claude setup-token elsewhere, paste it on the gateway host with openclaw models auth paste-token --provider anthropic. See Anthropic.
Do you support Claude subscription auth (Claude Pro/Max)?
Yes - via setup-token. OpenClaw no longer reuses Claude Code CLI OAuth tokens; use a setup-token or an Anthropic API key. Generate the token anywhere and paste it on the gateway host. See Anthropic and OAuth. Note: Claude subscription access is governed by Anthropic’s terms. For production or multi-user workloads, API keys are usually the safer choice.
Why am I seeing HTTP 429: rate_limit_error from Anthropic?
That means your Anthropic quota/rate limit is exhausted for the current window. If you use a Claude subscription (setup-token or Claude Code OAuth), wait for the window to reset or upgrade your plan. If you use an Anthropic API key, check the Anthropic Console for usage/billing and raise limits as needed. Tip: set a fallback model so OpenClaw can keep replying while a provider is rate-limited. See Models and OAuth.
Is AWS Bedrock supported?
Yes - via pi-ai’s Amazon Bedrock (Converse) provider with manual config. You must supply AWS credentials/region on the gateway host and add a Bedrock provider entry in your models config. See Amazon Bedrock and Model providers. If you prefer a managed key flow, an OpenAI-compatible proxy in front of Bedrock is still a valid option.
How does Codex auth work?
OpenClaw supports OpenAI Code (Codex) via OAuth (ChatGPT sign-in). The wizard can run the OAuth flow and will set the default model to openai-codex/gpt-5.2 when appropriate. See Model providers and Wizard.
Do you support OpenAI subscription auth (Codex OAuth)?
Yes. OpenClaw fully supports OpenAI Code (Codex) subscription OAuth. The onboarding wizard can run the OAuth flow for you. See OAuth, Model providers, and Wizard.
How do I set up Gemini CLI OAuth?
Gemini CLI uses a plugin auth flow, not a client id or secret in openclaw.json.
Steps:
- Enable the plugin: openclaw plugins enable google-gemini-cli-auth
- Log in: openclaw models auth login --provider google-gemini-cli --set-default
Is a local model OK for casual chats?
Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the largest MiniMax M2.1 build you can locally (LM Studio) and see /gateway/local-models. Smaller/quantized models increase prompt-injection risk - see Security.
How do I keep hosted model traffic in a specific region?
Pick region-pinned endpoints. OpenRouter exposes US-hosted options for MiniMax, Kimi, and GLM; choose the US-hosted variant to keep data in-region. You can still list Anthropic/OpenAI alongside these by using models.mode: "merge" so fallbacks stay available while respecting the regioned provider you select.
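In openclaw.json that merge behavior is a single key. A minimal fragment - only the models.mode key is taken from the text above; your provider entries go alongside it:

```json
{
  "models": {
    "mode": "merge"
  }
}
```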
Do I have to buy a Mac Mini to install this?
No. OpenClaw runs on macOS or Linux (Windows via WSL2). A Mac mini is optional - some people buy one as an always-on host, but a small VPS, home server, or Raspberry Pi-class box works too. You only need a Mac for macOS-only tools. For iMessage, you can keep the Gateway on Linux and run imsg on any Mac over SSH by pointing channels.imessage.cliPath at an SSH wrapper.
If you want other macOS‑only tools, run the Gateway on a Mac or pair a macOS node.
Docs: iMessage, Nodes, Mac remote mode.
Do I need a Mac mini for iMessage support?
You need some macOS device signed into Messages. It does not have to be a Mac mini - any Mac works. OpenClaw’s iMessage integrations run on macOS (BlueBubbles or imsg), while
the Gateway can run elsewhere.
Common setups:
- Run the Gateway on Linux/VPS, and point channels.imessage.cliPath at an SSH wrapper that runs imsg on the Mac.
- Run everything on the Mac if you want the simplest single-machine setup.
If I buy a Mac mini to run OpenClaw, can I connect it to my MacBook Pro?
Yes. The Mac mini can run the Gateway, and your MacBook Pro can connect as a node (companion device). Nodes don’t run the Gateway - they provide extra capabilities like screen/camera/canvas and system.run on that device.
Common pattern:
- Gateway on the Mac mini (always‑on).
- MacBook Pro runs the macOS app or a node host and pairs to the Gateway.
- Use openclaw nodes status / openclaw nodes list to see it.
Can I use Bun?
Bun is not recommended. We see runtime bugs, especially with WhatsApp and Telegram. Use Node for stable gateways. If you still want to experiment with Bun, do it on a non-production gateway without WhatsApp/Telegram.
Telegram: what goes in allowFrom?
channels.telegram.allowFrom is the human sender’s Telegram user ID (numeric, recommended) or @username. It is not the bot username.
Safer (no third-party bot):
- DM your bot, then run openclaw logs --follow and read from.id.
- DM your bot, then call https://api.telegram.org/bot<bot_token>/getUpdates and read message.from.id.
Quicker (third-party bot):
- DM @userinfobot or @getidsbot.
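Once you have the ID, it goes into config. A minimal openclaw.json fragment - the key path is taken from the text above, while the exact value shape (single value vs list) should be checked against the Telegram channel docs; the ID shown is a placeholder:

```json
{
  "channels": {
    "telegram": {
      "allowFrom": ["123456789"]
    }
  }
}
```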
Can multiple people use one WhatsApp number with different OpenClaw instances?
Yes, via multi-agent routing. Bind each sender’s WhatsApp DM (peer kind: "dm", sender E.164 like +15551234567) to a different agentId, so each person gets their own workspace and session store. Replies still come from the same WhatsApp account, and DM access control (channels.whatsapp.dmPolicy / channels.whatsapp.allowFrom) is global per WhatsApp account. See Multi-Agent Routing and WhatsApp.
Can I run a “fast chat” agent and an “Opus for coding” agent?
Yes. Use multi-agent routing: give each agent its own default model, then bind inbound routes (provider account or specific peers) to each agent. Example config lives in Multi-Agent Routing. See also Models and Configuration.
Does Homebrew work on Linux?
Yes. Homebrew supports Linux (Linuxbrew). Quick setup: install Homebrew, then add /home/linuxbrew/.linuxbrew/bin (or your brew prefix) to PATH so brew-installed tools resolve in non-login shells.
Recent builds also prepend common user bin dirs on Linux systemd services (for example ~/.local/bin, ~/.npm-global/bin, ~/.local/share/pnpm, ~/.bun/bin) and honor PNPM_HOME, NPM_CONFIG_PREFIX, BUN_INSTALL, VOLTA_HOME, ASDF_DATA_DIR, NVM_DIR, and FNM_DIR when set.
What's the difference between the hackable (git) install and npm install?
- Hackable (git) install: full source checkout, editable, best for contributors. You run builds locally and can patch code/docs.
- npm install: global CLI install, no repo, best for “just run it.” Updates come from npm dist‑tags.
Can I switch between npm and git installs later?
Yes. Install the other flavor, then run Doctor so the gateway service points at the new entrypoint. This does not delete your data - it only changes the OpenClaw code install. Your state (~/.openclaw) and workspace (~/.openclaw/workspace) stay untouched.
From npm → git: install the git checkout, then run openclaw doctor (--repair in automation).
Backup tips: see Backup strategy.
Should I run the Gateway on my laptop or a VPS?
Short answer: if you want 24/7 reliability, use a VPS. If you want the lowest friction and you’re okay with sleep/restarts, run it locally.
Laptop (local Gateway):
- Pros: no server cost, direct access to local files, live browser window.
- Cons: sleep/network drops = disconnects, OS updates/reboots interrupt, must stay awake.
VPS (remote Gateway):
- Pros: always-on, stable network, no laptop sleep issues, easier to keep running.
- Cons: often headless (use screenshots), remote file access only, you must SSH for updates.
How important is it to run OpenClaw on a dedicated machine?
Not required, but recommended for reliability and isolation.
- Dedicated host (VPS/Mac mini/Pi): always-on, fewer sleep/reboot interruptions, cleaner permissions, easier to keep running.
- Shared laptop/desktop: totally fine for testing and active use, but expect pauses when the machine sleeps or updates.
What are the minimum VPS requirements and recommended OS?
OpenClaw is lightweight. For a basic Gateway + one chat channel:
- Absolute minimum: 1 vCPU, 1GB RAM, ~500MB disk.
- Recommended: 1-2 vCPU, 2GB RAM or more for headroom (logs, media, multiple channels). Node tools and browser automation can be resource hungry.
Can I run OpenClaw in a VM, and what are the requirements?
Yes. Treat a VM the same as a VPS: it needs to be always on, reachable, and have enough RAM for the Gateway and any channels you enable. Baseline guidance:
- Absolute minimum: 1 vCPU, 1GB RAM.
- Recommended: 2GB RAM or more if you run multiple channels, browser automation, or media tools.
- OS: Ubuntu LTS or another modern Debian/Ubuntu.
What is OpenClaw?
What is OpenClaw in one paragraph?
OpenClaw is a personal AI assistant you run on your own devices. It replies on the messaging surfaces you already use (WhatsApp, Telegram, Slack, Mattermost (plugin), Discord, Google Chat, Signal, iMessage, WebChat) and can also do voice + a live Canvas on supported platforms. The Gateway is the always-on control plane; the assistant is the product.
What's the value proposition?
OpenClaw is not “just a Claude wrapper.” It’s a local-first control plane that lets you run a capable assistant on your own hardware, reachable from the chat apps you already use, with stateful sessions, memory, and tools - without handing control of your workflows to a hosted SaaS. Highlights:
- Your devices, your data: run the Gateway wherever you want (Mac, Linux, VPS) and keep the workspace + session history local.
- Real channels, not a web sandbox: WhatsApp/Telegram/Slack/Discord/Signal/iMessage/etc, plus mobile voice and Canvas on supported platforms.
- Model-agnostic: use Anthropic, OpenAI, MiniMax, OpenRouter, etc., with per‑agent routing and failover.
- Local-only option: run local models so all data can stay on your device if you want.
- Multi-agent routing: separate agents per channel, account, or task, each with its own workspace and defaults.
- Open source and hackable: inspect, extend, and self-host without vendor lock‑in.
I just set it up - what should I do first?
Good first projects:
- Build a website (WordPress, Shopify, or a simple static site).
- Prototype a mobile app (outline, screens, API plan).
- Organize files and folders (cleanup, naming, tagging).
- Connect Gmail and automate summaries or follow ups.
What are the top five everyday use cases for OpenClaw?
Everyday wins usually look like:
- Personal briefings: summaries of inbox, calendar, and news you care about.
- Research and drafting: quick research, summaries, and first drafts for emails or docs.
- Reminders and follow ups: cron or heartbeat driven nudges and checklists.
- Browser automation: filling forms, collecting data, and repeating web tasks.
- Cross device coordination: send a task from your phone, let the Gateway run it on a server, and get the result back in chat.
Can OpenClaw help with lead gen, outreach, ads, and blogs for a SaaS?
Yes for research, qualification, and drafting. It can scan sites, build shortlists, summarize prospects, and write outreach or ad copy drafts. For outreach or ad runs, keep a human in the loop. Avoid spam, follow local laws and platform policies, and review anything before it is sent. The safest pattern is to let OpenClaw draft and you approve. Docs: Security.
What are the advantages vs Claude Code for web development?
OpenClaw is a personal assistant and coordination layer, not an IDE replacement. Use Claude Code or Codex for the fastest direct coding loop inside a repo. Use OpenClaw when you want durable memory, cross-device access, and tool orchestration. Advantages:
- Persistent memory + workspace across sessions
- Multi-platform access (WhatsApp, Telegram, TUI, WebChat)
- Tool orchestration (browser, files, scheduling, hooks)
- Always-on Gateway (run on a VPS, interact from anywhere)
- Nodes for local browser/screen/camera/exec
Skills and automation
How do I customize skills without keeping the repo dirty?
Use managed overrides instead of editing the repo copy. Put your changes in ~/.openclaw/skills/<name>/SKILL.md (or add a folder via skills.load.extraDirs in ~/.openclaw/openclaw.json). Precedence is <workspace>/skills > ~/.openclaw/skills > bundled, so managed overrides win without touching git. Only upstream-worthy edits should live in the repo and go out as PRs.
Can I load skills from a custom folder?
Yes. Add extra directories via skills.load.extraDirs in ~/.openclaw/openclaw.json (lowest precedence). Default precedence remains: <workspace>/skills → ~/.openclaw/skills → bundled → skills.load.extraDirs. clawdhub installs into ./skills by default, which OpenClaw treats as <workspace>/skills.
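For example, to add a shared folder (the path is illustrative; the key path is the one quoted above):

```json
{
  "skills": {
    "load": {
      "extraDirs": ["~/shared-skills"]
    }
  }
}
```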
How can I use different models for different tasks?
Today the supported patterns are:
- Cron jobs: isolated jobs can set a model override per job.
- Sub-agents: route tasks to separate agents with different default models.
- On-demand switch: use /model to switch the current session model at any time.
The bot freezes while doing heavy work. How do I offload that?
Use sub-agents for long or parallel tasks. Sub-agents run in their own session, return a summary, and keep your main chat responsive. Ask your bot to “spawn a sub-agent for this task” or use /subagents.
Use /status in chat to see what the Gateway is doing right now (and whether it is busy).
Token tip: long tasks and sub-agents both consume tokens. If cost is a concern, set a
cheaper model for sub-agents via agents.defaults.subagents.model.
Docs: Sub-agents.
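That sub-agent default lives under the key quoted above. A minimal fragment - the model id is just an example of a cheaper model mentioned elsewhere in this FAQ:

```json
{
  "agents": {
    "defaults": {
      "subagents": {
        "model": "minimax/MiniMax-M2.1"
      }
    }
  }
}
```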
Cron or reminders do not fire. What should I check?
Cron runs inside the Gateway process. If the Gateway is not running continuously, scheduled jobs will not run. Checklist:
- Confirm cron is enabled (cron.enabled) and OPENCLAW_SKIP_CRON is not set.
- Check that the Gateway runs 24/7 (no sleep/restarts).
- Verify timezone settings for the job (--tz vs host timezone).
How do I install skills on Linux?
Use ClawdHub (CLI) or drop skills into your workspace. The macOS Skills UI isn’t available on Linux. Browse skills at https://clawdhub.com and install the ClawdHub CLI with your preferred package manager.
Can OpenClaw run tasks on a schedule or continuously in the background?
Yes. Use the Gateway scheduler:
- Cron jobs for scheduled or recurring tasks (persist across restarts).
- Heartbeat for “main session” periodic checks.
- Isolated jobs for autonomous agents that post summaries or deliver to chats.
metadata.openclaw.os plus required binaries, and skills only appear in the system prompt when they are eligible on the Gateway host. On Linux, darwin-only skills (like imsg, apple-notes, apple-reminders) will not load unless you override the gating.
You have three supported patterns:
Option A - run the Gateway on a Mac (simplest). Run the Gateway where the macOS binaries exist, then connect from Linux in remote mode or over Tailscale. The skills load normally because the Gateway host is macOS.

Option B - use a macOS node (no SSH). Run the Gateway on Linux, pair a macOS node (menubar app), and set Node Run Commands to "Always Ask" or "Always Allow" on the Mac. OpenClaw can treat macOS-only skills as eligible when the required binaries exist on the node. The agent runs those skills via the `nodes` tool. If you choose "Always Ask", approving "Always Allow" in the prompt adds that command to the allowlist.
Option C - proxy macOS binaries over SSH (advanced). Keep the Gateway on Linux, but make the required CLI binaries resolve to SSH wrappers that run on a Mac. Then override the skill to allow Linux so it stays eligible.
- Create an SSH wrapper for the binary (example: `imsg`).
- Put the wrapper on `PATH` on the Linux host (for example `~/bin/imsg`).
- Override the skill metadata (workspace or `~/.openclaw/skills`) to allow Linux.
- Start a new session so the skills snapshot refreshes.

For iMessage specifically, you can instead point `channels.imessage.cliPath` at an SSH wrapper (OpenClaw only needs stdio). See iMessage.
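A minimal wrapper sketch for the `imsg` example (the `mac-host` SSH alias and key-based auth are assumptions):

```shell
#!/usr/bin/env bash
# ~/bin/imsg — resolve "imsg" on Linux to the real binary on a Mac.
# -T keeps stdio clean so OpenClaw can talk to it like a local process.
exec ssh -T mac-host imsg "$@"
```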
Do you have a Notion or HeyGen integration?
Not built-in today. Options:
- Custom skill / plugin: best for reliable API access (Notion and HeyGen both have APIs).
- Browser automation: works without code but is slower and more fragile.

A simple per-client pattern:
- One Notion page per client (context + preferences + active work).
- Ask the agent to fetch that page at the start of a session.

Where do ClawdHub skills get installed?
ClawdHub installs into `./skills` under your current directory (or falls back to your configured OpenClaw workspace); OpenClaw treats that as `<workspace>/skills` on the next session. For shared skills across agents, place them in `~/.openclaw/skills/<name>/SKILL.md`. Some skills expect binaries installed via Homebrew; on Linux that means Linuxbrew (see the Homebrew Linux FAQ entry above). See Skills and ClawdHub.
How do I install the Chrome extension for browser takeover?
Use the built-in installer, then load the unpacked extension in Chrome: `chrome://extensions` → enable "Developer mode" → "Load unpacked" → pick the installer's output folder.
Full guide (including remote Gateway + security notes): Chrome extension
If the Gateway runs on the same machine as Chrome (default setup), you usually do not need anything extra.
If the Gateway runs elsewhere, run a node host on the browser machine so the Gateway can proxy browser actions.
You still need to click the extension button on the tab you want to control (it doesn’t auto-attach).
Sandboxing and memory
Is there a dedicated sandboxing doc?
Yes. See Sandboxing. For Docker-specific setup (full gateway in Docker or sandbox images), see Docker.

Can I keep DMs personal but make groups public (sandboxed) with one agent?
Yes - if your private traffic is DMs and your public traffic is groups. Use `agents.defaults.sandbox.mode: "non-main"` so group/channel sessions (non-main keys) run in Docker, while the main DM session stays on-host. Then restrict what tools are available in sandboxed sessions via `tools.sandbox.tools`.
Setup walkthrough + example config: Groups: personal DMs + public groups
Key config reference: Gateway configuration
How do I bind a host folder into the sandbox?
Set `agents.defaults.sandbox.docker.binds` to `["host:path:mode"]` (e.g., `"/home/user/src:/src:ro"`). Global and per-agent binds merge; per-agent binds are ignored when `scope: "shared"`. Use `:ro` for anything sensitive, and remember that binds bypass the sandbox filesystem walls. See Sandboxing and Sandbox vs Tool Policy vs Elevated for examples and safety notes.
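A hedged sketch (host paths are placeholders):

```json5
{
  agents: {
    defaults: {
      sandbox: {
        docker: {
          binds: [
            "/home/user/src:/src:ro",         // read-only: source the agent may inspect
            "/home/user/scratch:/scratch:rw", // read-write: working area
          ],
        },
      },
    },
  },
}
```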
How does memory work?
OpenClaw memory is just Markdown files in the agent workspace:
- Daily notes in `memory/YYYY-MM-DD.md`
- Curated long-term notes in `MEMORY.md` (main/private sessions only)
Memory keeps forgetting things. How do I make it stick?
Ask the bot to write the fact to memory. Long-term notes belong in `MEMORY.md`; short-term context goes into `memory/YYYY-MM-DD.md`.
This is still an area we are improving. It helps to remind the model to store memories;
it will know what to do. If it keeps forgetting, verify the Gateway is using the same
workspace on every run.
Docs: Memory, Agent workspace.
Does semantic memory search require an OpenAI API key
Only if you use OpenAI embeddings. Codex OAuth covers chat/completions and does not grant embeddings access, so signing in with Codex (OAuth or the Codex CLI login) does not help for semantic memory search. OpenAI embeddings still need a real API key (OPENAI_API_KEY or models.providers.openai.apiKey).
If you don’t set a provider explicitly, OpenClaw auto-selects a provider when it
can resolve an API key (auth profiles, models.providers.*.apiKey, or env vars).
It prefers OpenAI if an OpenAI key resolves, otherwise Gemini if a Gemini key
resolves. If neither key is available, memory search stays disabled until you
configure it. If you have a local model path configured and present, OpenClaw
prefers local.
If you’d rather stay local, set memorySearch.provider = "local" (and optionally
memorySearch.fallback = "none"). If you want Gemini embeddings, set
memorySearch.provider = "gemini" and provide GEMINI_API_KEY (or
memorySearch.remote.apiKey). We support OpenAI, Gemini, or local embedding
models - see Memory for the setup details.
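For example, the local-only setup described above can be sketched as:

```json5
{
  memorySearch: {
    // Embed locally and never fall back to a hosted provider
    provider: "local",
    fallback: "none",
  },
}
```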
Does memory persist forever? What are the limits?
Memory files live on disk and persist until you delete them. The limit is your storage, not the model. The session context is still limited by the model context window, so long conversations can compact or truncate. That is why memory search exists - it pulls only the relevant parts back into context. Docs: Memory, Context.

Where things live on disk
Is all data used with OpenClaw saved locally?
No - OpenClaw's state is local, but external services still see what you send them.
- Local by default: sessions, memory files, config, and workspace live on the Gateway host (`~/.openclaw` + your workspace directory).
- Remote by necessity: messages you send to model providers (Anthropic/OpenAI/etc.) go to their APIs, and chat platforms (WhatsApp/Telegram/Slack/etc.) store message data on their servers.
- You control the footprint: using local models keeps prompts on your machine, but channel traffic still goes through the channel's servers.
Where does OpenClaw store its data?
Everything lives under `$OPENCLAW_STATE_DIR` (default: `~/.openclaw`):
| Path | Purpose |
|---|---|
| `$OPENCLAW_STATE_DIR/openclaw.json` | Main config (JSON5) |
| `$OPENCLAW_STATE_DIR/credentials/oauth.json` | Legacy OAuth import (copied into auth profiles on first use) |
| `$OPENCLAW_STATE_DIR/agents/<agentId>/agent/auth-profiles.json` | Auth profiles (OAuth + API keys) |
| `$OPENCLAW_STATE_DIR/agents/<agentId>/agent/auth.json` | Runtime auth cache (managed automatically) |
| `$OPENCLAW_STATE_DIR/credentials/` | Provider state (e.g. `whatsapp/<accountId>/creds.json`) |
| `$OPENCLAW_STATE_DIR/agents/` | Per-agent state (agentDir + sessions) |
| `$OPENCLAW_STATE_DIR/agents/<agentId>/sessions/` | Conversation history & state (per agent) |
| `$OPENCLAW_STATE_DIR/agents/<agentId>/sessions/sessions.json` | Session metadata (per agent) |

Older installs used `~/.openclaw/agent/*` (migrated by `openclaw doctor`).
Your workspace (AGENTS.md, memory files, skills, etc.) is separate and configured via `agents.defaults.workspace` (default: `~/.openclaw/workspace`).
Where should AGENTS.md, SOUL.md, USER.md, and MEMORY.md live?
These files live in the agent workspace, not `~/.openclaw`.
- Workspace (per agent): `AGENTS.md`, `SOUL.md`, `IDENTITY.md`, `USER.md`, `MEMORY.md` (or `memory.md`), `memory/YYYY-MM-DD.md`, optional `HEARTBEAT.md`.
- State dir (`~/.openclaw`): config, credentials, auth profiles, sessions, logs, and shared skills (`~/.openclaw/skills`).

The default workspace is `~/.openclaw/workspace`, configurable via `agents.defaults.workspace`.
What's the recommended backup strategy?
Put your agent workspace in a private git repo and back it up somewhere private (for example a GitHub private repo). This captures memory + AGENTS/SOUL/USER files, and lets you restore the assistant's "mind" later. Do not commit anything under `~/.openclaw` (credentials, sessions, tokens).
If you need a full restore, back up both the workspace and the state directory
separately (see the migration question above).
Docs: Agent workspace.
How do I completely uninstall OpenClaw?
See the dedicated guide: Uninstall.

Can agents work outside the workspace?
Yes. The workspace is the default cwd and memory anchor, not a hard sandbox. Relative paths resolve inside the workspace, but absolute paths can access other host locations unless sandboxing is enabled. If you need isolation, use `agents.defaults.sandbox` or per-agent sandbox settings. If you
want a repo to be the default working directory, point that agent’s
workspace to the repo root. The OpenClaw repo is just source code; keep the
workspace separate unless you intentionally want the agent to work inside it.
Example (repo as default cwd):
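A sketch of that example (the repo path is a placeholder):

```json5
{
  agents: {
    defaults: {
      // Relative paths and memory files now resolve inside this repo
      workspace: "/home/user/projects/my-repo",
    },
  },
}
```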
I'm in remote mode - where is the session store?
Session state is owned by the gateway host. If you're in remote mode, the session store you care about is on the remote machine, not your local laptop. See Session management.

Config basics
What format is the config? Where is it?
OpenClaw reads an optional JSON5 config from `$OPENCLAW_CONFIG_PATH` (default: `~/.openclaw/openclaw.json`). Related defaults, such as the agent workspace, resolve under `~/.openclaw/workspace`.
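A sketch of what the file can look like (JSON5 allows comments and trailing commas; values are placeholders):

```json5
// ~/.openclaw/openclaw.json
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" },
      workspace: "~/.openclaw/workspace",
    },
  },
}
```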
I set gateway.bind to lan or tailnet and now nothing listens, or the UI says unauthorized
Non-loopback binds require auth. Configure `gateway.auth.mode` + `gateway.auth.token` (or use `OPENCLAW_GATEWAY_TOKEN`).
- `gateway.remote.token` is for remote CLI calls only; it does not enable local gateway auth.
- The Control UI authenticates via `connect.params.auth.token` (stored in app/UI settings). Avoid putting tokens in URLs.
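A hedged sketch of a non-loopback bind (the `mode` value and the token are assumptions; see Gateway configuration for the exact schema):

```json5
{
  gateway: {
    bind: "tailnet",
    auth: {
      mode: "token",
      // Or set OPENCLAW_GATEWAY_TOKEN in the Gateway's environment instead
      token: "replace-with-a-long-random-token",
    },
  },
}
```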
Why do I need a token on localhost now?
The wizard generates a gateway token by default (even on loopback), so local WS clients must authenticate. This blocks other local processes from calling the Gateway. Paste the token into the Control UI settings (or your client config) to connect. If you really want open loopback, remove `gateway.auth` from your config. Doctor can generate a token for you any time: `openclaw doctor --generate-gateway-token`.
Do I have to restart after changing config?
The Gateway watches the config and supports hot-reload: `gateway.reload.mode: "hybrid"` (the default) hot-applies safe changes and restarts for critical ones; `hot`, `restart`, and `off` are also supported.
How do I enable web search and web fetch?
`web_fetch` works without an API key. `web_search` requires a Brave Search API key. Recommended: run `openclaw configure --section web` to store it in `tools.web.search.apiKey`. Environment alternative: set `BRAVE_API_KEY` for the Gateway process.
- If you use allowlists, add `web_search`/`web_fetch` or `group:web`.
- `web_fetch` is enabled by default (unless explicitly disabled).
- Daemons read env vars from `~/.openclaw/.env` (or the service environment).
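The resulting config fragment looks roughly like this (the key value is a placeholder):

```json5
{
  tools: {
    web: {
      search: {
        // Brave Search API key; BRAVE_API_KEY in the env works as an alternative
        apiKey: "BSA-xxxxxxxxxxxx",
      },
    },
  },
}
```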
How do I run a central Gateway with specialized workers across devices?
The common pattern is one Gateway (e.g. a Raspberry Pi) plus nodes and agents:
- Gateway (central): owns channels (Signal/WhatsApp), routing, and sessions.
- Nodes (devices): Macs/iOS/Android connect as peripherals and expose local tools (`system.run`, `canvas`, `camera`).
- Agents (workers): separate brains/workspaces for special roles (e.g. "Hetzner ops", "Personal data").
- Sub-agents: spawn background work from a main agent when you want parallelism.
- TUI: connect to the Gateway and switch agents/sessions.
Can the OpenClaw browser run headless?
Yes. It's a config option; the default is headful (`false`). Headless is more likely to trigger anti-bot checks on some sites. See Browser.
Headless uses the same Chromium engine and works for most automation (forms, clicks, scraping, logins). The main differences:
- No visible browser window (use screenshots if you need visuals).
- Some sites are stricter about automation in headless mode (CAPTCHAs, anti‑bot). For example, X/Twitter often blocks headless sessions.
How do I use Brave for browser control?
Set `browser.executablePath` to your Brave binary (or any Chromium-based browser) and restart the Gateway.
See the full config examples in Browser.
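A sketch for macOS (the path is the usual Brave install location; adjust for your OS):

```json5
{
  browser: {
    // Any Chromium-based binary works here
    executablePath: "/Applications/Brave Browser.app/Contents/MacOS/Brave Browser",
  },
}
```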
Remote gateways + nodes
How do commands propagate between Telegram, the gateway, and nodes?
Telegram messages are handled by the gateway. The gateway runs the agent and only then calls nodes over the Gateway WebSocket when a node tool is needed: Telegram → Gateway → Agent → `node.*` → Node → Gateway → Telegram
Nodes don’t see inbound provider traffic; they only receive node RPC calls.
How can my agent access my computer if the Gateway is hosted remotely?
Short answer: pair your computer as a node. The Gateway runs elsewhere, but it can call `node.*` tools (screen, camera, system) on your local machine over the Gateway WebSocket.
Typical setup:
- Run the Gateway on the always‑on host (VPS/home server).
- Put the Gateway host + your computer on the same tailnet.
- Ensure the Gateway WS is reachable (tailnet bind or SSH tunnel).
- Open the macOS app locally and connect in Remote over SSH mode (or direct tailnet) so it can register as a node.
- Approve the node on the Gateway.

Pairing lets the agent run `system.run` on that machine. Only pair devices you trust, and review Security.
Docs: Nodes, Gateway protocol, macOS remote mode, Security.
Tailscale is connected but I get no replies. What now?
Check the basics:
- Gateway is running: `openclaw gateway status`
- Gateway health: `openclaw status`
- Channel health: `openclaw channels status`
- If you use Tailscale Serve, make sure `gateway.auth.allowTailscale` is set correctly.
- If you connect via SSH tunnel, confirm the local tunnel is up and points at the right port.
- Confirm your allowlists (DM or group) include your account.
Can two OpenClaw instances talk to each other (local + VPS)?
Yes. There is no built-in "bot-to-bot" bridge, but you can wire it up in a few reliable ways:
- Simplest: use a normal chat channel both bots can access (Telegram/Slack/WhatsApp). Have Bot A send a message to Bot B, then let Bot B reply as usual.
- CLI bridge (generic): run a script that calls the other Gateway with `openclaw agent --message ... --deliver`, targeting a chat where the other bot listens. If one bot is on a remote VPS, point your CLI at that remote Gateway via SSH/Tailscale (see Remote access).
Example pattern (run from a machine that can reach the target Gateway):
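A hedged sketch using only the flags quoted above (the message text is a placeholder; remote-Gateway targeting via SSH/Tailscale is configured separately, per Remote access):

```shell
# Send a prompt to the target Gateway's agent and deliver the reply
# into a chat that the other bot is watching.
openclaw agent --message "Bot B: please summarize today's open tasks" --deliver
```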
Do I need separate VPSes for multiple agents?
No. One Gateway can host multiple agents, each with its own workspace, model defaults, and routing. That is the normal setup, and it is much cheaper and simpler than running one VPS per agent. Use separate VPSes only when you need hard isolation (security boundaries) or very different configs that you do not want to share. Otherwise, keep one Gateway and use multiple agents or sub-agents.

Is there a benefit to using a node on my personal laptop instead of SSH from a VPS?
Yes - nodes are the first-class way to reach your laptop from a remote Gateway, and they unlock more than shell access. The Gateway runs on macOS/Linux (Windows via WSL2) and is lightweight (a small VPS or Raspberry Pi-class box is fine; 4 GB RAM is plenty), so a common setup is an always-on host plus your laptop as a node.
- No inbound SSH required. Nodes connect out to the Gateway WebSocket and use device pairing.
- Safer execution controls. `system.run` is gated by node allowlists/approvals on that laptop.
- More device tools. Nodes expose `canvas`, `camera`, and `screen` in addition to `system.run`.
- Local browser automation. Keep the Gateway on a VPS, but run Chrome locally and relay control with the Chrome extension + a node host on the laptop.
Should I install on a second laptop or just add a node?
If you only need local tools (screen/camera/exec) on the second laptop, add it as a node. That keeps a single Gateway and avoids duplicated config. Local node tools are currently macOS-only, but we plan to extend them to other OSes. Install a second Gateway only when you need hard isolation or two fully separate bots. Docs: Nodes, Nodes CLI, Multiple gateways.

Do nodes run a gateway service?
No. Only one gateway should run per host unless you intentionally run isolated profiles (see Multiple gateways). Nodes are peripherals that connect to the gateway (iOS/Android nodes, or macOS "node mode" in the menubar app). For headless node hosts and CLI control, see Node host CLI. Note that a full Gateway restart is required for `gateway.*`, discovery, and `canvasHost` changes.
Is there an API/RPC way to apply config?
Yes. `config.apply` validates and writes the full config, then restarts the Gateway as part of the operation.
config.apply wiped my config. How do I recover and avoid this?
`config.apply` replaces the entire config. If you send a partial object, everything else is removed.
Recover:
- Restore from backup (git or a copied `~/.openclaw/openclaw.json`).
- If you have no backup, re-run `openclaw doctor` and reconfigure channels/models.
- If this was unexpected, file a bug and include your last known config or any backup.
- A local coding agent can often reconstruct a working config from logs or history.
Avoid:
- Use `openclaw config set` for small changes.
- Use `openclaw configure` for interactive edits.
What's a minimal sane config for a first install?
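A hedged sketch assembled from keys referenced elsewhere in this FAQ (values are placeholders):

```json5
// ~/.openclaw/openclaw.json
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" },
      workspace: "~/.openclaw/workspace",
    },
  },
  gateway: {
    auth: { token: "replace-with-a-long-random-token" },
  },
}
```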
How do I set up Tailscale on a VPS and connect from my Mac?
Minimal steps:
1. Install + log in on the VPS.
2. Install + log in on your Mac - use the Tailscale app and sign in to the same tailnet.
3. Enable MagicDNS (recommended) - in the Tailscale admin console, enable MagicDNS so the VPS has a stable name.
4. Use the tailnet hostname:
   - SSH: `ssh user@your-vps.tailnet-xxxx.ts.net`
   - Gateway WS: `ws://your-vps.tailnet-xxxx.ts.net:18789`
How do I connect a Mac node to a remote Gateway (Tailscale Serve)?
Serve exposes the Gateway Control UI + WS. Nodes connect over the same Gateway WS endpoint. Recommended setup:
- Make sure the VPS + Mac are on the same tailnet.
- Use the macOS app in Remote mode (the SSH target can be the tailnet hostname). The app will tunnel the Gateway port and connect as a node.
- Approve the node on the gateway.
Env vars and .env loading
How does OpenClaw load environment variables?
OpenClaw reads env vars from the parent process (shell, launchd/systemd, CI, etc.) and additionally loads:
- `.env` from the current working directory
- a global fallback `.env` from `~/.openclaw/.env` (aka `$OPENCLAW_STATE_DIR/.env`)

Neither `.env` file overrides existing env vars.
You can also define inline env vars in config (applied only if missing from the process env):
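For example (the key is illustrative; values here never override ones already present in the process env):

```json5
{
  env: {
    BRAVE_API_KEY: "BSA-xxxxxxxxxxxx",
  },
}
```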
I started the Gateway via the service and my env vars disappeared. What now?
Two common fixes:
- Put the missing keys in `~/.openclaw/.env` so they're picked up even when the service doesn't inherit your shell env.
- Enable shell import (opt-in convenience): `OPENCLAW_LOAD_SHELL_ENV=1`, `OPENCLAW_SHELL_ENV_TIMEOUT_MS=15000`.
I set COPILOT_GITHUB_TOKEN but models status shows "Shell env: off". Why?
`openclaw models status` reports whether shell env import is enabled. "Shell env: off" does not mean your env vars are missing - it just means OpenClaw won't load your login shell automatically.
If the Gateway runs as a service (launchd/systemd), it won't inherit your shell environment. Fix by doing one of these:
- Put the token in `~/.openclaw/.env`.
- Or enable shell import (`env.shellEnv.enabled: true`).
- Or add it to your config `env` block (applies only if missing).

The recognized variable is `COPILOT_GITHUB_TOKEN` (also `GH_TOKEN` / `GITHUB_TOKEN`).
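A minimal `~/.openclaw/.env` sketch (the token value is a placeholder):

```shell
# Loaded by the Gateway even when launchd/systemd doesn't inherit your shell env
COPILOT_GITHUB_TOKEN=ghp_xxxxxxxxxxxx
```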
See /concepts/model-providers and /environment.
Sessions & multiple chats
How do I start a fresh conversation?
Send `/new` or `/reset` as a standalone message. See Session management.
Do sessions reset automatically if I never send /new?
Yes. Sessions expire after `session.idleMinutes` (default 60). The next
message starts a fresh session id for that chat key. This does not delete
transcripts - it just starts a new session.
Is there a way to make a team of OpenClaw instances - one CEO and many agents?
Yes, via multi-agent routing and sub-agents. You can create one coordinator agent and several worker agents with their own workspaces and models. That said, this is best seen as a fun experiment. It is token heavy and often less efficient than using one bot with separate sessions. The typical model we envision is one bot you talk to, with different sessions for parallel work. That bot can also spawn sub-agents when needed. Docs: Multi-agent routing, Sub-agents, Agents CLI.

Why did context get truncated mid-task? How do I prevent it?
Session context is limited by the model window. Long chats, large tool outputs, or many files can trigger compaction or truncation. What helps:
- Ask the bot to summarize the current state and write it to a file.
- Use `/compact` before long tasks, and `/new` when switching topics.
- Keep important context in the workspace and ask the bot to read it back.
- Use sub-agents for long or parallel work so the main chat stays smaller.
- Pick a model with a larger context window if this happens often.
How do I completely reset OpenClaw but keep it installed?
Use the reset command:
- The onboarding wizard also offers Reset if it sees an existing config. See Wizard.
- If you used profiles (`--profile` / `OPENCLAW_PROFILE`), reset each state dir (defaults are `~/.openclaw-<profile>`).
- Dev reset: `openclaw gateway --dev --reset` (dev-only; wipes dev config + credentials + sessions + workspace).
I'm getting "context too large" errors - how do I reset or compact?
Use one of these:
- Compact (keeps the conversation but summarizes older turns): send `/compact`, or `/compact <instructions>` to guide the summary.
- Reset (fresh session ID for the same chat key): send `/new`.
- Enable or tune session pruning (`agents.defaults.contextPruning`) to trim old tool output.
- Use a model with a larger context window.
Why am I seeing "LLM request rejected: messages.N.content.X.tool_use.input: Field required"?
This is a provider validation error: the model emitted atool_use block without the required
input. It usually means the session history is stale or corrupted (often after long threads
or a tool/schema change).
Fix: start a fresh session with /new (standalone message).
Why am I getting heartbeat messages every 30 minutes?
Heartbeats run every 30m by default; tune or disable them in config. If `HEARTBEAT.md` exists but is effectively empty (only blank lines and markdown headers like `# Heading`), OpenClaw skips the heartbeat run to save API calls.
If the file is missing, the heartbeat still runs and the model decides what to do.
Per-agent overrides use agents.list[].heartbeat. Docs: Heartbeat.
Do I need to add a bot account to a WhatsApp group?
No. OpenClaw runs on your own account, so if you're in the group, OpenClaw can see it. By default, group replies are blocked until you allow senders (`groupPolicy: "allowlist"`).
If you want only you to be able to trigger group replies:
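A hedged sketch (the phone number is a placeholder, and the `allowFrom` key name is an assumption; check Groups for the exact allowlist schema):

```json5
{
  channels: {
    whatsapp: {
      groupPolicy: "allowlist",
      // Only this sender can trigger group replies
      allowFrom: ["+15551234567"],
    },
  },
}
```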
How do I get the JID of a WhatsApp group?
Option 1 (fastest): tail the logs and send a test message in the group, then look for a `chatId` (or `from`) ending in `@g.us`, like `1234567890-1234567890@g.us`.
Option 2 (if already configured/allowlisted): list the groups known to your config.
Why doesn't OpenClaw reply in a group?
Two common causes:
- Mention gating is on (default). You must @mention the bot (or match `mentionPatterns`).
- You configured `channels.whatsapp.groups` without `"*"` and the group isn't allowlisted.
Do groups/threads share context with DMs?
Direct chats collapse to the main session by default. Groups/channels have their own session keys, and Telegram topics / Discord threads are separate sessions. See Groups and Group messages.

How many workspaces and agents can I create?
No hard limits. Dozens (even hundreds) are fine, but watch for:
- Disk growth: sessions + transcripts live under `~/.openclaw/agents/<agentId>/sessions/`.
- Token cost: more agents means more concurrent model usage.
- Ops overhead: per-agent auth profiles, workspaces, and channel routing.

Practical tips:
- Keep one active workspace per agent (`agents.defaults.workspace`).
- Prune old sessions (delete JSONL or store entries) if disk grows.
- Use `openclaw doctor` to spot stray workspaces and profile mismatches.
Can I run multiple bots or chats at the same time (e.g. Slack), and how should I set that up?
Yes. Use Multi-Agent Routing to run multiple isolated agents and route inbound messages by channel/account/peer. Slack is supported as a channel and can be bound to specific agents. Browser access is powerful but not "do anything a human can" - anti-bot, CAPTCHAs, and MFA can still block automation. For the most reliable browser control, use the Chrome extension relay on the machine that runs the browser (and keep the Gateway anywhere). Best-practice setup:
- Always-on Gateway host (VPS/Mac mini).
- One agent per role (bindings).
- Slack channel(s) bound to those agents.
- Local browser via extension relay (or a node) when needed.
Models: defaults, selection, aliases, switching
What is the default model?
OpenClaw's default model is whatever you set as `provider/model` (example: `anthropic/claude-opus-4-5`). If you omit the provider, OpenClaw currently assumes `anthropic` as a temporary deprecation fallback - but you should still explicitly set `provider/model`.
What model do you recommend?
Recommended default: `anthropic/claude-opus-4-5`.
Good alternative: `anthropic/claude-sonnet-4-5`.
Reliable (less character): `openai/gpt-5.2` - nearly as good as Opus, just less personality.
Budget: `zai/glm-4.7`.
MiniMax M2.1 has its own docs: MiniMax and
Local models.
Rule of thumb: use the best model you can afford for high-stakes work, and a cheaper
model for routine chat or summaries. You can route models per agent and use sub-agents to
parallelize long tasks (each sub-agent consumes tokens). See Models and
Sub-agents.
Strong warning: weaker/over-quantized models are more vulnerable to prompt
injection and unsafe behavior. See Security.
More context: Models.
Can I use self-hosted models (llama.cpp, vLLM, Ollama)?
Yes. If your local server exposes an OpenAI-compatible API, you can point a custom provider at it. Ollama is supported directly and is the easiest path. Security note: smaller or heavily quantized models are more vulnerable to prompt injection. We strongly recommend large models for any bot that can use tools. If you still want small models, enable sandboxing and strict tool allowlists. Docs: Ollama, Local models, Model providers, Security, Sandboxing.

How do I switch models without wiping my config?
Use model commands or edit only the model fields. Avoid full config replaces. Safe options:
- `/model` in chat (quick, per-session)
- `openclaw models set ...` (updates just model config)
- `openclaw configure --section models` (interactive)
- edit `agents.defaults.model` in `~/.openclaw/openclaw.json`

Do not call `config.apply` with a partial object unless you intend to replace the whole config.
If you did overwrite config, restore from backup or re-run openclaw doctor to repair.
Docs: Models, Configure, Config, Doctor.
What do OpenClaw, Flawd, and Krill use for models?
- OpenClaw + Flawd: Anthropic Opus (`anthropic/claude-opus-4-5`) - see Anthropic.
- Krill: MiniMax M2.1 (`minimax/MiniMax-M2.1`) - see MiniMax.
How do I switch models on the fly without restarting?
Use the `/model` command as a standalone message: `/model`, `/model list`, or `/model status`.
`/model` (and `/model list`) shows a compact, numbered picker; select by number.
/model status shows which agent is active, which auth-profiles.json file is being used, and which auth profile will be tried next.
It also shows the configured provider endpoint (baseUrl) and API mode (api) when available.
How do I unpin a profile I set with @profile?
Re-run `/model` without the `@profile` suffix: `/model` (or send `/model <default provider/model>`).
Use `/model status` to confirm which auth profile is active.
Can I use GPT-5.2 for daily tasks and Codex 5.2 for coding?
Yes. Set one as default and switch as needed:
- Quick switch (per session): `/model gpt-5.2` for daily tasks, `/model gpt-5.2-codex` for coding.
- Default + switch: set `agents.defaults.model.primary` to `openai-codex/gpt-5.2`, then switch to `openai-codex/gpt-5.2-codex` when coding (or the other way around).
- Sub-agents: route coding tasks to sub-agents with a different default model.
Why do I see "Model is not allowed" and then no reply?
If `agents.defaults.models` is set, it becomes the allowlist for `/model` and any session overrides. Choosing a model that isn't in that list returns a "Model is not allowed" error. Add the model to `agents.defaults.models`, remove the allowlist, or pick a model from `/model list`.
Why do I see "Unknown model: minimax/MiniMax-M2.1"?
This means the provider isn't configured (no MiniMax provider config or auth profile was found), so the model can't be resolved. A fix for this detection is in 2026.1.12 (unreleased at the time of writing). Fix checklist:
- Upgrade to 2026.1.12 (or run from source `main`), then restart the gateway.
- Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key exists in env/auth profiles so the provider can be injected.
- Use the exact model id (case-sensitive): `minimax/MiniMax-M2.1` or `minimax/MiniMax-M2.1-lightning`.
- List the configured models from the CLI and pick from the list (or `/model list` in chat).
Can I use MiniMax as my default and OpenAI for complex tasks?
Yes. Use MiniMax as the default and switch models per session when needed. Fallbacks are for errors, not "hard tasks," so use `/model` or a separate agent.
Option A: switch per session with `/model`.
Option B: route by agent:
- Agent A default: MiniMax
- Agent B default: OpenAI
- Route per agent, or switch with `/agent`
Are opus, sonnet, and gpt built-in shortcuts?
Yes. OpenClaw ships default shortcut aliases (each applies only when the model exists in `agents.defaults.models`):
- `opus` → `anthropic/claude-opus-4-5`
- `sonnet` → `anthropic/claude-sonnet-4-5`
- `gpt` → `openai/gpt-5.2`
- `gpt-mini` → `openai/gpt-5-mini`
- `gemini` → `google/gemini-3-pro-preview`
- `gemini-flash` → `google/gemini-3-flash-preview`
How do I define/override model shortcuts (aliases)?
Aliases come from `agents.defaults.models.<modelId>.alias`.
`/model sonnet` (or `/<alias>` where supported) resolves to that model ID.
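For example (the model id matches the shortcut table above):

```json5
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-sonnet-4-5": {
          // "/model sonnet" now resolves to this model id
          alias: "sonnet",
        },
      },
    },
  },
}
```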
How do I add models from other providers like OpenRouter or Z.AI?
OpenRouter (billed per token; many models available): configure it as a provider with your OpenRouter API key. Z.AI works the same way; without a key you'll see errors like `No API key found for provider "zai"`.
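A hedged provider sketch (the provider id and key are illustrative; see Model providers for the exact fields):

```json5
{
  models: {
    providers: {
      openrouter: {
        apiKey: "sk-or-xxxxxxxxxxxx",
      },
    },
  },
}
```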
"No API key found for provider" after adding a new agent?
This usually means the new agent's auth store is empty. Auth is per-agent, stored in `~/.openclaw/agents/<agentId>/agent/auth-profiles.json`. Fix:
- Run `openclaw agents add <id>` and configure auth in the wizard.
- Or copy `auth-profiles.json` from the main agent's `agentDir` into the new agent's `agentDir`.

Do not point two agents at the same `agentDir`; that causes auth/session conflicts.
Model failover and “All models failed”
How does failover work?
Failover happens in two steps:
- Auth-profile rotation within the same provider.
- Then switching to the next model in `agents.defaults.model.fallbacks`.
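A sketch of a failover chain (the model ids are examples):

```json5
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-opus-4-5",
        // Tried in order when the primary keeps erroring
        fallbacks: ["openai/gpt-5.2", "minimax/MiniMax-M2.1"],
      },
    },
  },
}
```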
What does this error mean?
The failover rotation selected the profile `anthropic:default`, but no credentials for it were found in the expected auth store.
Fix checklist for "No credentials found for profile anthropic:default"
1. Confirm where the auth profiles live (new vs legacy path)
   - Current: `~/.openclaw/agents/<agentId>/agent/auth-profiles.json`
   - Legacy: `~/.openclaw/agent/*` (migrated by `openclaw doctor`)
2. Confirm the Gateway actually loads your environment variables
   - If you set `ANTHROPIC_API_KEY` in your shell but run the Gateway via systemd/launchd, it may not be inherited. Put it in `~/.openclaw/.env` or enable `env.shellEnv`.
3. Make sure you are editing the right agent
   - Multi-agent setups mean multiple `auth-profiles.json` files.
4. Quick model/auth status check
   - Use `openclaw models status` to see configured models and provider auth state.
5. Use a setup-token
   - Run `claude setup-token`, then paste it with `openclaw models auth setup-token --provider anthropic`.
   - If the token was created on another machine, use `openclaw models auth paste-token --provider anthropic`.
6. If you want to switch to an API key instead
   - Set `ANTHROPIC_API_KEY` in `~/.openclaw/.env` on the Gateway host.
   - Clear any pinned order that forces the missing profile.
7. Confirm you are running commands on the Gateway host
   - In remote mode, auth profiles live on the Gateway machine, not your laptop.
Why did it also try Google Gemini and fail?
If your model config includes Google Gemini as a fallback (or you switched to a Gemini shortcut), OpenClaw will try it during failover. If you have no Google credentials configured, you'll see `No API key found for provider "google"`.
Fix: provide Google auth, or remove/avoid Google models in `agents.defaults.model.fallbacks` / aliases so failover never routes there.
"LLM request rejected ... thinking signature required" (Google Antigravity)
Cause: the session history contains thinking blocks without signatures (usually from aborted or partially streamed output). Google Antigravity requires thinking blocks to be signed.
Fix: OpenClaw now strips unsigned thinking blocks for Google Antigravity Claude. If it still happens, start a new session or set `/thinking off` for that agent.
Auth profiles: what they are and how to manage them
Related: /concepts/oauth (OAuth flows, token storage, multi-account patterns)

What is an auth profile?
An auth profile is a named credential record (OAuth or API key) scoped to a provider. Profiles live in `~/.openclaw/agents/<agentId>/agent/auth-profiles.json`.

What are typical profile IDs?
OpenClaw uses provider-prefixed IDs, for example:
- `anthropic:default` (common when there is no email identity)
- `anthropic:<email>` for OAuth identities
- IDs you define yourself (for example `anthropic:work`)
Can I control which auth profile is tried first
Yes. The config supports optional per-profile metadata plus a per-provider order (`auth.order.<provider>`). It stores no secrets; it only maps IDs to provider/mode and sets the rotation order.
OpenClaw may temporarily skip profiles that are in a short cooldown (rate limit/timeout/auth failure) or disabled longer-term (billing/quota exhaustion). To inspect this, run `openclaw models status --json` and look at `auth.unusableProfiles`. Tuning knobs: `auth.cooldowns.billingBackoffHours*`.
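As a sketch, a per-provider order might look like this in config (the `auth.order.<provider>` key comes from the text above; the profile IDs are examples):

```json5
{
  "auth": {
    "order": {
      // Try the "work" profile first, then fall back to the default.
      "anthropic": ["anthropic:work", "anthropic:default"]
    }
  }
}
```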
You can also set a per-agent order override via the CLI (stored in that agent's `auth-profiles.json`):
OAuth vs API key whats the difference
OpenClaw supports both:
- OAuth can typically ride subscription access (where applicable).
- API keys use pay-per-token billing.
Gateway: ports, “already running”, and remote mode
What port does the Gateway use
`gateway.port` controls the single multiplexed port for WebSocket + HTTP (Control UI, hooks, and so on).
Precedence:
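For instance, pinning the port in config might look like this (a minimal sketch using the `gateway.port` key above; 18789 is the default port cited elsewhere in this FAQ):

```json5
{
  "gateway": {
    "port": 18789  // default; pick a unique value per profile if you run several
  }
}
```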
Why does openclaw gateway status say Runtime running but RPC probe failed
Because "running" is the supervisor's (launchd/systemd/schtasks) view. The RPC probe is the CLI actually connecting to the Gateway WebSocket and calling `status`.
Run `openclaw gateway status` and focus on these lines:
- `Probe target:` (the URL the probe actually used)
- `Listening:` (where the port is really bound)
- `Last gateway error:` (a common root cause when the process is alive but the port isn't listening)
Why does openclaw gateway status show Config cli and Config service different
You're editing one config while the service runs another (commonly a `--profile` / `OPENCLAW_STATE_DIR` mismatch).
Fix: run the command with the same `--profile` / environment the service uses.
What does another gateway instance is already listening mean
OpenClaw implements its run lock by binding the WebSocket listener immediately on startup (default `ws://127.0.0.1:18789`). If the bind fails with `EADDRINUSE`, it throws a `GatewayLockError`: another instance is already listening.
Fix: stop the other instance, free the port, or run `openclaw gateway --port <port>` instead.
How do I run OpenClaw in remote mode (client connects to a Gateway elsewhere)
Set `gateway.mode: "remote"` and point it at the remote WebSocket URL, optionally with a token/password:
- `openclaw gateway` only starts when `gateway.mode` is `local` (or you pass an override flag).
- The macOS app watches the config file and switches modes live when values change.
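A remote-mode sketch, under assumptions: `gateway.mode` is documented in this FAQ, but the URL/token field names below are illustrative, not confirmed:

```json5
{
  "gateway": {
    "mode": "remote",
    // Field names below are illustrative -- check the Configuration reference.
    "remote": {
      "url": "ws://gateway-host:18789",
      "token": "<gateway token>"
    }
  }
}
```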
The Control UI says unauthorized or keeps reconnecting What now
Your Gateway has auth enabled (`gateway.auth.*`), but the UI isn't sending a matching token/password.
Facts (from the code):
- The Control UI stores the token in browser localStorage under `openclaw.control.settings.v1`.
- The UI can import `?token=...` (and/or `?password=...`) once, then strips it from the URL.
What to do:
- Fastest: `openclaw dashboard` (prints and copies a tokenized link and tries to open it; headless environments get an SSH hint).
- No token yet? `openclaw doctor --generate-gateway-token`.
- Remote? Tunnel first: `ssh -N -L 18789:127.0.0.1:18789 user@host`, then open `http://127.0.0.1:18789/?token=...`.
- Set `gateway.auth.token` (or `OPENCLAW_GATEWAY_TOKEN`) on the Gateway host.
- Paste the same token into the Control UI settings (or refresh it with a one-time `?token=...` link).
- Still stuck? Run `openclaw status --all` and check Troubleshooting. Dashboard covers the auth details.
I set gateway.bind tailnet but it can't bind / nothing listens
The tailnet bind picks a Tailscale IP (100.64.0.0/10) from your network interfaces. If the machine isn't on Tailscale (or the interface is down), there is no address to bind.
Fix:
- Start Tailscale on that host (so it gets a 100.x address), or
- Switch to `gateway.bind: "loopback"` / `"lan"`.
`tailnet` is explicit; `auto` prefers loopback. Use `gateway.bind: "tailnet"` when you want a tailnet-only bind.
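The address-selection rule above can be sketched in a few lines (illustrative helper, not the Gateway's actual code):

```python
import ipaddress

# CGNAT range that Tailscale assigns addresses from.
TAILNET = ipaddress.ip_network("100.64.0.0/10")

def pick_tailnet_ip(addrs):
    """Return the first IPv4 address in the Tailscale range, or None
    when the host has no tailnet interface. (Illustrative; the real
    Gateway's interface scan may differ.)"""
    for a in addrs:
        try:
            ip = ipaddress.ip_address(a)
        except ValueError:
            continue  # skip non-IP strings
        if ip.version == 4 and ip in TAILNET:
            return a
    return None
```

If this returns `None`, there is nothing to bind, which matches the "nothing listens" symptom.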
Can I run multiple Gateways on the same host
Usually you don't need to: one Gateway can run multiple messaging channels and agents. Run multiple Gateways only for redundancy (e.g. a rescue bot) or hard isolation. Yes, it works, but every instance must be isolated:
- `OPENCLAW_CONFIG_PATH` (per-instance config)
- `OPENCLAW_STATE_DIR` (per-instance state)
- `agents.defaults.workspace` (workspace isolation)
- `gateway.port` (unique port)
How:
- Use `openclaw --profile <name> …` per instance (auto-creates `~/.openclaw-<name>`).
- Set a unique `gateway.port` in each profile's config (or pass `--port` for manual runs).
- Install per-profile services: `openclaw --profile <name> gateway install`.
Service names include the profile (`bot.molt.<profile>`; legacy: `com.openclaw.*`, `openclaw-gateway-<profile>.service`, `OpenClaw Gateway (<profile>)`).
Full guide: Multiple gateways.
What does invalid handshake code 1008 mean
The Gateway is a WebSocket server that expects the first message to be a `connect` frame. Anything else closes the connection with code 1008 (policy violation).
Common causes:
- You opened an HTTP URL (`http://...`) in a browser instead of using a WS client.
- Wrong port or path.
- A proxy/tunnel stripped auth headers or sent a non-Gateway request.
Fix:
- Use a WS URL: `ws://<host>:18789` (or `wss://...` behind HTTPS).
- Don't open the WS port in a plain browser tab.
- If auth is enabled, include the token/password in the `connect` frame.
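To make the handshake concrete, here is a sketch of a well-formed first message and the check a server would apply. Only the "first message must be a `connect` frame" rule comes from the text above; the exact field names are assumptions:

```python
import json

def make_connect_frame(token=None):
    """Build a first-message 'connect' frame. Field names here are
    assumptions -- check the Gateway protocol docs for the real shape."""
    frame = {"type": "connect"}
    if token is not None:
        frame["token"] = token
    return json.dumps(frame)

def is_valid_first_message(raw):
    """A server enforcing the handshake described above would close the
    socket with code 1008 (policy violation) when this returns False."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return False  # e.g. an HTTP request line, not a JSON frame
    return isinstance(msg, dict) and msg.get("type") == "connect"
```

This is why opening the WS port in a browser tab fails: the browser's first bytes are an HTTP request, not a `connect` frame.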
Logging and debugging
Where are logs
File logs (structured): set a stable path with `logging.file`. The file log level follows `logging.level`; console verbosity follows `--verbose` and `logging.consoleLevel`.
Fastest ways to tail logs:
- macOS: `$OPENCLAW_STATE_DIR/logs/gateway.log` and `gateway.err.log` (defaults to `~/.openclaw/logs/...`; profiles use `~/.openclaw-<profile>/logs/...`)
- Linux: `journalctl --user -u openclaw-gateway[-<profile>].service -n 200 --no-pager`
- Windows: `schtasks /Query /TN "OpenClaw Gateway (<profile>)" /V /FO LIST`
How do I start/stop/restart the Gateway service
Use the gateway helpers: `openclaw gateway --force` can take over a busy port. See Gateway.
I closed my terminal on Windows how do I restart OpenClaw
Windows has two install modes. **1) WSL2 (recommended):** the Gateway runs inside Linux. Open PowerShell, enter WSL, then restart:
The Gateway is up but replies never arrive What should I check
Run a quick health pass first:
- Model auth isn't loaded on the Gateway host (check `models status`).
- The channel's pairing/allowlist is blocking replies (check the channel config + logs).
- WebChat/Dashboard is missing the right token.
Disconnected from gateway no reason what now
This usually means the UI lost its WebSocket connection. Check:
- Is the Gateway running? `openclaw gateway status`
- Is it healthy? `openclaw status`
- Does the UI have the right token? `openclaw dashboard`
- If remote: is the tunnel/Tailscale link up?
Telegram setMyCommands fails with network errors What should I check
Check logs and channel status first:
TUI shows no output What should I check
First confirm the Gateway is reachable and the agent can run: use `/status` to see current state. If you expect replies in a chat channel, make sure delivery is on (`/deliver on`).
Docs: TUI, Slash commands.
How do I completely stop, then start, the Gateway
If you installed the service:
ELI5 openclaw gateway restart vs openclaw gateway
- `openclaw gateway restart`: restarts the background service (launchd/systemd).
- `openclaw gateway`: runs the gateway in the foreground in this terminal.
What's the fastest way to get more details when something fails
Start the Gateway with `--verbose` for more console detail, then check the log file for channel auth, model routing, and RPC errors.
Media & attachments
My skill generated an image/PDF but nothing was sent
When an agent sends an attachment, the message must include a `MEDIA:<path-or-url>` line (on its own line). See OpenClaw assistant setup and Agent send.
CLI send:
Check that:
- the target channel supports outbound media and isn't blocked by an allowlist.
- the file size is within that provider's limits (images are scaled down to at most 2048px).
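The MEDIA-line convention above can be illustrated with a tiny parser (a hypothetical helper, not OpenClaw's actual code):

```python
def extract_media_lines(reply):
    """Split an agent reply into visible text and MEDIA attachments.
    An attachment is a 'MEDIA:<path-or-url>' line on its own line.
    (Illustrative parser, not OpenClaw's actual implementation.)"""
    text_lines, media = [], []
    for line in reply.splitlines():
        stripped = line.strip()
        if stripped.startswith("MEDIA:"):
            media.append(stripped[len("MEDIA:"):].strip())
        else:
            text_lines.append(line)
    return "\n".join(text_lines), media
```

If the agent merely mentions a path in prose instead of emitting a `MEDIA:` line, nothing is attached, which matches the symptom in this question.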
Security and access control
Is it safe to expose OpenClaw to inbound DMs
Treat inbound DMs as untrusted input. The defaults are designed to reduce risk:
- Channels that support DMs default to pairing:
  - Unknown senders get a pairing code; the bot doesn't process their messages.
  - Approve with `openclaw pairing approve <channel> <code>`.
  - Pending requests are capped at 3 per channel; if no code arrived, check `openclaw pairing list <channel>`.
- Fully open DMs require explicit opt-in (`dmPolicy: "open"` with an allowlist of `"*"`).
`openclaw doctor` can flag risky DM policies.
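As a sketch, that explicit opt-in might look like this in config (`dmPolicy: "open"` and the `"*"` allowlist come from the text above; the channel name and the `allowlist` key spelling are examples):

```json5
{
  "channels": {
    "telegram": {
      // Risky: anyone can DM the bot. The default policy is "pairing".
      "dmPolicy": "open",
      "allowlist": ["*"]
    }
  }
}
```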
Is prompt injection only a concern for public bots
No. Prompt injection targets untrusted content, not just whoever can DM the bot. If your assistant reads external content (web search/fetch, browser pages, email, documents, attachments, pasted logs), that content can carry instructions that try to hijack the model, even when you are the only sender. The risk is greatest with tools enabled: the model can be tricked into leaking context or calling tools on your behalf. To shrink the blast radius:
- Summarize untrusted content with a read-only or tool-less "reader" agent
- Disable `web_search` / `web_fetch` / `browser` for tool-enabled agents
- Enable the sandbox and use a strict tool allowlist
Should my bot have its own email GitHub account or phone number
For most setups, yes. Isolating the bot behind its own accounts/phone number limits the blast radius if something goes wrong, and makes it easier to rotate credentials or revoke access without touching your personal accounts. Start small: grant only the tools and accounts it actually needs, and expand later. Docs: Security, Pairing.
Can I give it autonomy over my text messages and is that safe
We don't recommend full autonomy over your personal messages. Safer patterns:
- Keep DMs in pairing mode or behind a strict allowlist.
- If it sends messages on your behalf, use a separate number or account.
- Have it draft first, then send only after your approval.
Can I use cheaper models for personal assistant tasks
Yes, if that agent is chat-only and its input is trusted. Smaller models are easier to hijack with injected instructions, so avoid them for tool-enabled agents or anything that reads untrusted content. If you must use a small model, lock down tools and run it in the sandbox. See Security.
I ran /start in Telegram but didn't get a pairing code
A pairing code is only sent when an unknown sender messages while `dmPolicy: "pairing"` is enabled. A bare `/start` doesn't generate a code.
Check pending requests:
Or open DMs entirely with `dmPolicy: "open"`.
WhatsApp will it message my contacts How does pairing work
No. WhatsApp's default DM policy is pairing. Unknown senders only receive a pairing code and their messages aren't processed. OpenClaw only replies to chats that message it, or to sends you explicitly trigger. Approve pairing with `openclaw pairing approve whatsapp <code>`. For messaging yourself, see `channels.whatsapp.selfChatMode`.
Chat commands, aborting tasks, and “it won’t stop”
How do I stop internal system messages from showing in chat
Most internal or tool messages only show up when verbose or reasoning is enabled for that session. Turn it off in the chat where it appears, and check for a bot profile with `verboseDefault` set to on.
Docs: Thinking and verbose, Security.
How do I stopcancel a running task
Send any of these as a standalone message (no slash):
Most commands start with `/` and must be sent as standalone messages, but a few shortcuts (like `/status`) also work inline for allowed senders.
How do I send a Discord message from Telegram Crosscontext messaging denied
OpenClaw blocks cross-provider messaging by default. If a tool call is bound to Telegram, it won't send to Discord unless you explicitly allow it. Enable cross-provider messaging per agent under `agents.list[].tools.message`.
Why does it feel like the bot ignores rapidfire messages
The queue mode decides how new messages interact with an in-flight run. Switch modes with `/queue`:
- `steer` - new messages redirect the current task
- `followup` - handle messages one at a time
- `collect` - batch them and reply once (default)
- `steer-backlog` - steer first, then work through the backlog
- `interrupt` - abort the current run and start over
Example tuning options: `debounce:2s cap:25 drop:summarize`.
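The `collect` mode's batching can be sketched like this, assuming a debounce window and a batch cap as in the example options above (a hypothetical helper, not OpenClaw's actual implementation):

```python
class CollectQueue:
    """Illustrative sketch of the 'collect' queue mode: messages that
    arrive within a debounce window are batched and answered once."""

    def __init__(self, debounce_s=2.0, cap=25):
        self.debounce_s = debounce_s   # quiet period before replying
        self.cap = cap                 # max messages per batch
        self.pending = []
        self.last_arrival = 0.0

    def push(self, msg, now):
        """Record a message; return a full batch if the cap is hit."""
        self.pending.append(msg)
        self.last_arrival = now
        if len(self.pending) >= self.cap:
            return self._flush()
        return None

    def poll(self, now):
        """Return the batch once the debounce window has elapsed."""
        if self.pending and now - self.last_arrival >= self.debounce_s:
            return self._flush()
        return None

    def _flush(self):
        batch, self.pending = self.pending, []
        return batch
```

This is why rapid-fire messages appear "ignored": they are held until the sender pauses, then answered in one batch.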
Answer the exact question from the screenshot/chat log
Q: "What's the default model for Anthropic with an API key?"
A: In OpenClaw, credentials and model selection are separate. Setting `ANTHROPIC_API_KEY` (or storing an Anthropic API key in auth profiles) only enables authentication; the actual default model is whatever you configure in `agents.defaults.model.primary` (e.g. `anthropic/claude-sonnet-4-5` or `anthropic/claude-opus-4-5`). If you see `No credentials found for profile "anthropic:default"`, the Gateway couldn't find Anthropic credentials in the `auth-profiles.json` it expects for that agent.
Still stuck? Hop into Discord or open a GitHub discussion.