Frequently asked questions (FAQ)

Quick answers + deeper hands-on troubleshooting (local development, VPS, multi-agent, OAuth/API keys, model failover). For runtime diagnostics, see Troubleshooting. For the full configuration reference, see Configuration.

Table of contents

First 60 seconds if something’s broken

  1. Quick status (first check)
    openclaw status
    
    Local quick summary: OS + updates, gateway/service reachability, agents/sessions, provider config + runtime issues (when the gateway is reachable).
  2. Pasteable report (safe to share)
    openclaw status --all
    
    Read-only diagnostics + log tail (tokens redacted).
  3. Daemon + port status
    openclaw gateway status
    
    Shows supervisor run state vs RPC reachability, the probe target URL, and the config the service is likely using.
  4. Deep probe
    openclaw status --deep
    
    Runs gateway health checks + provider probes (requires a reachable gateway). See Health.
  5. Follow the latest logs
    openclaw logs --follow
    
    If RPC is unavailable, fall back to:
    tail -f "$(ls -t /tmp/openclaw/openclaw-*.log | head -1)"
    
    File logs are separate from service logs; see Logging and Troubleshooting.
  6. Run doctor (repairs)
    openclaw doctor
    
    Repairs/migrates config and state + runs health checks. See Doctor.
  7. Gateway snapshot
    openclaw health --json
    openclaw health --verbose   # shows the target URL + config path on errors
    
    Requests a full snapshot from the running gateway (WS only). See Health.

Quick start and first-run setup

I’m stuck: what’s the fastest way to get unstuck

Use a local AI agent that can see your machine. This works better than asking for help on Discord, because most “stuck” states come from local config or environment issues that a remote helper cannot inspect directly. These tools can read the repo, run commands, check logs, and help you fix machine-level settings (PATH, services, permissions, auth files). Use the hackable (git) install, which gives you a full source checkout:
curl -fsSL https://openclaw.bot/install.sh | bash -s -- --install-method git
This installs OpenClaw from a git checkout, so an agent can read the code + docs and reason about the exact version you are running. You can switch back to stable at any time by re-running the installer without --install-method git.
Tip: have the agent plan and supervise the fix (step by step), then run only the necessary commands. That keeps changes small and easy to audit.
If you find a real bug or a fix, open a GitHub issue or send a PR:
https://github.com/openclaw/openclaw/issues
https://github.com/openclaw/openclaw/pulls
Start with these commands (share the output when asking for help):
openclaw status
openclaw models status
openclaw doctor
What they do:
  • openclaw status: quick snapshot of gateway/agent health + basic config.
  • openclaw models status: checks provider auth + model availability.
  • openclaw doctor: validates and repairs common config/state issues.
Other useful CLI checks: openclaw status --all, openclaw logs --follow, openclaw gateway status, openclaw health --verbose. Fast debug loop: First 60 seconds if something’s broken. Install docs: Install, Installer flags, Updating.
The repo recommends running from source and using the onboarding wizard:
curl -fsSL https://openclaw.bot/install.sh | bash
openclaw onboard --install-daemon
The wizard can also build UI assets automatically. After onboarding, you typically run the Gateway on port 18789. From source (contributors/dev):
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm build
pnpm ui:build # auto-installs UI deps on first run
openclaw onboard
If you don’t have a global install yet, run it via pnpm openclaw onboard.

How do I open the dashboard after onboarding

The wizard now opens your browser with a tokenized dashboard URL right after onboarding and also prints the full link (with token) in the summary. Keep that tab open; if it didn’t launch, copy/paste the printed URL on the same machine. Tokens stay local to your host; nothing is fetched from the browser.

How do I authenticate the dashboard token on localhost vs remote

Localhost (same machine):
  • Open http://127.0.0.1:18789/.
  • If it asks for auth, run openclaw dashboard and use the tokenized link (?token=...).
  • The token is the same value as gateway.auth.token (or OPENCLAW_GATEWAY_TOKEN) and is stored by the UI after first load.
Not on localhost:
  • Tailscale Serve (recommended): keep bind loopback, run openclaw gateway --tailscale serve, open https://<magicdns>/. If gateway.auth.allowTailscale is true, identity headers satisfy auth (no token).
  • Tailnet bind: run openclaw gateway --bind tailnet --token "<token>", open http://<tailscale-ip>:18789/, paste token in dashboard settings.
  • SSH tunnel: ssh -N -L 18789:127.0.0.1:18789 user@host then open http://127.0.0.1:18789/?token=... from openclaw dashboard.
See Dashboard and Web surfaces for bind modes and auth details.

What runtime do I need

Node >= 22 is required. pnpm is recommended. Bun is not recommended for the Gateway.

Does it run on Raspberry Pi

Yes. The Gateway is lightweight - docs list 512MB-1GB RAM, 1 core, and about 500MB disk as enough for personal use, and note that a Raspberry Pi 4 can run it. If you want extra headroom (logs, media, other services), 2GB is recommended, but it’s not a hard minimum. Tip: a small Pi/VPS can host the Gateway, and you can pair nodes on your laptop/phone for local screen/camera/canvas or command execution. See Nodes.

Any tips for Raspberry Pi installs

Short version: it works, but expect rough edges.
  • Use a 64-bit OS and keep Node >= 22.
  • Prefer the hackable (git) install so you can see logs and update fast.
  • Start without channels/skills, then add them one by one.
  • If you hit weird binary issues, it is usually an ARM compatibility problem.
Docs: Linux, Install.

It is stuck on “Wake up, my friend!” and onboarding will not hatch. What now

That screen depends on the Gateway being reachable and authenticated. The TUI also sends “Wake up, my friend!” automatically on first hatch. If you see that line with no reply and tokens stay at 0, the agent never ran.
  1. Restart the Gateway:
openclaw gateway restart
  2. Check status + auth:
openclaw status
openclaw models status
openclaw logs --follow
  3. If it still hangs, run:
openclaw doctor
If the Gateway is remote, ensure the tunnel/Tailscale connection is up and that the UI is pointed at the right Gateway. See Remote access.

Can I migrate my setup to a new machine (Mac mini) without redoing onboarding

Yes. Copy the state directory and workspace, then run Doctor once. This keeps your bot “exactly the same” (memory, session history, auth, and channel state) as long as you copy both locations:
  1. Install OpenClaw on the new machine.
  2. Copy $OPENCLAW_STATE_DIR (default: ~/.openclaw) from the old machine.
  3. Copy your workspace (default: ~/.openclaw/workspace).
  4. Run openclaw doctor and restart the Gateway service.
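A minimal sketch of steps 2-4 over SSH, assuming default paths and that the new machine is reachable as new-machine (a placeholder hostname):
# Copy the state dir (the default workspace at ~/.openclaw/workspace comes with it)
rsync -a ~/.openclaw/ new-machine:~/.openclaw/
# If your workspace lives outside ~/.openclaw, copy it separately, e.g.:
# rsync -a ~/my-workspace/ new-machine:~/my-workspace/
# Then repair paths/service config and restart on the new machine
ssh new-machine 'openclaw doctor && openclaw gateway restart'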
That preserves config, auth profiles, WhatsApp creds, sessions, and memory. If you’re in remote mode, remember the gateway host owns the session store and workspace.
Important: if you only commit/push your workspace to GitHub, you’re backing up memory + bootstrap files, but not session history or auth. Those live under ~/.openclaw/ (for example ~/.openclaw/agents/<agentId>/sessions/).
Related: Migrating, Where things live on disk, Agent workspace, Doctor, Remote mode.

Where do I see what’s new in the latest version

Check the GitHub changelog:
https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md
Newest entries are at the top. If the top section is marked Unreleased, the next dated section is the latest shipped version. Entries are grouped by Highlights, Changes, and Fixes (plus docs/other sections when needed).

I can’t access docs.openclaw.ai (SSL error). What now

Some Comcast/Xfinity connections incorrectly block docs.openclaw.ai via Xfinity Advanced Security. Disable it or allowlist docs.openclaw.ai, then retry. More detail: Troubleshooting. Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status. If you still can’t reach the site, the docs are mirrored on GitHub: https://github.com/openclaw/openclaw/tree/main/docs

What’s the difference between stable and beta

Stable and beta are npm dist‑tags, not separate code lines:
  • latest = stable
  • beta = early build for testing
We ship builds to beta, test them, and once a build is solid we promote that same version to latest. That’s why beta and stable can point at the same version. See what changed:
https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md

How do I install the beta version, and what’s the difference between beta and dev

Beta is the npm dist‑tag beta (may match latest).
Dev is the moving head of main (git); when published, it uses the npm dist‑tag dev.
One‑liners (macOS/Linux):
curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.bot/install.sh | bash -s -- --beta
curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.bot/install.sh | bash -s -- --install-method git
Windows installer (PowerShell): https://openclaw.ai/install.ps1
More detail: Development channels and Installer flags.

How long does install and onboarding usually take

Rough guide:
  • Install: 2-5 minutes
  • Onboarding: 5-15 minutes depending on how many channels/models you configure
If it hangs, use Installer stuck and the fast debug loop in I’m stuck.

How do I try the latest bits

Two options:
  1. Dev channel (git checkout):
openclaw update --channel dev
This switches to the main branch and updates from source.
  2. Hackable install (from the installer site):
curl -fsSL https://openclaw.bot/install.sh | bash -s -- --install-method git
That gives you a local repo you can edit, then update via git. If you prefer a clean clone manually, use:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm build
Docs: Update, Development channels, Install.

Installer stuck: how do I get more feedback

Re-run the installer with verbose output:
curl -fsSL https://openclaw.bot/install.sh | bash -s -- --verbose
Beta install with verbose:
curl -fsSL https://openclaw.bot/install.sh | bash -s -- --beta --verbose
For a hackable (git) install:
curl -fsSL https://openclaw.bot/install.sh | bash -s -- --install-method git --verbose
More options: Installer flags.

Windows install says “git not found” or “openclaw not recognized”

Two common Windows issues:
1) npm error spawn git / git not found
  • Install Git for Windows and make sure git is on your PATH.
  • Close and reopen PowerShell, then re-run the installer.
2) openclaw is not recognized after install
  • Your npm global bin folder is not on PATH.
  • Check the path:
    npm config get prefix
    
  • Ensure <prefix>\bin is on PATH (on most systems it is %AppData%\npm).
  • Close and reopen PowerShell after updating PATH.
If you want the smoothest Windows setup, use WSL2 instead of native Windows. Docs: Windows.

The docs didn’t answer my question. How do I get a better answer

Use the hackable (git) install so you have the full source and docs locally, then ask your bot (or Claude/Codex) from that folder so it can read the repo and answer precisely.
curl -fsSL https://openclaw.bot/install.sh | bash -s -- --install-method git
More detail: Install and Installer flags.

How do I install OpenClaw on Linux

Short answer: follow the Linux guide, then run the onboarding wizard.

How do I install OpenClaw on a VPS

Any Linux VPS works. Install on the server, then use SSH/Tailscale to reach the Gateway. Guides: exe.dev, Hetzner, Fly.io.
Remote access: Gateway remote.

Where are the cloud/VPS install guides

We keep a hosting hub with the common providers. Pick one and follow the guide.
How it works in the cloud: the Gateway runs on the server, and you access it from your laptop/phone via the Control UI (or Tailscale/SSH). Your state + workspace live on the server, so treat the host as the source of truth and back it up. You can pair nodes (Mac/iOS/Android/headless) to that cloud Gateway to access local screen/camera/canvas or run commands on your laptop while keeping the Gateway in the cloud.
Hub: Platforms. Remote access: Gateway remote. Nodes: Nodes, Nodes CLI.

Can I ask OpenClaw to update itself

Short answer: possible, not recommended. The update flow can restart the Gateway (which drops the active session), may need a clean git checkout, and can prompt for confirmation. Safer: run updates from a shell as the operator. Use the CLI:
openclaw update
openclaw update status
openclaw update --channel stable|beta|dev
openclaw update --tag <dist-tag|version>
openclaw update --no-restart
If you must automate from an agent:
openclaw update --yes --no-restart
openclaw gateway restart
Docs: Update, Updating.

What does the onboarding wizard actually do

openclaw onboard is the recommended setup path. In local mode it walks you through:
  • Model/auth setup (Anthropic setup-token recommended for Claude subscriptions, OpenAI Codex OAuth supported, API keys optional, LM Studio local models supported)
  • Workspace location + bootstrap files
  • Gateway settings (bind/port/auth/tailscale)
  • Providers (WhatsApp, Telegram, Discord, Mattermost (plugin), Signal, iMessage)
  • Daemon install (LaunchAgent on macOS; systemd user unit on Linux/WSL2)
  • Health checks and skills selection
It also warns if your configured model is unknown or missing auth.

Do I need a Claude or OpenAI subscription to run this

No. You can run OpenClaw with API keys (Anthropic/OpenAI/others) or with local‑only models so your data stays on your device. Subscriptions (Claude Pro/Max or OpenAI Codex) are optional ways to authenticate those providers. Docs: Anthropic, OpenAI, Local models, Models.

Can I use Claude Max subscription without an API key

Yes. You can authenticate with a setup-token instead of an API key. This is the subscription path. Claude Pro/Max subscriptions do not include an API key, so this is the correct approach for subscription accounts. Important: you must verify with Anthropic that this usage is allowed under their subscription policy and terms. If you want the most explicit, supported path, use an Anthropic API key.

How does Anthropic setup-token auth work

claude setup-token generates a token string via the Claude Code CLI (it is not available in the web console). You can run it on any machine. Choose Anthropic token (paste setup-token) in the wizard or paste it with openclaw models auth paste-token --provider anthropic. The token is stored as an auth profile for the anthropic provider and used like an API key (no auto-refresh). More detail: OAuth.

Where do I find an Anthropic setup-token

It is not in the Anthropic Console. The setup-token is generated by the Claude Code CLI on any machine:
claude setup-token
Copy the token it prints, then choose Anthropic token (paste setup-token) in the wizard. If you want to run it on the gateway host, use openclaw models auth setup-token --provider anthropic. If you ran claude setup-token elsewhere, paste it on the gateway host with openclaw models auth paste-token --provider anthropic. See Anthropic.

Do you support Claude subscription auth (Claude Pro/Max)

Yes — via setup-token. OpenClaw no longer reuses Claude Code CLI OAuth tokens; use a setup-token or an Anthropic API key. Generate the token anywhere and paste it on the gateway host. See Anthropic and OAuth. Note: Claude subscription access is governed by Anthropic’s terms. For production or multi‑user workloads, API keys are usually the safer choice.

Why am I seeing HTTP 429 rate_limit_error from Anthropic

That means your Anthropic quota/rate limit is exhausted for the current window. If you use a Claude subscription (setup‑token or Claude Code OAuth), wait for the window to reset or upgrade your plan. If you use an Anthropic API key, check the Anthropic Console for usage/billing and raise limits as needed. Tip: set a fallback model so OpenClaw can keep replying while a provider is rate‑limited. See Models and OAuth.

Is AWS Bedrock supported

Yes - via pi‑ai’s Amazon Bedrock (Converse) provider with manual config. You must supply AWS credentials/region on the gateway host and add a Bedrock provider entry in your models config. See Amazon Bedrock and Model providers. If you prefer a managed key flow, an OpenAI‑compatible proxy in front of Bedrock is still a valid option.

How does Codex auth work

OpenClaw supports OpenAI Code (Codex) via OAuth (ChatGPT sign-in). The wizard can run the OAuth flow and will set the default model to openai-codex/gpt-5.2 when appropriate. See Model providers and Wizard.

Do you support OpenAI subscription auth (Codex OAuth)

Yes. OpenClaw fully supports OpenAI Code (Codex) subscription OAuth. The onboarding wizard can run the OAuth flow for you. See OAuth, Model providers, and Wizard.

How do I set up Gemini CLI OAuth

Gemini CLI uses a plugin auth flow, not a client id or secret in openclaw.json. Steps:
  1. Enable the plugin: openclaw plugins enable google-gemini-cli-auth
  2. Login: openclaw models auth login --provider google-gemini-cli --set-default
This stores OAuth tokens in auth profiles on the gateway host. Details: Model providers.

Is a local model OK for casual chats

Usually no. OpenClaw needs large context + strong safety; small cards truncate and leak. If you must, run the largest MiniMax M2.1 build you can locally (LM Studio) and see /gateway/local-models. Smaller/quantized models increase prompt-injection risk - see Security.

How do I keep hosted model traffic in a specific region

Pick region-pinned endpoints. OpenRouter exposes US-hosted options for MiniMax, Kimi, and GLM; choose the US-hosted variant to keep data in-region. You can still list Anthropic/OpenAI alongside these by using models.mode: "merge" so fallbacks stay available while respecting the regioned provider you select.
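A hedged config sketch (the model slug is an illustrative placeholder; check OpenRouter for the exact US-hosted variant you want):
{
  models: {
    mode: "merge"   // keep Anthropic/OpenAI entries available as fallbacks
  },
  agents: {
    defaults: {
      model: { primary: "openrouter/minimax-m2.1-us" }   // placeholder slug for a US-hosted variant
    }
  }
}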

Do I have to buy a Mac mini to install this

No. OpenClaw runs on macOS or Linux (Windows via WSL2). A Mac mini is optional - some people buy one as an always‑on host, but a small VPS, home server, or Raspberry Pi‑class box works too. You only need a Mac for macOS‑only tools. For iMessage, you can keep the Gateway on Linux and run imsg on any Mac over SSH by pointing channels.imessage.cliPath at an SSH wrapper. If you want other macOS‑only tools, run the Gateway on a Mac or pair a macOS node. Docs: iMessage, Nodes, Mac remote mode.

Do I need a Mac mini for iMessage support

You need some macOS device signed into Messages. It does not have to be a Mac mini - any Mac works. OpenClaw’s iMessage integrations run on macOS (BlueBubbles or imsg), while the Gateway can run elsewhere. Common setups:
  • Run the Gateway on Linux/VPS, and point channels.imessage.cliPath at an SSH wrapper that runs imsg on the Mac.
  • Run everything on the Mac if you want the simplest single‑machine setup.
Docs: iMessage, BlueBubbles, Mac remote mode.

If I buy a Mac mini to run OpenClaw, can I connect it to my MacBook Pro

Yes. The Mac mini can run the Gateway, and your MacBook Pro can connect as a node (companion device). Nodes don’t run the Gateway - they provide extra capabilities like screen/camera/canvas and system.run on that device. Common pattern:
  • Gateway on the Mac mini (always‑on).
  • MacBook Pro runs the macOS app or a node host and pairs to the Gateway.
  • Use openclaw nodes status / openclaw nodes list to see it.
Docs: Nodes, Nodes CLI.

Can I use Bun

Bun is not recommended. We see runtime bugs, especially with WhatsApp and Telegram. Use Node for stable gateways. If you still want to experiment with Bun, do it on a non‑production gateway without WhatsApp/Telegram.

Telegram: what goes in allowFrom

channels.telegram.allowFrom is the human sender’s Telegram user ID (numeric, recommended) or @username. It is not the bot username. Safer (no third-party bot):
  • DM your bot, then run openclaw logs --follow and read from.id.
Official Bot API:
  • DM your bot, then call https://api.telegram.org/bot<bot_token>/getUpdates and read message.from.id.
Third-party (less private):
  • DM @userinfobot or @getidsbot.
See /channels/telegram.
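A minimal config sketch (the ID is a placeholder):
{
  channels: {
    telegram: {
      allowFrom: ["123456789"]   // the human sender’s numeric user ID (or "@username"), not the bot
    }
  }
}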

Can multiple people use one WhatsApp number with different OpenClaw instances

Yes, via multi‑agent routing. Bind each sender’s WhatsApp DM (peer kind: "dm", sender E.164 like +15551234567) to a different agentId, so each person gets their own workspace and session store. Replies still come from the same WhatsApp account, and DM access control (channels.whatsapp.dmPolicy / channels.whatsapp.allowFrom) is global per WhatsApp account. See Multi-Agent Routing and WhatsApp.

Can I run a fast chat agent and an Opus coding agent

Yes. Use multi‑agent routing: give each agent its own default model, then bind inbound routes (provider account or specific peers) to each agent. Example config lives in Multi-Agent Routing. See also Models and Configuration.
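A hedged sketch of the shape (agent IDs and model slugs are placeholders; per-agent model fields and the binding schema live in Multi-Agent Routing):
{
  agents: {
    list: [
      { id: "chat", model: { primary: "anthropic/claude-sonnet-4-5" } },   // fast everyday model (placeholder)
      { id: "coder", model: { primary: "anthropic/claude-opus-4-5" } }     // heavier coding model
    ]
  }
}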

Does Homebrew work on Linux

Yes. Homebrew supports Linux (Linuxbrew). Quick setup:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"' >> ~/.profile
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
brew install <formula>
If you run OpenClaw via systemd, ensure the service PATH includes /home/linuxbrew/.linuxbrew/bin (or your brew prefix) so brew-installed tools resolve in non‑login shells. Recent builds also prepend common user bin dirs on Linux systemd services (for example ~/.local/bin, ~/.npm-global/bin, ~/.local/share/pnpm, ~/.bun/bin) and honor PNPM_HOME, NPM_CONFIG_PREFIX, BUN_INSTALL, VOLTA_HOME, ASDF_DATA_DIR, NVM_DIR, and FNM_DIR when set.
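If you manage the unit yourself, a sketch of a systemd user override (the unit name is a placeholder; check what your install registered):
# Open an override editor for the gateway unit
systemctl --user edit openclaw-gateway.service
# Then add, adjusting to your brew prefix:
#   [Service]
#   Environment="PATH=/home/linuxbrew/.linuxbrew/bin:/usr/local/bin:/usr/bin:/bin"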

What’s the difference between the hackable (git) install and npm install

  • Hackable (git) install: full source checkout, editable, best for contributors. You run builds locally and can patch code/docs.
  • npm install: global CLI install, no repo, best for “just run it.” Updates come from npm dist‑tags.
Docs: Getting started, Updating.

Can I switch between npm and git installs later

Yes. Install the other flavor, then run Doctor so the gateway service points at the new entrypoint. This does not delete your data - it only changes the OpenClaw code install. Your state (~/.openclaw) and workspace (~/.openclaw/workspace) stay untouched.
From npm → git:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install
pnpm build
openclaw doctor
openclaw gateway restart
From git → npm:
npm install -g openclaw@latest
openclaw doctor
openclaw gateway restart
Doctor detects a gateway service entrypoint mismatch and offers to rewrite the service config to match the current install (use --repair in automation). Backup tips: see Backup strategy.

Should I run the Gateway on my laptop or a VPS

Short answer: if you want 24/7 reliability, use a VPS. If you want the lowest friction and you’re okay with sleep/restarts, run it locally.
Laptop (local Gateway)
  • Pros: no server cost, direct access to local files, live browser window.
  • Cons: sleep/network drops = disconnects, OS updates/reboots interrupt, must stay awake.
VPS / cloud
  • Pros: always‑on, stable network, no laptop sleep issues, easier to keep running.
  • Cons: often run headless (use screenshots), remote file access only, you must SSH for updates.
OpenClaw-specific note: WhatsApp/Telegram/Slack/Mattermost (plugin)/Discord all work fine from a VPS. The only real trade-off is headless browser vs a visible window. See Browser.
Recommended default: VPS if you had gateway disconnects before. Local is great when you’re actively using the Mac and want local file access or UI automation with a visible browser.

How important is it to run OpenClaw on a dedicated machine

Not required, but recommended for reliability and isolation.
  • Dedicated host (VPS/Mac mini/Pi): always‑on, fewer sleep/reboot interruptions, cleaner permissions, easier to keep running.
  • Shared laptop/desktop: totally fine for testing and active use, but expect pauses when the machine sleeps or updates.
If you want the best of both worlds, keep the Gateway on a dedicated host and pair your laptop as a node for local screen/camera/exec tools. See Nodes. For security guidance, read Security.
OpenClaw is lightweight. For a basic Gateway + one chat channel:
  • Absolute minimum: 1 vCPU, 1GB RAM, ~500MB disk.
  • Recommended: 1-2 vCPU, 2GB RAM or more for headroom (logs, media, multiple channels). Node tools and browser automation can be resource hungry.
OS: use Ubuntu LTS (or any modern Debian/Ubuntu). The Linux install path is best tested there. Docs: Linux, VPS hosting.

Can I run OpenClaw in a VM and what are the requirements

Yes. Treat a VM the same as a VPS: it needs to be always on, reachable, and have enough RAM for the Gateway and any channels you enable. Baseline guidance:
  • Absolute minimum: 1 vCPU, 1GB RAM.
  • Recommended: 2GB RAM or more if you run multiple channels, browser automation, or media tools.
  • OS: Ubuntu LTS or another modern Debian/Ubuntu.
If you are on Windows, WSL2 is the easiest VM-style setup and has the best tooling compatibility. See Windows, VPS hosting. If you are running macOS in a VM, see macOS VM.

What is OpenClaw?

What is OpenClaw in one paragraph

OpenClaw is a personal AI assistant you run on your own devices. It replies on the messaging surfaces you already use (WhatsApp, Telegram, Slack, Mattermost (plugin), Discord, Google Chat, Signal, iMessage, WebChat) and can also do voice + a live Canvas on supported platforms. The Gateway is the always-on control plane; the assistant is the product.

What’s the value proposition

OpenClaw is not “just a Claude wrapper.” It’s a local-first control plane that lets you run a capable assistant on your own hardware, reachable from the chat apps you already use, with stateful sessions, memory, and tools - without handing control of your workflows to a hosted SaaS. Highlights:
  • Your devices, your data: run the Gateway wherever you want (Mac, Linux, VPS) and keep the workspace + session history local.
  • Real channels, not a web sandbox: WhatsApp/Telegram/Slack/Discord/Signal/iMessage/etc, plus mobile voice and Canvas on supported platforms.
  • Model-agnostic: use Anthropic, OpenAI, MiniMax, OpenRouter, etc., with per‑agent routing and failover.
  • Local-only option: run local models so all data can stay on your device if you want.
  • Multi-agent routing: separate agents per channel, account, or task, each with its own workspace and defaults.
  • Open source and hackable: inspect, extend, and self-host without vendor lock‑in.
Docs: Gateway, Channels, Multi‑agent, Memory.

I just set it up. What should I do first

Good first projects:
  • Build a website (WordPress, Shopify, or a simple static site).
  • Prototype a mobile app (outline, screens, API plan).
  • Organize files and folders (cleanup, naming, tagging).
  • Connect Gmail and automate summaries or follow ups.
It can handle large tasks, but it works best when you split them into phases and use sub-agents for parallel work.

What are the top five everyday use cases for OpenClaw

Everyday wins usually look like:
  • Personal briefings: summaries of inbox, calendar, and news you care about.
  • Research and drafting: quick research, summaries, and first drafts for emails or docs.
  • Reminders and follow ups: cron or heartbeat driven nudges and checklists.
  • Browser automation: filling forms, collecting data, and repeating web tasks.
  • Cross device coordination: send a task from your phone, let the Gateway run it on a server, and get the result back in chat.

Can OpenClaw help with lead gen, outreach, ads, and blogs for a SaaS

Yes for research, qualification, and drafting. It can scan sites, build shortlists, summarize prospects, and write outreach or ad copy drafts. For outreach or ad runs, keep a human in the loop. Avoid spam, follow local laws and platform policies, and review anything before it is sent. The safest pattern is to let OpenClaw draft and you approve. Docs: Security.

What are the advantages vs Claude Code for web development

OpenClaw is a personal assistant and coordination layer, not an IDE replacement. Use Claude Code or Codex for the fastest direct coding loop inside a repo. Use OpenClaw when you want durable memory, cross-device access, and tool orchestration. Advantages:
  • Persistent memory + workspace across sessions
  • Multi-platform access (WhatsApp, Telegram, TUI, WebChat)
  • Tool orchestration (browser, files, scheduling, hooks)
  • Always-on Gateway (run on a VPS, interact from anywhere)
  • Nodes for local browser/screen/camera/exec
Showcase: https://openclaw.ai/showcase

Skills and automation

How do I customize skills without keeping the repo dirty

Use managed overrides instead of editing the repo copy. Put your changes in ~/.openclaw/skills/<name>/SKILL.md (or add a folder via skills.load.extraDirs in ~/.openclaw/openclaw.json). Precedence is <workspace>/skills > ~/.openclaw/skills > bundled, so managed overrides win without touching git. Only upstream-worthy edits should live in the repo and go out as PRs.

Can I load skills from a custom folder

Yes. Add extra directories via skills.load.extraDirs in ~/.openclaw/openclaw.json (lowest precedence). Default precedence remains: <workspace>/skills → ~/.openclaw/skills → bundled → skills.load.extraDirs. clawdhub installs into ./skills by default, which OpenClaw treats as <workspace>/skills.
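Example (the extra folder is a placeholder):
{
  skills: {
    load: {
      extraDirs: ["~/shared-skills"]   // scanned last, after workspace, managed, and bundled skills
    }
  }
}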

How can I use different models for different tasks

Today the supported patterns are:
  • Cron jobs: isolated jobs can set a model override per job.
  • Sub-agents: route tasks to separate agents with different default models.
  • On-demand switch: use /model to switch the current session model at any time.
See Cron jobs, Multi-Agent Routing, and Slash commands.

The bot freezes while doing heavy work. How do I offload that

Use sub-agents for long or parallel tasks. Sub-agents run in their own session, return a summary, and keep your main chat responsive. Ask your bot to “spawn a sub-agent for this task” or use /subagents. Use /status in chat to see what the Gateway is doing right now (and whether it is busy). Token tip: long tasks and sub-agents both consume tokens. If cost is a concern, set a cheaper model for sub-agents via agents.defaults.subagents.model. Docs: Sub-agents.
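Example of that override (the model slug is an illustrative placeholder):
{
  agents: {
    defaults: {
      subagents: {
        model: "anthropic/claude-haiku-4-5"   // cheaper model used by spawned sub-agents
      }
    }
  }
}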

Cron or reminders do not fire. What should I check

Cron runs inside the Gateway process. If the Gateway is not running continuously, scheduled jobs will not run. Checklist:
  • Confirm cron is enabled (cron.enabled) and OPENCLAW_SKIP_CRON is not set.
  • Check the Gateway is running 24/7 (no sleep/restarts).
  • Verify timezone settings for the job (--tz vs host timezone).
Debug:
openclaw cron run <jobId> --force
openclaw cron runs --id <jobId> --limit 50
Docs: Cron jobs, Cron vs Heartbeat.
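Config sketch for the first checklist item:
{
  cron: {
    enabled: true   // scheduled jobs only fire while the Gateway process is running
  }
}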

How do I install skills on Linux

Use ClawdHub (CLI) or drop skills into your workspace. The macOS Skills UI isn’t available on Linux. Browse skills at https://clawdhub.com. Install the ClawdHub CLI (pick one package manager):
npm i -g clawdhub
pnpm add -g clawdhub

Can OpenClaw run tasks on a schedule or continuously in the background

Yes. Use the Gateway scheduler:
  • Cron jobs for scheduled or recurring tasks (persist across restarts).
  • Heartbeat for “main session” periodic checks.
  • Isolated jobs for autonomous agents that post summaries or deliver to chats.
Docs: Cron jobs, Cron vs Heartbeat, Heartbeat.

Can I run Apple macOS-only skills from Linux

Not directly. macOS skills are gated by metadata.openclaw.os plus required binaries, and skills only appear in the system prompt when they are eligible on the Gateway host. On Linux, darwin-only skills (like imsg, apple-notes, apple-reminders) will not load unless you override the gating. You have three supported patterns:
Option A - run the Gateway on a Mac (simplest).
Run the Gateway where the macOS binaries exist, then connect from Linux in remote mode or over Tailscale. The skills load normally because the Gateway host is macOS.
Option B - use a macOS node (no SSH).
Run the Gateway on Linux, pair a macOS node (menubar app), and set Node Run Commands to “Always Ask” or “Always Allow” on the Mac. OpenClaw can treat macOS-only skills as eligible when the required binaries exist on the node. The agent runs those skills via the nodes tool. If you choose “Always Ask”, approving “Always Allow” in the prompt adds that command to the allowlist.
Option C - proxy macOS binaries over SSH (advanced).
Keep the Gateway on Linux, but make the required CLI binaries resolve to SSH wrappers that run on a Mac. Then override the skill to allow Linux so it stays eligible.
  1. Create an SSH wrapper for the binary (example: imsg):
    #!/usr/bin/env bash
    set -euo pipefail
    exec ssh -T user@mac-host /opt/homebrew/bin/imsg "$@"
    
  2. Put the wrapper on PATH on the Linux host (for example ~/bin/imsg).
  3. Override the skill metadata (workspace or ~/.openclaw/skills) to allow Linux:
    ---
    name: imsg
    description: iMessage/SMS CLI for listing chats, history, watch, and sending.
    metadata: {"openclaw":{"os":["darwin","linux"],"requires":{"bins":["imsg"]}}}
    ---
    
  4. Start a new session so the skills snapshot refreshes.
For iMessage specifically, you can also point channels.imessage.cliPath at an SSH wrapper (OpenClaw only needs stdio). See iMessage.

Do you have a Notion or HeyGen integration

Not built‑in today. Options:
  • Custom skill / plugin: best for reliable API access (Notion/HeyGen both have APIs).
  • Browser automation: works without code but is slower and more fragile.
If you want to keep context per client (agency workflows), a simple pattern is:
  • One Notion page per client (context + preferences + active work).
  • Ask the agent to fetch that page at the start of a session.
If you want a native integration, open a feature request or build a skill targeting those APIs. Install skills:
clawdhub install <skill-slug>
clawdhub update --all
ClawdHub installs into ./skills under your current directory (or falls back to your configured OpenClaw workspace); OpenClaw treats that as <workspace>/skills on the next session. For shared skills across agents, place them in ~/.openclaw/skills/<name>/SKILL.md. Some skills expect binaries installed via Homebrew; on Linux that means Linuxbrew (see the Homebrew Linux FAQ entry above). See Skills and ClawdHub.

How do I install the Chrome extension for browser takeover

Use the built-in installer, then load the unpacked extension in Chrome:
openclaw browser extension install
openclaw browser extension path
Then Chrome → chrome://extensions → enable “Developer mode” → “Load unpacked” → pick that folder. Full guide (including remote Gateway + security notes): Chrome extension.
If the Gateway runs on the same machine as Chrome (default setup), you usually do not need anything extra. If the Gateway runs elsewhere, run a node host on the browser machine so the Gateway can proxy browser actions. You still need to click the extension button on the tab you want to control (it doesn’t auto-attach).

Sandboxing and memory

Is there a dedicated sandboxing doc

Yes. See Sandboxing. For Docker-specific setup (full gateway in Docker or sandbox images), see Docker.

Can I keep DMs personal but make groups public (sandboxed) with one agent

Yes - if your private traffic is DMs and your public traffic is groups. Use agents.defaults.sandbox.mode: "non-main" so group/channel sessions (non-main keys) run in Docker, while the main DM session stays on-host. Then restrict what tools are available in sandboxed sessions via tools.sandbox.tools.
Setup walkthrough + example config: Groups: personal DMs + public groups. Key config reference: Gateway configuration.
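A hedged sketch of the two keys involved (the tool allowlist entries are examples; see the walkthrough above for a vetted config):
{
  agents: {
    defaults: {
      sandbox: { mode: "non-main" }   // group/channel (non-main) sessions run in Docker
    }
  },
  tools: {
    sandbox: {
      tools: ["web_search", "web_fetch"]   // example: limit sandboxed sessions to web tools
    }
  }
}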

How do I bind a host folder into the sandbox

Set agents.defaults.sandbox.docker.binds to ["host:path:mode"] (e.g., "/home/user/src:/src:ro"). Global + per-agent binds merge; per-agent binds are ignored when scope: "shared". Use :ro for anything sensitive and remember binds bypass the sandbox filesystem walls. See Sandboxing and Sandbox vs Tool Policy vs Elevated for examples and safety notes.
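Example with one read-only bind (paths are placeholders):
{
  agents: {
    defaults: {
      sandbox: {
        docker: {
          binds: ["/home/user/src:/src:ro"]   // host:path:mode; :ro keeps the bind read-only
        }
      }
    }
  }
}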

How does memory work

OpenClaw memory is just Markdown files in the agent workspace:
  • Daily notes in memory/YYYY-MM-DD.md
  • Curated long-term notes in MEMORY.md (main/private sessions only)
OpenClaw also runs a silent pre-compaction memory flush to remind the model to write durable notes before auto-compaction. This only runs when the workspace is writable (read-only sandboxes skip it). See Memory.

Memory keeps forgetting things. How do I make it stick

Ask the bot to write the fact to memory. Long-term notes belong in MEMORY.md, short-term context goes into memory/YYYY-MM-DD.md. This is still an area we are improving. It helps to remind the model to store memories; it will know what to do. If it keeps forgetting, verify the Gateway is using the same workspace on every run. Docs: Memory, Agent workspace.

Does semantic memory search require an OpenAI API key

Only if you use OpenAI embeddings. Codex OAuth covers chat/completions and does not grant embeddings access, so signing in with Codex (OAuth or the Codex CLI login) does not help for semantic memory search. OpenAI embeddings still need a real API key (OPENAI_API_KEY or models.providers.openai.apiKey).
If you don’t set a provider explicitly, OpenClaw auto-selects one when it can resolve an API key (auth profiles, models.providers.*.apiKey, or env vars). It prefers OpenAI if an OpenAI key resolves, otherwise Gemini if a Gemini key resolves. If you have a local model path configured and present, OpenClaw prefers local. If no key is available, memory search stays disabled until you configure it.
If you’d rather stay local, set memorySearch.provider = "local" (and optionally memorySearch.fallback = "none"). If you want Gemini embeddings, set memorySearch.provider = "gemini" and provide GEMINI_API_KEY (or memorySearch.remote.apiKey). We support OpenAI, Gemini, or local embedding models - see Memory for the setup details.
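Example: stay fully local, using the keys described above:
{
  memorySearch: {
    provider: "local",
    fallback: "none"   // optional: never fall back to a remote embedding provider
  }
}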

Does memory persist forever What are the limits

Memory files live on disk and persist until you delete them. The limit is your storage, not the model. The session context is still limited by the model context window, so long conversations can compact or truncate. That is why memory search exists - it pulls only the relevant parts back into context. Docs: Memory, Context.

Where things live on disk

Is all data used with OpenClaw saved locally

No - OpenClaw’s state is local, but external services still see what you send them.
  • Local by default: sessions, memory files, config, and workspace live on the Gateway host (~/.openclaw + your workspace directory).
  • Remote by necessity: messages you send to model providers (Anthropic/OpenAI/etc.) go to their APIs, and chat platforms (WhatsApp/Telegram/Slack/etc.) store message data on their servers.
  • You control the footprint: using local models keeps prompts on your machine, but channel traffic still goes through the channel’s servers.
Related: Agent workspace, Memory.

Where does OpenClaw store its data

Everything lives under $OPENCLAW_STATE_DIR (default: ~/.openclaw):
  • $OPENCLAW_STATE_DIR/openclaw.json - Main config (JSON5)
  • $OPENCLAW_STATE_DIR/credentials/oauth.json - Legacy OAuth import (copied into auth profiles on first use)
  • $OPENCLAW_STATE_DIR/agents/<agentId>/agent/auth-profiles.json - Auth profiles (OAuth + API keys)
  • $OPENCLAW_STATE_DIR/agents/<agentId>/agent/auth.json - Runtime auth cache (managed automatically)
  • $OPENCLAW_STATE_DIR/credentials/ - Provider state (e.g. whatsapp/<accountId>/creds.json)
  • $OPENCLAW_STATE_DIR/agents/ - Per-agent state (agentDir + sessions)
  • $OPENCLAW_STATE_DIR/agents/<agentId>/sessions/ - Conversation history & state (per agent)
  • $OPENCLAW_STATE_DIR/agents/<agentId>/sessions/sessions.json - Session metadata (per agent)
Legacy single‑agent path: ~/.openclaw/agent/* (migrated by openclaw doctor). Your workspace (AGENTS.md, memory files, skills, etc.) is separate and configured via agents.defaults.workspace (default: ~/.openclaw/workspace).

Where should AGENTS.md, SOUL.md, USER.md, and MEMORY.md live

These files live in the agent workspace, not ~/.openclaw.
  • Workspace (per agent): AGENTS.md, SOUL.md, IDENTITY.md, USER.md, MEMORY.md (or memory.md), memory/YYYY-MM-DD.md, optional HEARTBEAT.md.
  • State dir (~/.openclaw): config, credentials, auth profiles, sessions, logs, and shared skills (~/.openclaw/skills).
Default workspace is ~/.openclaw/workspace, configurable via:
{
  agents: { defaults: { workspace: "~/.openclaw/workspace" } }
}
If the bot “forgets” after a restart, confirm the Gateway is using the same workspace on every launch (and remember: remote mode uses the gateway host’s workspace, not your local laptop). Tip: if you want a durable behavior or preference, ask the bot to write it into AGENTS.md or MEMORY.md rather than relying on chat history. See Agent workspace and Memory.
Put your agent workspace in a private git repo and back it up somewhere private (for example a private GitHub repo). This captures memory + AGENTS/SOUL/USER files and lets you restore the assistant’s “mind” later. Do not commit anything under ~/.openclaw (credentials, sessions, tokens). If you need a full restore, back up both the workspace and the state directory separately (see the migration question above). Docs: Agent workspace.

How do I completely uninstall OpenClaw

See the dedicated guide: Uninstall.

Can agents work outside the workspace

Yes. The workspace is the default cwd and memory anchor, not a hard sandbox. Relative paths resolve inside the workspace, but absolute paths can access other host locations unless sandboxing is enabled. If you need isolation, use agents.defaults.sandbox or per‑agent sandbox settings. If you want a repo to be the default working directory, point that agent’s workspace to the repo root. The OpenClaw repo is just source code; keep the workspace separate unless you intentionally want the agent to work inside it. Example (repo as default cwd):
{
  agents: {
    defaults: {
      workspace: "~/Projects/my-repo"
    }
  }
}

I’m in remote mode: where is the session store

Session state is owned by the gateway host. If you’re in remote mode, the session store you care about is on the remote machine, not your local laptop. See Session management.

Config basics

What format is the config Where is it

OpenClaw reads an optional JSON5 config from $OPENCLAW_CONFIG_PATH (default: ~/.openclaw/openclaw.json):
$OPENCLAW_CONFIG_PATH
If the file is missing, it uses safe‑ish defaults (including a default workspace of ~/.openclaw/workspace).

I set gateway.bind to lan or tailnet and now nothing listens, and the UI says unauthorized

Non-loopback binds require auth. Configure gateway.auth.mode + gateway.auth.token (or use OPENCLAW_GATEWAY_TOKEN).
{
  gateway: {
    bind: "lan",
    auth: {
      mode: "token",
      token: "replace-me"
    }
  }
}
Notes:
  • gateway.remote.token is for remote CLI calls only; it does not enable local gateway auth.
  • The Control UI authenticates via connect.params.auth.token (stored in app/UI settings). Avoid putting tokens in URLs.

Why do I need a token on localhost now

The wizard generates a gateway token by default (even on loopback) so local WS clients must authenticate. This blocks other local processes from calling the Gateway. Paste the token into the Control UI settings (or your client config) to connect. If you really want open loopback, remove gateway.auth from your config. Doctor can generate a token for you any time: openclaw doctor --generate-gateway-token.

Do I have to restart after changing config

The Gateway watches the config and supports hot‑reload:
  • gateway.reload.mode: "hybrid" (default): hot‑apply safe changes, restart for critical ones
  • hot, restart, off are also supported
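Example:
{
  gateway: {
    reload: { mode: "hybrid" }   // hot-apply safe changes, restart for critical ones
  }
}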

How do I enable web search and web fetch

web_fetch works without an API key. web_search requires a Brave Search API key. Recommended: run openclaw configure --section web to store it in tools.web.search.apiKey. Environment alternative: set BRAVE_API_KEY for the Gateway process.
{
  tools: {
    web: {
      search: {
        enabled: true,
        apiKey: "BRAVE_API_KEY_HERE",
        maxResults: 5
      },
      fetch: {
        enabled: true
      }
    }
  }
}
Notes:
  • If you use allowlists, add web_search/web_fetch or group:web.
  • web_fetch is enabled by default (unless explicitly disabled).
  • Daemons read env vars from ~/.openclaw/.env (or the service environment).
Docs: Web tools.

How do I run a central Gateway with specialized workers across devices

The common pattern is one Gateway (e.g. Raspberry Pi) plus nodes and agents:
  • Gateway (central): owns channels (Signal/WhatsApp), routing, and sessions.
  • Nodes (devices): Macs/iOS/Android connect as peripherals and expose local tools (system.run, canvas, camera).
  • Agents (workers): separate brains/workspaces for special roles (e.g. “Hetzner ops”, “Personal data”).
  • Sub‑agents: spawn background work from a main agent when you want parallelism.
  • TUI: connect to the Gateway and switch agents/sessions.
Docs: Nodes, Remote access, Multi-Agent Routing, Sub-agents, TUI.

Can the OpenClaw browser run headless

Yes. It’s a config option:
{
  browser: { headless: true },
  agents: {
    defaults: {
      sandbox: { browser: { headless: true } }
    }
  }
}
Default is false (headful). Headless is more likely to trigger anti‑bot checks on some sites. See Browser. Headless uses the same Chromium engine and works for most automation (forms, clicks, scraping, logins). The main differences:
  • No visible browser window (use screenshots if you need visuals).
  • Some sites are stricter about automation in headless mode (CAPTCHAs, anti‑bot). For example, X/Twitter often blocks headless sessions.

How do I use Brave for browser control

Set browser.executablePath to your Brave binary (or any Chromium-based browser) and restart the Gateway. See the full config examples in Browser.

Remote gateways + nodes

How do commands propagate between Telegram the gateway and nodes

Telegram messages are handled by the gateway. The gateway runs the agent and only then calls nodes over the Gateway WebSocket when a node tool is needed: Telegram → Gateway → Agent → node.* → Node → Gateway → Telegram
Nodes don’t see inbound provider traffic; they only receive node RPC calls.

How can my agent access my computer if the Gateway is hosted remotely

Short answer: pair your computer as a node. The Gateway runs elsewhere, but it can call node.* tools (screen, camera, system) on your local machine over the Gateway WebSocket. Typical setup:
  1. Run the Gateway on the always‑on host (VPS/home server).
  2. Put the Gateway host + your computer on the same tailnet.
  3. Ensure the Gateway WS is reachable (tailnet bind or SSH tunnel).
  4. Open the macOS app locally and connect in Remote over SSH mode (or direct tailnet) so it can register as a node.
  5. Approve the node on the Gateway:
    openclaw nodes pending
    openclaw nodes approve <requestId>
    
No separate TCP bridge is required; nodes connect over the Gateway WebSocket. Security reminder: pairing a macOS node allows system.run on that machine. Only pair devices you trust, and review Security. Docs: Nodes, Gateway protocol, macOS remote mode, Security.

Tailscale is connected but I get no replies What now

Check the basics:
  • Gateway is running: openclaw gateway status
  • Gateway health: openclaw status
  • Channel health: openclaw channels status
Then verify auth and routing:
  • If you use Tailscale Serve, make sure gateway.auth.allowTailscale is set correctly.
  • If you connect via SSH tunnel, confirm the local tunnel is up and points at the right port.
  • Confirm your allowlists (DM or group) include your account.
Docs: Tailscale, Remote access, Channels.

Can two OpenClaw instances talk to each other local VPS

Yes. There is no built-in “bot-to-bot” bridge, but you can wire it up in a few reliable ways: Simplest: use a normal chat channel both bots can access (Telegram/Slack/WhatsApp). Have Bot A send a message to Bot B, then let Bot B reply as usual. CLI bridge (generic): run a script that calls the other Gateway with openclaw agent --message ... --deliver, targeting a chat where the other bot listens. If one bot is on a remote VPS, point your CLI at that remote Gateway via SSH/Tailscale (see Remote access). Example pattern (run from a machine that can reach the target Gateway):
openclaw agent --message "Hello from local bot" --deliver --channel telegram --reply-to <chat-id>
Tip: add a guardrail so the two bots do not loop endlessly (mention-only, channel allowlists, or a “do not reply to bot messages” rule). Docs: Remote access, Agent CLI, Agent send.

Do I need separate VPSes for multiple agents

No. One Gateway can host multiple agents, each with its own workspace, model defaults, and routing. That is the normal setup and it is much cheaper and simpler than running one VPS per agent. Use separate VPSes only when you need hard isolation (security boundaries) or very different configs that you do not want to share. Otherwise, keep one Gateway and use multiple agents or sub-agents.

Is there a benefit to using a node on my personal laptop instead of SSH from a VPS

Yes - nodes are the first‑class way to reach your laptop from a remote Gateway, and they unlock more than shell access. The Gateway runs on macOS/Linux (Windows via WSL2) and is lightweight (a small VPS or Raspberry Pi-class box is fine; 4 GB RAM is plenty), so a common setup is an always‑on host plus your laptop as a node.
  • No inbound SSH required. Nodes connect out to the Gateway WebSocket and use device pairing.
  • Safer execution controls. system.run is gated by node allowlists/approvals on that laptop.
  • More device tools. Nodes expose canvas, camera, and screen in addition to system.run.
  • Local browser automation. Keep the Gateway on a VPS, but run Chrome locally and relay control with the Chrome extension + a node host on the laptop.
SSH is fine for ad‑hoc shell access, but nodes are simpler for ongoing agent workflows and device automation. Docs: Nodes, Nodes CLI, Chrome extension.

Should I install on a second laptop or just add a node

If you only need local tools (screen/camera/exec) on the second laptop, add it as a node. That keeps a single Gateway and avoids duplicated config. Local node tools are currently macOS-only, but we plan to extend them to other OSes. Install a second Gateway only when you need hard isolation or two fully separate bots. Docs: Nodes, Nodes CLI, Multiple gateways.

Do nodes run a gateway service

No. Only one gateway should run per host unless you intentionally run isolated profiles (see Multiple gateways). Nodes are peripherals that connect to the gateway (iOS/Android nodes, or macOS “node mode” in the menubar app). For headless node hosts and CLI control, see Node host CLI. A full restart is required for gateway, discovery, and canvasHost changes.

Is there an API RPC way to apply config

Yes. config.apply validates + writes the full config and restarts the Gateway as part of the operation.

config.apply wiped my config. How do I recover and avoid this

config.apply replaces the entire config. If you send a partial object, everything else is removed. Recover:
  • Restore from backup (git or a copied ~/.openclaw/openclaw.json).
  • If you have no backup, re-run openclaw doctor and reconfigure channels/models.
  • If this was unexpected, file a bug and include your last known config or any backup.
  • A local coding agent can often reconstruct a working config from logs or history.
Avoid it:
  • Use openclaw config set for small changes.
  • Use openclaw configure for interactive edits.
Docs: Config, Configure, Doctor.

Whats a minimal sane config for a first install

{
  agents: { defaults: { workspace: "~/.openclaw/workspace" } },
  channels: { whatsapp: { allowFrom: ["+15555550123"] } }
}
This sets your workspace and restricts who can trigger the bot.

How do I set up Tailscale on a VPS and connect from my Mac

Minimal steps:
  1. Install + login on the VPS
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up
    
  2. Install + login on your Mac
    • Use the Tailscale app and sign in to the same tailnet.
  3. Enable MagicDNS (recommended)
    • In the Tailscale admin console, enable MagicDNS so the VPS has a stable name.
  4. Use the tailnet hostname
    • SSH: ssh user@your-vps.tailnet-xxxx.ts.net
    • Gateway WS: ws://your-vps.tailnet-xxxx.ts.net:18789
If you want the Control UI without SSH, use Tailscale Serve on the VPS:
openclaw gateway --tailscale serve
This keeps the gateway bound to loopback and exposes HTTPS via Tailscale. See Tailscale.

How do I connect a Mac node to a remote Gateway Tailscale Serve

Serve exposes the Gateway Control UI + WS. Nodes connect over the same Gateway WS endpoint. Recommended setup:
  1. Make sure the VPS + Mac are on the same tailnet.
  2. Use the macOS app in Remote mode (SSH target can be the tailnet hostname). The app will tunnel the Gateway port and connect as a node.
  3. Approve the node on the gateway:
    openclaw nodes pending
    openclaw nodes approve <requestId>
    
Docs: Gateway protocol, Discovery, macOS remote mode.

Env vars and .env loading

How does OpenClaw load environment variables

OpenClaw reads env vars from the parent process (shell, launchd/systemd, CI, etc.) and additionally loads:
  • .env from the current working directory
  • a global fallback .env from ~/.openclaw/.env (aka $OPENCLAW_STATE_DIR/.env)
Neither .env file overrides existing env vars. You can also define inline env vars in config (applied only if missing from the process env):
{
  env: {
    OPENROUTER_API_KEY: "sk-or-...",
    vars: { GROQ_API_KEY: "gsk-..." }
  }
}
See /environment for full precedence and sources.

I started the Gateway via the service and my env vars disappeared What now

Two common fixes:
  1. Put the missing keys in ~/.openclaw/.env so they’re picked up even when the service doesn’t inherit your shell env.
  2. Enable shell import (opt‑in convenience):
{
  env: {
    shellEnv: {
      enabled: true,
      timeoutMs: 15000
    }
  }
}
This runs your login shell and imports only missing expected keys (never overrides). Env var equivalents: OPENCLAW_LOAD_SHELL_ENV=1, OPENCLAW_SHELL_ENV_TIMEOUT_MS=15000.

I set COPILOT_GITHUB_TOKEN but models status shows “Shell env: off”. Why

openclaw models status reports whether shell env import is enabled. “Shell env: off” does not mean your env vars are missing - it just means OpenClaw won’t load your login shell automatically. If the Gateway runs as a service (launchd/systemd), it won’t inherit your shell environment. Fix by doing one of these:
  1. Put the token in ~/.openclaw/.env:
    COPILOT_GITHUB_TOKEN=...
    
  2. Or enable shell import (env.shellEnv.enabled: true).
  3. Or add it to your config env block (applies only if missing).
Then restart the gateway and recheck:
openclaw models status
Copilot tokens are read from COPILOT_GITHUB_TOKEN (also GH_TOKEN / GITHUB_TOKEN). See /concepts/model-providers and /environment.

Sessions & multiple chats

How do I start a fresh conversation

Send /new or /reset as a standalone message. See Session management.

Do sessions reset automatically if I never send /new

Yes. Sessions expire after session.idleMinutes (default 60). The next message starts a fresh session id for that chat key. This does not delete transcripts - it just starts a new session.
{
  session: {
    idleMinutes: 240
  }
}

Is there a way to make a team of OpenClaw instances one CEO and many agents

Yes, via multi-agent routing and sub-agents. You can create one coordinator agent and several worker agents with their own workspaces and models. That said, this is best seen as a fun experiment. It is token heavy and often less efficient than using one bot with separate sessions. The typical model we envision is one bot you talk to, with different sessions for parallel work. That bot can also spawn sub-agents when needed. Docs: Multi-agent routing, Sub-agents, Agents CLI.

Why did context get truncated mid-task, and how do I prevent it

Session context is limited by the model window. Long chats, large tool outputs, or many files can trigger compaction or truncation. What helps:
  • Ask the bot to summarize the current state and write it to a file.
  • Use /compact before long tasks, and /new when switching topics.
  • Keep important context in the workspace and ask the bot to read it back.
  • Use sub-agents for long or parallel work so the main chat stays smaller.
  • Pick a model with a larger context window if this happens often.

How do I completely reset OpenClaw but keep it installed

Use the reset command:
openclaw reset
Non-interactive full reset:
openclaw reset --scope full --yes --non-interactive
Then re-run onboarding:
openclaw onboard --install-daemon
Notes:
  • The onboarding wizard also offers Reset if it sees an existing config. See Wizard.
  • If you used profiles (--profile / OPENCLAW_PROFILE), reset each state dir (defaults are ~/.openclaw-<profile>).
  • Dev reset: openclaw gateway --dev --reset (dev-only; wipes dev config + credentials + sessions + workspace).

I’m getting “context too large” errors: how do I reset or compact

Use one of these:
  • Compact (keeps the conversation but summarizes older turns):
    /compact
    
    or /compact <instructions> to guide the summary.
  • Reset (fresh session ID for the same chat key):
    /new
    /reset
    
If it keeps happening:
  • Enable or tune session pruning (agents.defaults.contextPruning) to trim old tool output.
  • Use a model with a larger context window.
Docs: Compaction, Session pruning, Session management.
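A sketch of where the pruning knob lives (no fields are shown here; see Session pruning for the real schema):
{
  agents: {
    defaults: {
      contextPruning: { /* see Session pruning for the available fields */ }
    }
  }
}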

Why am I seeing “LLM request rejected: messages.N.content.X.tool_use.input: Field required”

This is a provider validation error: the model emitted a tool_use block without the required input. It usually means the session history is stale or corrupted (often after long threads or a tool/schema change). Fix: start a fresh session with /new (standalone message).

Why am I getting heartbeat messages every 30 minutes

Heartbeats run every 30m by default. Tune or disable them:
{
  agents: {
    defaults: {
      heartbeat: {
        every: "2h"   // or "0m" to disable
      }
    }
  }
}
If HEARTBEAT.md exists but is effectively empty (only blank lines and markdown headers like # Heading), OpenClaw skips the heartbeat run to save API calls. If the file is missing, the heartbeat still runs and the model decides what to do. Per-agent overrides use agents.list[].heartbeat. Docs: Heartbeat.
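A hedged per-agent override sketch (the agents.list[].heartbeat path comes from this FAQ; the id field and interval are illustrative):
```
{
  agents: {
    list: [
      // only this agent's heartbeat cadence changes
      { id: "main", heartbeat: { every: "4h" } }
    ]
  }
}
```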

Do I need to add a bot account to a WhatsApp group

No. OpenClaw runs on your own account, so if you’re in the group, OpenClaw can see it. By default, group replies are blocked until you allow senders (groupPolicy: "allowlist"). If you want only you to be able to trigger group replies:
{
  channels: {
    whatsapp: {
      groupPolicy: "allowlist",
      groupAllowFrom: ["+15551234567"]
    }
  }
}

How do I get the JID of a WhatsApp group

Option 1 (fastest): tail logs and send a test message in the group:
openclaw logs --follow --json
Look for chatId (or from) ending in @g.us, like: 1234567890-1234567890@g.us. Option 2 (if already configured/allowlisted): list groups from config:
openclaw directory groups list --channel whatsapp
Docs: WhatsApp, Directory, Logs.

Why doesn't OpenClaw reply in a group

Two common causes:
  • Mention gating is on (default). You must @mention the bot (or match mentionPatterns).
  • You configured channels.whatsapp.groups without "*" and the group isn’t allowlisted.
See Groups and Group messages.

Do groups/threads share context with DMs

Direct chats collapse to the main session by default. Groups/channels have their own session keys, and Telegram topics / Discord threads are separate sessions. See Groups and Group messages.

How many workspaces and agents can I create

No hard limits. Dozens (even hundreds) are fine, but watch for:
  • Disk growth: sessions + transcripts live under ~/.openclaw/agents/<agentId>/sessions/.
  • Token cost: more agents means more concurrent model usage.
  • Ops overhead: per-agent auth profiles, workspaces, and channel routing.
Tips:
  • Keep one active workspace per agent (agents.defaults.workspace).
  • Prune old sessions (delete JSONL or store entries) if disk grows.
  • Use openclaw doctor to spot stray workspaces and profile mismatches.

Can I run multiple bots or chats at the same time (e.g. Slack), and how should I set that up

Yes. Use Multi‑Agent Routing to run multiple isolated agents and route inbound messages by channel/account/peer. Slack is supported as a channel and can be bound to specific agents. Browser access is powerful but not “do anything a human can” - anti‑bot, CAPTCHAs, and MFA can still block automation. For the most reliable browser control, use the Chrome extension relay on the machine that runs the browser (and keep the Gateway anywhere). Best‑practice setup:
  • Always‑on Gateway host (VPS/Mac mini).
  • One agent per role (bindings).
  • Slack channel(s) bound to those agents.
  • Local browser via extension relay (or a node) when needed.
Docs: Multi‑Agent Routing, Slack, Browser, Chrome extension, Nodes.

Models: defaults, selection, aliases, switching

What is the default model

OpenClaw’s default model is whatever you set as:
agents.defaults.model.primary
Models are referenced as provider/model (example: anthropic/claude-opus-4-5). If you omit the provider, OpenClaw currently assumes anthropic as a temporary deprecation fallback - but you should still explicitly set provider/model.
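For example (model ID illustrative):
```
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" }
    }
  }
}
```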

What model do you recommend

Recommended default: anthropic/claude-opus-4-5.
Good alternative: anthropic/claude-sonnet-4-5.
Reliable (less character): openai/gpt-5.2 - nearly as good as Opus, just less personality.
Budget: zai/glm-4.7.
MiniMax M2.1 has its own docs: MiniMax and Local models. Rule of thumb: use the best model you can afford for high-stakes work, and a cheaper model for routine chat or summaries. You can route models per agent and use sub-agents to parallelize long tasks (each sub-agent consumes tokens). See Models and Sub-agents. Strong warning: weaker/over-quantized models are more vulnerable to prompt injection and unsafe behavior. See Security. More context: Models.

Can I use self-hosted models (llama.cpp, vLLM, Ollama)

Yes. If your local server exposes an OpenAI-compatible API, you can point a custom provider at it. Ollama is supported directly and is the easiest path. Security note: smaller or heavily quantized models are more vulnerable to prompt injection. We strongly recommend large models for any bot that can use tools. If you still want small models, enable sandboxing and strict tool allowlists. Docs: Ollama, Local models, Model providers, Security, Sandboxing.

How do I switch models without wiping my config

Use model commands or edit only the model fields. Avoid full config replaces. Safe options:
  • /model in chat (quick, per-session)
  • openclaw models set ... (updates just model config)
  • openclaw configure --section models (interactive)
  • edit agents.defaults.model in ~/.openclaw/openclaw.json
Avoid config.apply with a partial object unless you intend to replace the whole config. If you did overwrite config, restore from backup or re-run openclaw doctor to repair. Docs: Models, Configure, Config, Doctor.

What do OpenClaw, Flawd, and Krill use for models

  • OpenClaw + Flawd: Anthropic Opus (anthropic/claude-opus-4-5) - see Anthropic.
  • Krill: MiniMax M2.1 (minimax/MiniMax-M2.1) - see MiniMax.

How do I switch models on the fly without restarting

Use the /model command as a standalone message:
/model sonnet
/model haiku
/model opus
/model gpt
/model gpt-mini
/model gemini
/model gemini-flash
You can list available models with /model, /model list, or /model status. /model (and /model list) shows a compact, numbered picker. Select by number:
/model 3
You can also force a specific auth profile for the provider (per session):
/model opus@anthropic:default
/model opus@anthropic:work
Tip: /model status shows which agent is active, which auth-profiles.json file is being used, and which auth profile will be tried next. It also shows the configured provider endpoint (baseUrl) and API mode (api) when available.

How do I unpin a profile I set with @profile

Re-run /model without the @profile suffix:
/model anthropic/claude-opus-4-5
If you want to return to the default, pick it from /model (or send /model <default provider/model>). Use /model status to confirm which auth profile is active.

Can I use GPT 5.2 for daily tasks and Codex 5.2 for coding

Yes. Set one as default and switch as needed:
  • Quick switch (per session): /model gpt-5.2 for daily tasks, /model gpt-5.2-codex for coding.
  • Default + switch: set agents.defaults.model.primary to openai-codex/gpt-5.2, then switch to openai-codex/gpt-5.2-codex when coding (or the other way around).
  • Sub-agents: route coding tasks to sub-agents with a different default model.
See Models and Slash commands.

Why do I see Model is not allowed and then no reply

If agents.defaults.models is set, it becomes the allowlist for /model and any session overrides. Choosing a model that isn’t in that list returns:
Model "provider/model" is not allowed. Use /model to list available models.
That error is returned instead of a normal reply. Fix: add the model to agents.defaults.models, remove the allowlist, or pick a model from /model list.
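For example, to extend the allowlist (model IDs illustrative):
```
{
  agents: {
    defaults: {
      models: {
        "anthropic/claude-opus-4-5": {},
        "openai/gpt-5.2": {}
      }
    }
  }
}
```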

Why do I see Unknown model minimax/MiniMax-M2.1

This means the provider isn’t configured (no MiniMax provider config or auth profile was found), so the model can’t be resolved. A fix for this detection is in 2026.1.12 (unreleased at the time of writing). Fix checklist:
  1. Upgrade to 2026.1.12 (or run from source main), then restart the gateway.
  2. Make sure MiniMax is configured (wizard or JSON), or that a MiniMax API key exists in env/auth profiles so the provider can be injected.
  3. Use the exact model id (case‑sensitive): minimax/MiniMax-M2.1 or minimax/MiniMax-M2.1-lightning.
  4. Run:
    openclaw models list
    
    and pick from the list (or /model list in chat).
See MiniMax and Models.

Can I use MiniMax as my default and OpenAI for complex tasks

Yes. Use MiniMax as the default and switch models per session when needed. Fallbacks are for errors, not “hard tasks,” so use /model or a separate agent. Option A: switch per session
{
  env: { MINIMAX_API_KEY: "sk-...", OPENAI_API_KEY: "sk-..." },
  agents: {
    defaults: {
      model: { primary: "minimax/MiniMax-M2.1" },
      models: {
        "minimax/MiniMax-M2.1": { alias: "minimax" },
        "openai/gpt-5.2": { alias: "gpt" }
      }
    }
  }
}
Then:
/model gpt
Option B: separate agents
  • Agent A default: MiniMax
  • Agent B default: OpenAI
  • Route by agent, or switch with /agent
Docs: Models, Multi-Agent Routing, MiniMax, OpenAI.

Are opus, sonnet, and gpt built-in shortcuts

Yes. OpenClaw ships default shortcut aliases (they only apply if the model exists in agents.defaults.models):
  • opus → anthropic/claude-opus-4-5
  • sonnet → anthropic/claude-sonnet-4-5
  • gpt → openai/gpt-5.2
  • gpt-mini → openai/gpt-5-mini
  • gemini → google/gemini-3-pro-preview
  • gemini-flash → google/gemini-3-flash-preview
If you define a custom alias with the same name, your value overrides the default.

How do I define/override model shortcuts (aliases)

Aliases come from agents.defaults.models.<modelId>.alias. For example:
{
  agents: {
    defaults: {
      model: { primary: "anthropic/claude-opus-4-5" },
      models: {
        "anthropic/claude-opus-4-5": { alias: "opus" },
        "anthropic/claude-sonnet-4-5": { alias: "sonnet" },
        "anthropic/claude-haiku-4-5": { alias: "haiku" }
      }
    }
  }
}
Then /model sonnet (or /<alias> where supported) resolves to that model ID.

How do I add models from other providers like OpenRouter or ZAI

OpenRouter (per-token billing; many models):
{
  agents: {
    defaults: {
      model: { primary: "openrouter/anthropic/claude-sonnet-4-5" },
      models: { "openrouter/anthropic/claude-sonnet-4-5": {} }
    }
  },
  env: { OPENROUTER_API_KEY: "sk-or-..." }
}
Z.AI (GLM models):
{
  agents: {
    defaults: {
      model: { primary: "zai/glm-4.7" },
      models: { "zai/glm-4.7": {} }
    }
  },
  env: { ZAI_API_KEY: "..." }
}
If you reference a provider/model but are missing the required provider key, you get a runtime auth error (for example No API key found for provider "zai").

No API key found for provider after adding a new agent

This usually means the new agent's auth store is empty. Auth is per-agent and lives in:
~/.openclaw/agents/<agentId>/agent/auth-profiles.json
Fix options:
  • Run openclaw agents add <id> and configure auth in the wizard.
  • Or copy auth-profiles.json from the main agent's agentDir into the new agent's agentDir (sketch below).
Do not reuse an agentDir across multiple agents; that causes auth/session conflicts.
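A copy sketch for the second option (agent IDs illustrative; adjust paths to your layout):
```
cp ~/.openclaw/agents/main/agent/auth-profiles.json \
   ~/.openclaw/agents/helper/agent/auth-profiles.json
```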

Model failover and “All models failed”

How does failover work

Failover happens in two steps:
  1. Auth profile rotation within the same provider
  2. Model fallback to the next model in agents.defaults.model.fallbacks
Failed profiles go into cooldown (exponential backoff), so OpenClaw keeps responding even when a provider is rate-limited or briefly down.
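A hedged config sketch (assuming agents.defaults.model.fallbacks takes an ordered list of provider/model IDs; the models are illustrative):
```
{
  agents: {
    defaults: {
      model: {
        primary: "anthropic/claude-opus-4-5",
        // tried in order after profile rotation is exhausted
        fallbacks: ["openai/gpt-5.2", "zai/glm-4.7"]
      }
    }
  }
}
```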

What does this error mean

No credentials found for profile "anthropic:default"
It means the system tried to use the auth profile ID anthropic:default but found no credentials for it in the expected auth store.

Fix checklist for No credentials found for profile anthropic:default

  • Confirm where auth profiles live (new vs legacy path)
    • Current: ~/.openclaw/agents/<agentId>/agent/auth-profiles.json
    • Legacy: ~/.openclaw/agent/* (migrated by openclaw doctor)
  • Confirm the Gateway actually loads your environment variables
    • If you set ANTHROPIC_API_KEY in your shell but run the Gateway via systemd/launchd, it may not inherit it. Put it in ~/.openclaw/.env or enable env.shellEnv.
  • Make sure you are editing the right agent
    • A multi-agent setup means multiple auth-profiles.json files.
  • Quick model/auth status check
    • Use openclaw models status to see configured models and provider auth state.
Fix checklist for No credentials found for profile anthropic

This means the run was pinned to a specific Anthropic auth profile, but the Gateway can't find it in its auth store.
  • Using the setup-token flow
    • Run claude setup-token, then paste it with openclaw models auth setup-token --provider anthropic.
    • If the token was created on another machine, use openclaw models auth paste-token --provider anthropic.
  • If you want to switch to an API key
    • Set ANTHROPIC_API_KEY in ~/.openclaw/.env on the Gateway host.
    • Clear any pinned order that forces the missing profile:
      openclaw models auth order clear --provider anthropic
      
  • Confirm you are running commands on the Gateway host
    • In remote mode, auth profiles live on the Gateway machine, not your laptop.

Why did it also try Google Gemini and fail

If your model config includes Google Gemini as a fallback (or you switched to a Gemini shortcut), OpenClaw will try it during fallback. Without Google credentials configured, you'll see No API key found for provider "google". Fix: provide Google auth, or remove/avoid Google models in agents.defaults.model.fallbacks / aliases so fallback can't route there.

LLM request rejected: thinking signature required (Google Antigravity)

Cause: the session history contains thinking blocks without signatures (usually from aborted or partially streamed output). Google Antigravity requires signed thinking blocks. Fix: OpenClaw now strips unsigned thinking blocks for Google Antigravity Claude. If it still happens, start a fresh session or set /thinking off for that agent.

Auth profiles: what they are and how to manage them

Related: /concepts/oauth (OAuth flows, token storage, multi-account patterns)

What is an auth profile

An auth profile is a named, per-provider credential record (OAuth or API key). Profiles live in:
~/.openclaw/agents/<agentId>/agent/auth-profiles.json

What are typical profile IDs

OpenClaw uses provider-prefixed IDs, for example:
  • anthropic:default (common when there's no email identity)
  • anthropic:<email> for OAuth identities
  • Custom IDs you define (for example anthropic:work)

Can I control which auth profile is tried first

Yes. The config supports optional per-profile metadata and a per-provider order (auth.order.<provider>). It stores no secrets; it only maps IDs to provider/mode and sets the rotation order. OpenClaw may temporarily skip profiles that are in a short cooldown (rate limit/timeout/auth failure) or disabled longer-term (billing/quota). To inspect this, run openclaw models status --json and check auth.unusableProfiles. Tuning knobs: auth.cooldowns.billingBackoffHours*. You can also set a per-agent order override via the CLI (stored in that agent's auth-profiles.json):
# Defaults to the configured default agent (omit --agent)
openclaw models auth order get --provider anthropic

# Lock rotation to a single profile (only try this one)
openclaw models auth order set --provider anthropic anthropic:default

# Or set an explicit order (fallback within provider)
openclaw models auth order set --provider anthropic anthropic:work anthropic:default

# Clear override (fall back to config auth.order / round-robin)
openclaw models auth order clear --provider anthropic
To target a specific agent:
openclaw models auth order set --provider anthropic --agent main anthropic:default

OAuth vs API key: what's the difference

OpenClaw supports both:
  • OAuth typically rides subscription access (where applicable).
  • API keys use per-token billing.
The wizard explicitly supports Anthropic setup-token and OpenAI Codex OAuth, and can store API keys for you.

Gateway: ports, “already running”, and remote mode

What port does the Gateway use

gateway.port controls the single multiplexed port for WebSocket + HTTP (Control UI, hooks, and so on). Precedence:
--port > OPENCLAW_GATEWAY_PORT > gateway.port > default 18789
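For example, to move it to another port (value illustrative):
```
# one-off foreground run
openclaw gateway --port 19001

# or via the environment
OPENCLAW_GATEWAY_PORT=19001 openclaw gateway run
```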

Why does openclaw gateway status say Runtime running but RPC probe failed

Because "running" reflects the supervisor's view (launchd/systemd/schtasks), while the RPC probe is the CLI actually connecting to the Gateway WebSocket and calling status. Run openclaw gateway status and focus on these lines:
  • Probe target: (the URL the probe actually used)
  • Listening: (where the port is really bound)
  • Last gateway error: (the usual root cause when the process is alive but the port isn't listening)

Why does openclaw gateway status show Config (cli) and Config (service) as different

You're editing one config while the service runs another (usually a --profile / OPENCLAW_STATE_DIR mismatch). Fix:
openclaw gateway install --force
Run that command from the same --profile / environment you want the service to use.

What does another gateway instance is already listening mean

OpenClaw implements its run lock by binding the WebSocket listener immediately at startup (default ws://127.0.0.1:18789). If the bind fails with EADDRINUSE, it throws GatewayLockError: another instance is already listening. Fix: stop the other instance, free the port, or run with openclaw gateway --port <port>.

How do I run OpenClaw in remote mode client connects to a Gateway elsewhere

Set gateway.mode: "remote" and point it at the remote WebSocket URL, with optional token/password:
{
  gateway: {
    mode: "remote",
    remote: {
      url: "ws://gateway.tailnet:18789",
      token: "your-token",
      password: "your-password"
    }
  }
}
Notes:
  • openclaw gateway only starts when gateway.mode is local (or you pass an override flag).
  • The macOS app watches the config file and switches modes live when the value changes.

The Control UI says unauthorized or keeps reconnecting. What now

Your Gateway has auth enabled (gateway.auth.*), but the UI isn't sending a matching token/password. Facts (from the code):
  • The Control UI stores the token in browser localStorage under openclaw.control.settings.v1.
  • The UI can do a one-time import of ?token=... (and/or ?password=...), then removes it from the URL.
Fix:
  • Fastest: openclaw dashboard (prints and copies the tokenized link and tries to open it; headless setups get an SSH hint).
  • No token yet? openclaw doctor --generate-gateway-token
  • Remote? Tunnel first: ssh -N -L 18789:127.0.0.1:18789 user@host, then open http://127.0.0.1:18789/?token=...
  • Set gateway.auth.token (or OPENCLAW_GATEWAY_TOKEN) on the Gateway host.
  • Paste the same token in the Control UI settings (or refresh with a one-time ?token=... link).
  • Still stuck? Run openclaw status --all and see Troubleshooting. Dashboard has the auth details.

I set gateway.bind tailnet but it can't bind / nothing listens

The tailnet bind picks a Tailscale IP (100.64.0.0/10) from your interfaces. If the machine isn't on Tailscale (or the interface is down), there is no address to bind. Fix:
  • Bring up Tailscale on that host (so it gets a 100.x address), or
  • Switch to gateway.bind: "loopback" / "lan".
Note: tailnet is explicit; auto prefers loopback. Use gateway.bind: "tailnet" when you want a tailnet-only bind.

Can I run multiple Gateways on the same host

Usually you don't need to: one Gateway can run multiple messaging channels and agents. Multiple Gateways are only recommended for redundancy (for example a rescue bot) or hard isolation. You can, but each instance must be isolated:
  • OPENCLAW_CONFIG_PATH (per-instance config)
  • OPENCLAW_STATE_DIR (per-instance state)
  • agents.defaults.workspace (workspace isolation)
  • gateway.port (unique port)
Quick setup (recommended):
  • Use openclaw --profile <name> … per instance (auto-creates ~/.openclaw-<name>).
  • Set a unique gateway.port in each profile's config (or pass --port for manual runs).
  • Install a per-profile service: openclaw --profile <name> gateway install.
Profiles also suffix the service name (bot.molt.<profile>; legacy com.openclaw.*, openclaw-gateway-<profile>.service, OpenClaw Gateway (<profile>)). Full guide: Multiple gateways.
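For example, a second instance under its own profile (profile name illustrative):
```
openclaw --profile rescue gateway install
openclaw --profile rescue gateway status
```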

What does invalid handshake code 1008 mean

The Gateway is a WebSocket server that expects the first message to be a connect frame. Anything else closes the connection with code 1008 (policy violation). Common causes:
  • You opened an HTTP URL (http://...) in a browser instead of using a WS client.
  • Wrong port or path.
  • A proxy/tunnel stripped auth headers or sent non-Gateway requests.
Quick fixes:
  1. Use a WS URL: ws://<host>:18789 (or wss://... over HTTPS).
  2. Don't open the WS port in a plain browser tab.
  3. With auth enabled, include the token/password in the connect frame.
If you use the CLI or TUI, the URL should look like:
openclaw tui --url ws://<host>:18789 --token <token>
Protocol details: Gateway protocol.

Logging and debugging

Where are logs

File logs (structured):
/tmp/openclaw/openclaw-YYYY-MM-DD.log
You can set a stable path via logging.file. The file log level is controlled by logging.level; console verbosity by --verbose and logging.consoleLevel (config sketch below). Fastest way to follow logs:
openclaw logs --follow
Service/supervisor logs (when the Gateway runs via launchd/systemd):
  • macOS: $OPENCLAW_STATE_DIR/logs/gateway.log and gateway.err.log (default ~/.openclaw/logs/...; profiles use ~/.openclaw-<profile>/logs/...)
  • Linux: journalctl --user -u openclaw-gateway[-<profile>].service -n 200 --no-pager
  • Windows: schtasks /Query /TN "OpenClaw Gateway (<profile>)" /V /FO LIST
See Troubleshooting for more.
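A config sketch for stable file logging (the logging.file, logging.level, and logging.consoleLevel keys come from this FAQ; the values are illustrative):
```
{
  logging: {
    file: "~/.openclaw/logs/openclaw.log",
    level: "debug",
    consoleLevel: "info"
  }
}
```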

How do I start/stop/restart the Gateway service

Use the gateway helpers:
openclaw gateway status
openclaw gateway restart
If you run the gateway manually, openclaw gateway --force can take over the port. See Gateway.

I closed my terminal on Windows; how do I restart OpenClaw

Windows has two install modes. **1) WSL2 (recommended):** the Gateway runs inside Linux. Open PowerShell, enter WSL, then restart:
wsl
openclaw gateway status
openclaw gateway restart
If you never installed the service, start it in the foreground:
openclaw gateway run
**2) Native Windows (not recommended):** the Gateway runs directly on Windows. Open PowerShell and run:
openclaw gateway status
openclaw gateway restart
If you were running it manually (no service), use:
openclaw gateway run
Docs: Windows (WSL2), Gateway service runbook

The Gateway is up but replies never arrive. What should I check

Run a quick health pass first:
openclaw status
openclaw models status
openclaw channels status
openclaw logs --follow
Common causes:
  • Model auth isn't loaded on the Gateway host (check models status).
  • Channel pairing/allowlists are blocking replies (check channel config + logs).
  • WebChat/Dashboard is missing the right token.
If you're in remote mode, confirm the tunnel/Tailscale link is up and the Gateway WebSocket is reachable. Docs: Channels, Troubleshooting, Remote access

Disconnected from gateway with no reason given. What now

This usually means the UI lost its WebSocket connection. Check:
  1. Is the Gateway running? openclaw gateway status
  2. Is it healthy? openclaw status
  3. Does the UI have the right token? openclaw dashboard
  4. If remote, is the tunnel/Tailscale link up?
Then follow the logs:
openclaw logs --follow
Docs: Dashboard, Remote access, Troubleshooting

Telegram setMyCommands fails with network errors. What should I check

Check logs and channel status first:
openclaw channels status
openclaw channels logs --channel telegram
If you're on a VPS or behind a proxy, confirm outbound HTTPS is allowed and DNS resolves. If the Gateway is remote, make sure you're reading logs on the Gateway host. Docs: Telegram, Channel troubleshooting

TUI shows no output. What should I check

First confirm the Gateway is reachable and the agent can run:
openclaw status
openclaw models status
openclaw logs --follow
Use /status inside the TUI to see the current state. If you expect replies in a chat channel, make sure delivery is on (/deliver on). Docs: TUI, Slash commands

How do I completely stop then start the Gateway

If you installed the service:
openclaw gateway stop
openclaw gateway start
This stops/starts the supervised service (launchd on macOS, systemd on Linux). Use it when the Gateway runs in the background as a daemon. If you run in the foreground, stop with Ctrl-C, then:
openclaw gateway run
Docs: Gateway service runbook

ELI5 openclaw gateway restart vs openclaw gateway

  • openclaw gateway restart: restarts the background service (launchd/systemd).
  • openclaw gateway: runs the gateway in the foreground in your terminal.
If you installed the service, use the gateway commands; for a one-off foreground run, use openclaw gateway.

What's the fastest way to get more details when something fails

--verbose 启动 Gateway 以获得更多控制台信息,然后查看日志文件以排查频道 auth、模型路由和 RPC 错误。

Media & attachments

My skill generated an image/PDF but nothing was sent

When the agent sends an attachment, it must include a MEDIA:<path-or-url> line in the message (on its own line); see OpenClaw assistant setup. Or send via the Agent send CLI:
openclaw message send --target +15555550123 --message "Here you go" --media /path/to/file.png
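For the MEDIA: form, an agent reply might look like this (path illustrative):
```
Here is the chart you asked for.
MEDIA:/tmp/chart.png
```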
Also check:
  • The target channel supports outbound media and isn't blocked by an allowlist.
  • The file size is within the provider's limits (images are scaled down to at most 2048px).
Docs: Images

Security and access control

Is it safe to expose OpenClaw to inbound DMs

Treat inbound DMs as untrusted input. The defaults are designed to reduce risk:
  • Channels that support DMs default to pairing:
    • Unknown senders get a pairing code; the bot does not process their messages.
    • Approve with openclaw pairing approve <channel> <code>.
    • Pending requests are capped at 3 per channel; if you didn't get a code, check openclaw pairing list <channel>.
  • Fully open DMs require an explicit opt-in (dmPolicy: "open" plus an allowlist of "*").
Run openclaw doctor to flag risky DM policies.

Is prompt injection only a concern for public bots

No. Prompt injection is about untrusted content, not just who can DM the bot. If your assistant reads external content (web search/fetch, browser pages, email, documents, attachments, pasted logs), that content can carry instructions that try to hijack the model. It can happen even if you are the only sender. The risk is greatest with tools enabled: the model can be tricked into leaking context or calling tools on your behalf. To shrink the blast radius:
  • Use a read-only or tool-less "reader" agent to summarize untrusted content
  • Disable web_search / web_fetch / browser for tool-enabled agents
  • Enable sandboxing and strict tool allowlists
Details: Security.

Should my bot have its own email, GitHub account, or phone number

For most setups, yes. Isolating the bot in its own accounts/phone number limits the blast radius when something goes wrong and makes it easier to rotate credentials or revoke access without touching your personal accounts. Start small: grant only the tools and accounts it actually needs, then expand. Docs: Security, Pairing

Can I give it autonomy over my text messages and is that safe

We don't recommend full autonomy over your personal messages. Safer patterns:
  • Keep DMs in pairing mode or on a strict allowlist.
  • If it sends messages on your behalf, use a separate number or account.
  • Have it draft first, then send after you approve.
If you want to experiment, use a dedicated account and keep it isolated. See Security.

Can I use cheaper models for personal assistant tasks

Yes, provided that agent is chat-only and its input is trusted. Smaller models are easier to hijack with injected instructions, so they're a poor fit for tool-enabled agents or for reading untrusted content. If you must use a small model, lock down tools and run it in a sandbox. See Security.

I ran /start in Telegram but didn't get a pairing code

A pairing code is only sent when an unknown sender messages you and dmPolicy: "pairing" is enabled. A bare /start does not generate a code. Check pending requests:
openclaw pairing list telegram
For immediate access, allowlist your sender id or set dmPolicy: "open" for that account.

WhatsApp: will it message my contacts, and how does pairing work

No. WhatsApp's default DM policy is pairing. Unknown senders only get a pairing code; their messages are not processed. OpenClaw only replies to chats it receives, or to sends you explicitly trigger. Approve pairing:
openclaw pairing approve whatsapp <code>
Check pending requests:
openclaw pairing list whatsapp
The phone number prompt in the wizard sets up your allowlist/owner so your own DMs are allowed. It is not used for automatic sends. If you run on your personal WhatsApp number, use that number and enable channels.whatsapp.selfChatMode.

Chat commands, aborting tasks, and “it won’t stop”

How do I stop internal system messages from showing in chat

Most internal or tool messages only show up when verbose or reasoning is enabled for that session. Fix it in the chat where they appear:
/verbose off
/reasoning off
If it's still noisy, check the session settings in the Control UI and set verbose to inherit. Also make sure you're not using a bot profile with verboseDefault set to on in the config. Docs: Thinking and verbose, Security

How do I stop/cancel a running task

Send any of these as a standalone message (no slash):
stop
abort
esc
wait
exit
interrupt
These are abort trigger words (not slash commands). For background processes (from the exec tool), you can ask the agent to run:
process action:kill sessionId:XXX
For the slash command overview, see Slash commands. Most commands must start with / and be sent as a standalone message, but a few shortcuts (like /status) also work inline for allowed senders.

How do I send a Discord message from Telegram (Cross-context messaging denied)

OpenClaw blocks cross-provider messaging by default. If a tool call is bound to Telegram, it won't send to Discord unless you explicitly allow it. To enable cross-provider messaging for agents:
{
  agents: {
    defaults: {
      tools: {
        message: {
          crossContext: {
            allowAcrossProviders: true,
            marker: { enabled: true, prefix: "[from {channel}] " }
          }
        }
      }
    }
  }
}
Restart the gateway after editing the config. To scope it to a single agent, put the block under agents.list[].tools.message instead.

Why does it feel like the bot ignores rapid-fire messages

The queue mode decides how new messages interact with an in-flight run. Switch modes with /queue:
  • steer - new messages redirect the current task
  • followup - process messages one at a time
  • collect - batch them up and reply once (default)
  • steer-backlog - steer first, then work through the backlog
  • interrupt - abort the current run and start over
You can also add options for followup mode, such as debounce:2s cap:25 drop:summarize (example below).
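For example (assuming the options are passed inline after the mode, per the syntax above):
```
/queue collect
/queue followup debounce:2s cap:25 drop:summarize
```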

Answer the exact question from the screenshot/chat log

Q: "What's the default model for Anthropic with an API key?" A: In OpenClaw, credentials and model selection are separate. Setting ANTHROPIC_API_KEY (or storing an Anthropic API key in auth profiles) only enables authentication; the actual default model is whatever you configure in agents.defaults.model.primary (for example anthropic/claude-sonnet-4-5 or anthropic/claude-opus-4-5). If you see No credentials found for profile "anthropic:default", the Gateway can't find Anthropic credentials in the expected auth-profiles.json for the agent that's running.
Still stuck? Ask in Discord or open a GitHub discussion.