🔥 Most Common Questions: TOP 9

Q: What is OpenClaw?
A: An open-source personal AI assistant platform with 321k+ GitHub stars, supporting local deployment and multi-platform integration.

Q: How do I install it on Windows?
A: One-line install: iwr -useb https://openclaw.ai/install.ps1 | iex

Q: How do I get a DeepSeek API key?
A: Register at platform.deepseek.com and create a key; typical cost is about 5-50 CNY/month.

Q: How do I connect Feishu?
A: Create a Feishu app, configure permissions, set up event subscriptions, and copy the credentials.

Q: What are the security risks?
A: 82+ known vulnerabilities have been fixed. Upgrade to the latest version (2026.3.13) and use Docker isolation.

Q: How do I upgrade OpenClaw?
A: One-line upgrade: openclaw update. Upgrading to the latest version is recommended.

Q: What's new in the latest version?
A: Version 2026.3.13: security updates, Chrome DevTools MCP, performance optimization, and bug fixes.

Q: How do model costs compare?
A: DeepSeek is the cheapest at 5-30 CNY/month, saving 73%-96% versus ChatGPT Plus.

Q: The Gateway won't start?
A: Check for port conflicts, view the logs, run diagnostics, and try a different port.
💡 1. Basic Concepts

Q1: What is OpenClaw?

A: OpenClaw (formerly Clawdbot and Moltbot; community nickname "Crawfish") is an open-source personal AI assistant platform that gained rapid popularity on GitHub in early 2026. As of March 14, 2026, when version 2026.3.13 was released, it had reached 321k+ stars, surpassing React and the Linux kernel to become one of the fastest-growing open-source projects in history. It counts over 3 million deployments worldwide, growing by nearly 1 million per month, with over 150k users in China. OpenClaw runs locally on the user's own device and communicates through platforms such as WhatsApp, Telegram, Discord, Feishu, and DingTalk, handling everything from simple Q&A to complex automated task execution.

Core positioning: OpenClaw is "your own personal AI assistant": a truly personal, deeply customizable assistant rather than a third-party, cloud-dependent product.

Development history:

  • January 2026: Gained explosive popularity in the developer community as Clawdbot
  • January 27, 2026: Renamed to Moltbot (an interim name) due to trademark concerns
  • January 30, 2026: Officially renamed OpenClaw
  • March 1, 2026: Became the most-starred software repository in GitHub history, with 252k stars
  • March 2026: GitHub stars exceeded 321k, with over 3 million global deployments
Q2: How does OpenClaw differ from the ChatGPT/Claude web versions?

A: The two differ fundamentally:

| Feature | OpenClaw | ChatGPT/Claude Web |
|---|---|---|
| Deployment | Local/cloud | Online service |
| Data privacy | ✅ Full control | ⚠ Uploaded to server |
| Local file access | ✅ Supported | ❌ Not supported |
| System operations | ✅ Supported | ❌ Not supported |
| Function extension | ✅ Skills system | ❌ Fixed features |
| Cost | Pay-as-you-go | Subscription |
| Multi-platform integration | ✅ Supported | ⚠ Limited |

💡 In simple terms

ChatGPT/Claude: like using an internet cafe - convenient but restricted
OpenClaw: like your own computer - free but requires configuration

Q3: What are OpenClaw's core capabilities?

A: Five core capabilities:

  1. 🏠 Local deployment: Runs on your own computer, keeping your data private
  2. 📁 File access: Can search, read, and edit files on your computer
  3. 🔌 Unlimited extension: Expand functionality through the Skills system
  4. 💬 Multi-platform usage: Supports Feishu, WeCom, DingTalk, QQ, Telegram, Discord, and more
  5. 💰 Controllable cost: Use your own API keys with transparent pricing
Q4: Who is OpenClaw suitable for?

A: ✅ Strongly recommended for:

  1. Solo operators and freelancers - one person doing the work of a team
  2. Knowledge workers - managing large volumes of documents and knowledge
  3. Programmers - code assistance and automation
  4. Content creators - asset management and multi-platform publishing
  5. Privacy-conscious users - who don't want to upload sensitive files to the cloud
  6. Cost-conscious users - who want a cost-effective AI assistant

⚠️ Note

If you only need simple conversations, consider ChatGPT Plus or Claude Pro instead.

Q5: How does OpenClaw work?

A: OpenClaw connects chat applications to AI agents through its Gateway. The Gateway is the single source of truth for sessions, routing, and channel connections.

Core components:

  1. Gateway - Connects the chat platforms and manages sessions and message routing; default address: http://127.0.0.1:18789/
  2. AI agent - Supports Claude, GPT, Gemini, DeepSeek, Kimi, and other models
  3. Skills system - File management, knowledge management, automation, and more; custom skills can be developed
  4. ClawHub - A skills marketplace for downloading and sharing Skills
Q6: How does OpenClaw compare with other AI tools?

A:

[Star-rating comparison table: OpenClaw, Claude Code, Cursor, and ChatGPT rated on task planning, auto execution, self-healing, engineering-level operations, local automation, code quality, and usability; the per-tool ratings did not survive extraction.]

Key differences:

  • OpenClaw: Excels at task planning and automated execution of complete engineering workflows
  • Claude Code/Cursor: Excel at code quality and understanding
  • ChatGPT: Excels at conversation experience and ease of use
📦 2. Installation & Deployment

Q7: What are the system requirements for OpenClaw?

A:

Hardware requirements:

| Component | Minimum | Recommended | High performance |
|---|---|---|---|
| CPU | 2 cores | 4 cores | 8+ cores |
| Memory | 4GB | 8GB | 16GB+ |
| Storage | 10GB | 50GB SSD | 100GB+ SSD |
| Network | 1Mbps | 10Mbps | 100Mbps+ |

Software dependencies:

  • Node.js: version 22 or higher
  • npm: the Node.js package manager
  • Git: version control (optional)
  • Docker: containerized deployment (optional)

Supported systems:

  • Windows 10 and above
  • macOS 12 and above
  • Major Linux distributions
Q8: How do I install OpenClaw on a Mac?

A: Local deployment on a Mac (recommended):

Step 1: Open Terminal
Press Command + Space to open Spotlight, type "Terminal", and press Enter.

Step 2: Install OpenClaw

curl -fsSL https://openclaw.ai/install.sh | bash

Step 3: Verify the installation

openclaw --version

Step 4: Initialize the configuration

openclaw onboard --install-daemon

Step 5: Verify the Gateway status

openclaw channels status
Q9: How do I install OpenClaw on Windows?

A: Windows offers several deployment options:

Option 1: One-line install (recommended; about 90% of users choose this)

  1. Search for PowerShell in the Start menu, right-click it, and run as administrator
  2. Paste the command below and press Enter:

iwr -useb https://openclaw.ai/install.ps1 | iex

Option 2: WSL2 + Ubuntu deployment

# Enable the WSL features
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

# Set WSL 2 as the default version
wsl --set-default-version 2

# After restarting, install Ubuntu 22.04 LTS from the Microsoft Store

# Then, in the Ubuntu terminal:
curl -fsSL https://openclaw.ai/install.sh | bash

Option 3: Native PowerShell deployment

# Run PowerShell as administrator

# Using npm:
npm install -g openclaw@latest

# Or using pnpm (recommended):
pnpm add -g openclaw@latest
pnpm approve-builds -g        # Approve build scripts
Q10: How do I install OpenClaw on Linux?

A: Linux deployment steps:

# Ubuntu/Debian: install Node.js
curl -fsSL https://deb.nodesource.com/setup_24.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install OpenClaw
curl -fsSL https://openclaw.ai/install.sh | bash

# Verify the installation
openclaw --version

# Initialize the configuration
openclaw onboard --install-daemon
Q11: How do I deploy OpenClaw with Docker?

A: Docker deployment steps:

Option 1: One-line script (recommended for beginners)

curl -fsSL https://clawd.org.cn/install.sh | bash

Option 2: Manual Docker Compose deployment

Create docker-compose.yml:

services:
  openclaw-gateway:
    image: jiulingyun803/openclaw:latest
    user: node:node
    environment:
      HOME: /home/node
      OPENCLAW_GATEWAY_TOKEN: your-secure-token-here
    volumes:
      - ./data/.openclaw:/home/node/.openclaw
      - ./data/clawd:/home/node/clawd
    ports:
      - "18789:18789"
    restart: unless-stopped
    command: ["node", "dist/index.js", "gateway", "--port", "18789"]

Start the service:

docker compose up -d
Q13: How can users in China install quickly?

A: One-line install for China (recommended):

macOS/Linux:

curl -fsSL https://clawd.org.cn/install.sh | bash

Windows PowerShell:

iwr -useb https://clawd.org.cn/install.ps1 | iex

Advantages of the China build:

  • ⚡ Fast: uses domestic mirror sources
  • 🇨🇳 Chinese-friendly: fully localized interface
  • 📦 One-line install: automatically configures all dependencies
  • 💰 Cost-optimized: defaults to domestic models
Q14: How do I upgrade OpenClaw?

A: Upgrade options:

Option 1: One-line upgrade (recommended)

openclaw update

Option 2: Direct npm upgrade

# Back up the configuration
cp -r ~/.openclaw ~/.openclaw.backup-$(date +%Y%m%d)

# Stop the Gateway
openclaw gateway stop

# Uninstall the old version
npm uninstall -g openclaw

# Install the new version
npm i -g openclaw@latest

# Repair the configuration
openclaw doctor --fix

# Restart the Gateway
openclaw gateway restart

⚠️ Important

Upgrade to version 2026.3.13 or later, which fixes 82+ security vulnerabilities, including ClawJacked and other critical issues.

Q15: How do I use the setup wizard?

A: The setup wizard is the recommended way to set up OpenClaw on macOS, Linux, or Windows (via WSL2, which is strongly recommended). A single guided flow configures a local or remote gateway connection, plus channel, skill, and workspace defaults.

Main entry point:

openclaw onboard

Fastest first chat - open the control UI (no channel setup needed):

openclaw dashboard

Reconfigure later:

openclaw configure
Q17b: Which models and authentication methods does the wizard support?

A: Supported models and authentication methods:

| Authentication | Description |
|---|---|
| Anthropic API key (recommended) | Uses the ANTHROPIC_API_KEY environment variable, or prompts for input |
| Anthropic OAuth | Via the Claude Code CLI, macOS Keychain, or ~/.claude/.credentials.json |
| OpenAI API key | Uses the OPENAI_API_KEY environment variable, or prompts for input |
| Gemini | Uses a Gemini API key |
| MiniMax M2.1 | Configuration written automatically |
| Moonshot (Kimi K2) | Configuration written automatically |
| Sonnet 4.6 | Anthropic's latest model; supports a 1M-token context window |

Note: Version 2026.2.17 integrated Anthropic's newly released Sonnet 4.6 model, opening up testing of the 1M-token context window.

Q83: How do I check the gateway status?

A: If you installed the service, check whether it is running:

openclaw gateway status

Or use the general-purpose command:

openclaw channels status
Q84: How do I open the control UI?

A: Run:

openclaw dashboard

Or visit the gateway address directly: http://127.0.0.1:18789/

Success

If the control UI loads, your gateway is ready and you can start chatting.

Q85: How do I run the gateway in the foreground (for testing)?

A: Useful for quick testing or troubleshooting:

openclaw gateway --port 18789

Press Ctrl+C to stop the foreground process.

Q86: Which environment variables are useful?

A: If you run OpenClaw under a service account, or want to customize where configuration and state live:

| Environment variable | Description |
|---|---|
| OPENCLAW_HOME | Sets the home directory used for internal path resolution |
| OPENCLAW_STATE_DIR | Overrides the state directory |
| OPENCLAW_CONFIG_PATH | Overrides the config file path |
Q88: What do I have after installation?

A: After installation you will have:

  • ✅ A running gateway
  • ✅ Configured authentication
  • ✅ Access to the control UI, or a connected channel
Q89: What are the next steps after installation?

A: Recommended next steps:

⚙️ 3. Configuration & Settings

Q16: How do I configure an AI model API?

A: OpenClaw supports two kinds of API model configuration:

1. Built-in API models (recommended for beginners)

Supported built-in models:

  • Chinese models: Kimi, DeepSeek, Zhipu GLM, Tongyi Qianwen, MiniMax, Baidu Wenxin, ByteDance Doubao
  • International models: OpenAI GPT, Anthropic Claude, Google Gemini, Groq

2. Custom APIs (for advanced users)

Config file location: ~/.openclaw/openclaw.json
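For the custom-API route, a provider entry follows the same shape as the Alibaba Cloud and Zhipu examples later in this guide. The provider name, model name, key, and URL below are placeholders for whatever OpenAI-compatible endpoint you use:

```json
{
  "agents": {
    "defaults": {
      "model": "myprovider/my-model"
    }
  },
  "models": {
    "providers": {
      "myprovider": {
        "apiKey": "your-api-key",
        "baseUrl": "https://api.example.com/v1"
      }
    }
  }
}
```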

Q17: How do I get a DeepSeek API key?

A: DeepSeek configuration steps:

  1. Visit https://platform.deepseek.com/
  2. Register and log in
  3. Top up your account (a ¥10 trial top-up is a good start)
  4. Click "API keys" → "Create API key"
  5. Copy the API key (it is shown only once, so save it)
  6. In OpenClaw's setup wizard, select DeepSeek and paste the API key

Cost estimate:

  • Light usage: ¥5-10/month
  • Moderate usage: ¥10-30/month
  • Heavy usage: ¥30-50/month
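To sanity-check such estimates against your own workload, note that per-token API pricing scales linearly with usage. The sketch below uses hypothetical per-million-token prices; check the provider's pricing page for current rates:

```python
def monthly_cost_cny(days: int, input_tokens_per_day: int, output_tokens_per_day: int,
                     price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate monthly API spend in CNY from average daily token usage.
    Prices are CNY per million tokens (the figures below are hypothetical)."""
    daily = (input_tokens_per_day / 1e6) * price_in_per_m \
          + (output_tokens_per_day / 1e6) * price_out_per_m
    return round(daily * days, 2)

# Example: 200k input + 50k output tokens/day at 1.0 / 4.0 CNY per million tokens
print(monthly_cost_cny(30, 200_000, 50_000, 1.0, 4.0))
```

Plugging in your model's real prices and your observed daily token counts tells you which usage tier above you actually fall into.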
Q18: How do I get a Kimi API key?

A: Kimi configuration steps:

  1. Visit https://www.kimi.com/code
  2. Purchase a discounted plan (the 49 CNY/month package is recommended)
  3. Open the console and create an API key
  4. Copy and save the API key
  5. Select Moonshot AI in the OpenClaw setup wizard

Features:

  • 📚 Ultra-long context: supports up to 2 million characters
  • 📄 Long-document processing: strong at analyzing papers and reports
  • 🇨🇳 Excellent Chinese understanding: ideal for Chinese-language use cases
Q19: How do I configure Alibaba Cloud Bailian?

A: Configuration steps:

  1. Log in to the Alibaba Cloud console at www.aliyun.com
  2. Open the "Bailian" LLM service
  3. Create an API key under "Key Management"
  4. Copy the Access Key ID and Access Key Secret

Configuration file:

{
  "agents": {
    "defaults": {
      "model": "qwen-max"
    }
  },
  "models": {
    "providers": {
      "aliyun": {
        "apiKey": "your-api-key",
        "baseUrl": "https://dashscope.aliyuncs.com/api/v1"
      }
    }
  }
}
Q20: How do I configure Zhipu AI?

A: Configuration steps:

  1. Visit the Zhipu AI open platform at open.bigmodel.cn
  2. Register and create an application
  3. Get your key from the "API Keys" page

Configuration file:

{
  "agents": {
    "defaults": {
      "model": "zhipu/glm-4.7"
    }
  },
  "models": {
    "providers": {
      "zhipu": {
        "apiKey": "your-api-key",
        "baseUrl": "https://open.bigmodel.cn/api/paas/v4"
      }
    }
  }
}
Q21: How do I configure local models (Ollama)?

A: Configuration steps:

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Download a model
ollama pull qwen2.5-coder:14b

Configure OpenClaw:

{
  "agents": {
    "defaults": {
      "model": "ollama/qwen2.5-coder:14b"
    }
  },
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434/v1"
      }
    }
  }
}

Hardware requirements:

  • 7B-parameter models: 16GB RAM recommended
  • 14B-parameter models: 32GB RAM recommended
  • 32B-parameter models: 64GB RAM recommended
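These recommendations track a rough rule of thumb: a quantized model needs about (parameters × bytes per weight) of memory for its weights, plus an allowance for the KV cache and runtime, plus headroom for the OS and other apps. A back-of-the-envelope sketch (the overhead factor is an assumption for illustration, not an Ollama figure):

```python
def approx_model_ram_gb(params_billions: float, bytes_per_weight: float = 0.5,
                        overhead_factor: float = 1.5) -> float:
    """Back-of-the-envelope RAM estimate for running a local LLM.
    bytes_per_weight=0.5 approximates 4-bit quantization; overhead_factor
    is a loose allowance for KV cache and runtime (assumed, not measured)."""
    weights_gb = params_billions * bytes_per_weight  # 1B params at 1 byte ~= 1 GB
    return round(weights_gb * overhead_factor, 1)

for size in (7, 14, 32):
    print(f"{size}B model: roughly {approx_model_ram_gb(size)} GB, plus OS and apps")
```

The table's 16/32/64GB figures leave that extra headroom, which is why they sit well above the raw estimate.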
Q22: What is the configuration file structure?

A: Configuration lives in the ~/.openclaw/ directory:

~/.openclaw/
├── openclaw.json          # Main configuration file
├── credentials/           # Credential storage
│   └── oauth.json
├── agents/                # Agent configuration
│   └── default/
│       └── agent/
│           └── auth-profiles.json
├── workspace/             # Workspace
│   ├── SOUL.md            # Personality/tone
│   ├── USER.md            # User preferences
│   ├── AGENTS.md          # Instructions
│   ├── MEMORY.md          # Long-term memory
│   ├── HEARTBEAT.md       # Checklist
│   ├── IDENTITY.md        # Name/theme
│   └── BOOT.md            # Boot configuration
└── skills/                # Custom skills
Q23: How do I configure a persona?

A: Configure it through SOUL.md and USER.md.

SOUL.md example:

You're not a chatbot. You're a reliable assistant.

## Core Principles
**Take your work seriously.** Don't just say "Sure! I'll handle it right away!" - actually do it.
**Have your own opinions.** It's okay to disagree and to have preferences.
**Try to figure things out yourself first.** Search files, check context, look things up.

## Boundaries
- **Never reveal passwords.** If you see a password, API key, or token, keep quiet.
- Keep private matters confidential.
- Ask before taking uncertain external actions.

## Style
Be concise when brevity serves, detailed when detail is needed.
Talk like a reliable colleague, not a customer-service bot.
Q24: How do I switch models?

A: Two ways to switch models:

# Temporary switch (current conversation only)
openclaw chat --model deepseek/deepseek-chat

# Permanent switch: update the primary field in the config
openclaw config set agents.defaults.model.primary "deepseek/deepseek-chat"

# Restart for the change to take effect
openclaw gateway restart
🚀 4. Features & Usage

Q25: What are the common OpenClaw commands?

A: Quick reference:

| Command | Function |
|---|---|
| openclaw --version | Show the version |
| openclaw status | Show status |
| openclaw doctor | Diagnose and fix |
| openclaw onboard --install-daemon | Run the setup wizard |
| openclaw dashboard | Open the control panel |
| openclaw update | One-line upgrade |
| openclaw gateway start | Start the gateway |
| openclaw gateway stop | Stop the gateway |
| openclaw gateway restart | Restart the gateway |
| openclaw logs --follow | Tail the logs |
| openclaw models list | List models |
| openclaw skills list | List skills |
Q26: What's new in version 2026.3.13?

A: Version 2026.3.13 (released March 14, 2026) is a major security and feature update.

🔒 Important security update

Critical fixes: the ClawJacked vulnerability, several RCE vulnerabilities, and 82+ other security issues. All users are strongly advised to update immediately.

Major feature updates:

1. Significantly improved security

  • Device-pairing security upgrade: short-lived bootstrap tokens replace embedded shared credentials
  • ClawJacked fix: prevents malicious websites from hijacking AI agents
  • Multiple remote code execution (RCE) fixes

2. Brand-new control panel (Dashboard v2)

  • Redesigned, more intuitive interface
  • See at a glance what your AI assistant is doing
  • Shows resource usage
  • Shows connected tools

3. Fast Mode

  • New /fast command
  • Response speed improved 3x (from 3 seconds to 1 second)
  • Well suited to batch file processing and continuous research

4. Major plugin-system upgrade

  • Supports local AI frameworks such as Ollama, SGLang, and vLLM
  • Data stays on your computer, improving security
  • No cloud compute to pay for, cutting costs

Update commands:

# Update to the latest version
openclaw update

# Verify the version
openclaw --version  # Should display 2026.3.13
Q27: What is the Chrome DevTools MCP connection mode?

A: A new feature in version 2026.3.13.

What it does:

  • Directly controls an already-logged-in Chrome browser
  • The AI can "see" web pages, "click" buttons, and "type" content
  • Login state is preserved, so no passwords need to be re-entered

Use cases:

  • Batch-cleaning email (keep order emails; delete verification codes and marketing mail)
  • Auto-filling forms
  • Batch-processing documents

Security mechanisms:

  • User approval is required before operations run
  • The device pairing code is single-use
Q28: How do I do smart file search?

A: OpenClaw's smart search can:

  • ✅ Search by file content
  • ✅ Understand natural-language descriptions
  • ✅ Search across folders
  • ✅ Filter and sort intelligently

Example search prompts:

# By content
Find PDF files containing "contract"
Search all documents mentioning "project plan"

# By type
Find all PNG images
Search recently downloaded PDF files

# By time
Find files modified in the last 7 days

# Combined
Find image files from the past week containing "invoice"
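Under the hood, a request like "find files modified in the last 7 days" reduces to an ordinary filesystem walk over modification times. A standalone Python equivalent (an illustration, not OpenClaw code):

```python
import time
from pathlib import Path
from typing import Optional

def modified_within(root: str, days: float, suffix: Optional[str] = None) -> list:
    """Return files under root modified in the last `days` days,
    optionally filtered by extension (e.g. suffix=".pdf"),
    newest first."""
    cutoff = time.time() - days * 86400
    hits = []
    for p in Path(root).rglob("*"):
        if p.is_file() and p.stat().st_mtime >= cutoff:
            if suffix is None or p.suffix.lower() == suffix:
                hits.append(p)
    return sorted(hits, key=lambda p: p.stat().st_mtime, reverse=True)

# Example: recent PDFs under the current directory
for path in modified_within(".", days=7, suffix=".pdf"):
    print(path)
```

The natural-language layer adds content matching on top (e.g. grepping text inside each hit), but the time and type filters are exactly this.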
Q29: How do I batch-process files?

A: Example batch-processing prompts:

Batch reading:

Read all the PDF files in the documents folder
Extract the text from all images

Batch information extraction:

Extract the date, amount, and merchant from every invoice
Extract the name, phone number, and email from every resume

Batch format conversion:

Convert all Word documents to PDF
Convert all PNG images to JPG
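The "extract fields from every invoice" pattern is, at its core, structured extraction over a pile of text. A minimal standalone sketch (regex-based and deliberately naive; the field labels are assumptions about the input format, and the AI-driven version copes with far messier layouts):

```python
import re

def extract_invoice_fields(text: str) -> dict:
    """Pull a date, an amount, and a merchant line out of free-form
    invoice text. Returns None for any field that is not found."""
    date = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
    amount = re.search(r"(?:Total|Amount)[:\s]*¥?\s*([\d,]+\.?\d*)", text, re.I)
    merchant = re.search(r"Merchant[:\s]*(.+)", text, re.I)
    return {
        "date": date.group(1) if date else None,
        "amount": amount.group(1) if amount else None,
        "merchant": merchant.group(1).strip() if merchant else None,
    }

sample = "Merchant: Acme Cloud\nDate 2026-03-01\nTotal: ¥128.50"
print(extract_invoice_fields(sample))
```

Run over a folder of invoices (one call per file), this yields a table you can total or export, which is exactly what the batch-extraction prompt asks the assistant to do.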
🔧 5. Skills System

Q36: What is the Skills system?

A: Skills are OpenClaw's capability-extension layer, letting the assistant carry out specific tasks. Each Skill contains:

  • SKILL.md: the skill's description document
  • tools/: tool scripts (optional)
  • prompts/: prompt templates (optional)

Features:

  • Zero-code development: describe a new skill in natural language, no programming required
  • Hot reload: changes take effect immediately, with no service restart
  • Rich ecosystem: 3000+ community-contributed skills on ClawHub
  • Cross-platform compatibility: follows the AgentSkills standard
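Based on the layout above, a minimal skill might look like the following. The directory name, description text, and tool script are illustrative placeholders, not an exact OpenClaw schema:

```text
skills/weather-report/
├── SKILL.md
└── tools/
    └── fetch_weather.sh

SKILL.md (illustrative):
  # Weather Report
  Fetches today's forecast for a city and summarizes it in one sentence.
  Trigger phrases: "weather in <city>", "do I need an umbrella"
  Tool: tools/fetch_weather.sh <city>
```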
Q38: Which skills are recommended?

A: Recommended skills:

| Skill | Function | Use case |
|---|---|---|
| serpapi | Web search | Real-time information |
| github | GitHub operations | Code management |
| obsidian | Note management | Knowledge-base building |
| notion | Notion integration | Team collaboration |
| spotify-player | Music control | Entertainment |
| summarize | Content summarization | Information processing |
🔗

六、多平台集成篇6. Multi-Platform Integration

Q41: OpenClaw支持哪些通讯平台?Q41: Which communication platforms does OpenClaw support?

A: 支持的通讯平台:

平台 | 类型 | 特点
Telegram | 国际 | 稳定可靠,推荐新手使用
WhatsApp | 国际 | 用户基数大,全球通用
Discord | 国际 | 适合团队协作场景
飞书 | 国内 | 字节跳动出品,功能丰富
钉钉 | 国内 | 阿里巴巴出品,企业用户多
企业微信 | 国内 | 微信生态,易于推广

A: Supported communication platforms:

Platform | Type | Features
Telegram | International | Stable and reliable, recommended for beginners
WhatsApp | International | Large user base, globally accessible
Discord | International | Suitable for team collaboration scenarios
Feishu | China | By ByteDance, feature-rich
DingTalk | China | By Alibaba, popular with enterprise users
WeCom | China | WeChat ecosystem, easy to promote
Q43: 如何接入飞书?Q43: How to integrate Feishu?

A: 飞书集成步骤:

步骤1:创建飞书应用

  1. 访问飞书开放平台 open.feishu.cn
  2. 点击"创建企业自建应用"
  3. 选择"机器人"类型
  4. 填写应用名称和描述

步骤2:配置权限
在"权限管理"中添加以下权限:

  • im:message - 接收和发送消息
  • im:message:send_as_bot - 以机器人身份发送消息
  • im:chat:readonly - 读取群聊信息
  • contact:user.employee_id:readonly - 获取用户ID

步骤3:配置事件订阅
在"事件与回调"中:

  1. 选择"长连接订阅方式"
  2. 添加事件:im.message.receive_v1
  3. 保存配置

步骤4:获取凭证
在"凭证与基础信息"中复制App ID和App Secret

步骤5:OpenClaw配置

openclaw configure
# 选择Feishu渠道,输入App ID和App Secret

A: Feishu integration steps:

Step 1: Create a Feishu Application

  1. Visit Feishu Open Platform open.feishu.cn
  2. Click "Create Enterprise Self-built Application"
  3. Select "Bot" type
  4. Fill in the application name and description

Step 2: Configure Permissions
Add the following permissions in "Permission Management":

  • im:message - Receive and send messages
  • im:message:send_as_bot - Send messages as bot
  • im:chat:readonly - Read group chat information
  • contact:user.employee_id:readonly - Get user ID

Step 3: Configure Event Subscription
In "Events & Callbacks":

  1. Select "Long Connection Subscription Method"
  2. Add event: im.message.receive_v1
  3. Save configuration

Step 4: Get Credentials
Copy App ID and App Secret from "Credentials & Basic Info"

Step 5: OpenClaw Configuration

openclaw configure
# Select Feishu channel, enter App ID and App Secret
🎯

七、高级应用篇7. Advanced Applications

Q47: 如何设置定时任务?Q47: How to set up scheduled tasks?

A: 定时任务配置:

编辑crontab

crontab -e

添加定时任务

# 每日选题推送(早上9点)
0 9 * * * /path/to/openclaw run daily-topic-push

# 每日工作日志(晚上11点)
0 23 * * * /path/to/openclaw run daily-summary

# 每周周报(周一早上8点)
0 8 * * 1 /path/to/openclaw run weekly-report

A: Scheduled task configuration:

Edit crontab:

crontab -e

Add scheduled tasks:

# Daily topic push (9 AM)
0 9 * * * /path/to/openclaw run daily-topic-push

# Daily work summary (11 PM)
0 23 * * * /path/to/openclaw run daily-summary

# Weekly report (Monday 8 AM)
0 8 * * 1 /path/to/openclaw run weekly-report
🛠️

八、故障排除篇8. Troubleshooting

Q53: Gateway无法启动怎么办?Q53: What to do if Gateway fails to start?

A: 排查步骤:

  1. 检查端口是否被占用:
    lsof -i :18789
  2. 查看错误日志:
    openclaw logs --follow
  3. 运行诊断:
    openclaw doctor
  4. 尝试更换端口:
    openclaw config set gateway.port 18790
    openclaw gateway restart

A: Troubleshooting steps:

  1. Check if port is already in use:
    lsof -i :18789
  2. View error logs:
    openclaw logs --follow
  3. Run diagnostics:
    openclaw doctor
  4. Try changing the port:
    openclaw config set gateway.port 18790
    openclaw gateway restart
Q54: 模型无响应怎么办?Q54: What to do if the model is unresponsive?

A: 排查步骤:

  1. 检查API Key是否正确
  2. 确认账户余额充足
  3. 测试网络连接
  4. 尝试切换其他模型

A: Troubleshooting steps:

  1. Check if the API Key is correct
  2. Verify account balance is sufficient
  3. Test network connection
  4. Try switching to another model
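Steps 1–3 can be checked in one go with a raw API call. The example below targets DeepSeek's OpenAI-compatible endpoint; the exact path is an assumption based on OpenAI-style APIs, so substitute your own provider's base URL and key:

```shell
# Prints only the HTTP status code:
#   200 -> key and network OK; 401 -> bad key; timeout -> network problem
curl -s -o /dev/null -w '%{http_code}\n' \
  --max-time 10 \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  https://api.deepseek.com/v1/models
```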
🔒

九、安全与最佳实践篇9. Security & Best Practices

Q62: OpenClaw有哪些安全风险?Q62: What are the security risks of OpenClaw?

A: 安全风险矩阵:

风险类型 | 风险等级 | 描述
远程代码执行 | 高 | 暴露的实例可能被攻击者利用执行恶意代码
提示词注入 | 高 | 恶意指令可能诱导AI执行危险操作
凭证泄露 | 高 | API Key、Token等敏感信息可能被盗取
供应链攻击 | 中 | 第三方技能可能包含恶意代码
⚠️
高危警告

国家信息安全漏洞库(CNNVD)数据显示,自2026年1月至2026年3月,共记录到82个以上与OpenClaw相关的漏洞,其中12个超危(Critical)、21个高危(High)。截至2026.3.13版本,大部分关键漏洞已修复,建议立即更新到最新版本。

ClawJacked漏洞(2026年3月9日披露,2026.3.13版本已修复):恶意网站可劫持本地AI智能体,超过17,500个实例暴露在攻击之下。该漏洞已在最新版本中修复,强烈建议立即更新。

A: Security Risk Matrix:

Risk Type | Risk Level | Description
Remote Code Execution | High | Exposed instances may be exploited by attackers to execute malicious code
Prompt Injection | High | Malicious instructions may induce AI to perform dangerous operations
Credential Leakage | High | Sensitive information like API Keys and Tokens may be stolen
Supply Chain Attack | Medium | Third-party skills may contain malicious code
⚠️
High Risk Warning

According to CNNVD (China National Vulnerability Database), from January to March 2026, over 82 OpenClaw-related vulnerabilities were recorded, including 12 Critical and 21 High severity issues. As of version 2026.3.13, most critical vulnerabilities have been fixed. Immediate update to the latest version is recommended.

ClawJacked Vulnerability (disclosed March 9, 2026, fixed in 2026.3.13): Malicious websites could hijack local AI agents, with over 17,500 instances exposed to attacks. This vulnerability has been fixed in the latest version. Immediate update is strongly recommended.

Q62b: 2026.3.13版本有哪些关键安全更新?Q62b: What are the key security updates in version 2026.3.13?

A: 2026.3.13版本(2026年3月14日发布)是关键安全更新版本,包含以下重要修复:

🔒
主要安全修复

1. 设备配对安全升级:将 /pair 和 openclaw qr 设置代码切换为短期引导令牌,不再在聊天或 QR 配对有效载荷中嵌入共享网关凭证,大幅降低凭证泄露风险。

2. ClawJacked漏洞修复:修复了恶意网站可劫持本地AI智能体的严重漏洞(CVE相关),影响超过17,500个实例,强烈建议所有用户立即更新。

3. 多个RCE漏洞修复:修复了远程代码执行等高危漏洞,提升系统整体安全性。

更新建议

  • 公开部署:🔴 立即更新(高危安全修复)
  • 本地测试:🟡 建议24小时内更新
  • 生产环境:🔴 建议在下一个维护窗口立即更新

如何更新

# 一键更新
openclaw update

# 或手动更新
npm i -g openclaw@latest

# 验证版本
openclaw --version  # 应显示 2026.3.13 或更高
💡
兼容性说明

本次更新无破坏性变更,配置自动向后兼容,可以放心更新。

A: Version 2026.3.13 (released March 14, 2026) is a critical security update containing the following important fixes:

🔒
Major Security Fixes

1. Device Pairing Security Upgrade: Switched /pair and openclaw qr setup codes to short-term bootstrap tokens, no longer embedding shared gateway credentials in chat or QR pairing payloads, significantly reducing credential leakage risk.

2. ClawJacked Vulnerability Fix: Fixed a serious vulnerability where malicious websites could hijack local AI agents (CVE related), affecting over 17,500 instances. All users are strongly advised to update immediately.

3. Multiple RCE Vulnerability Fixes: Fixed remote code execution and other high-risk vulnerabilities, improving overall system security.

Update Recommendations:

  • Public Deployment: 🔴 Update immediately (critical security fix)
  • Local Testing: 🟡 Update within 24 hours
  • Production Environment: 🔴 Update at the next maintenance window

How to Update:

# One-click update
openclaw update

# Or manual update
npm i -g openclaw@latest

# Verify version
openclaw --version  # Should show 2026.3.13 or higher
💡
Compatibility Note

This update has no breaking changes. Configurations are automatically backward compatible. Safe to update.

Q63: 如何进行安全加固?Q63: How to perform security hardening?

A: 安全加固措施:

网络安全

  • 默认端口18789不应暴露在公网
  • 使用VPN或内网访问
  • 配置防火墙规则
  • 修改默认端口

系统安全

  • 切勿使用root或管理员账户运行OpenClaw
  • 创建专用用户

使用Docker隔离

docker run --user 1000:1000 ...

应用安全

  • 及时更新:openclaw update
  • 审查技能:只安装可信来源的技能
  • 定期检查已安装技能的更新

A: Security Hardening Measures:

Network Security:

  • Default port 18789 should not be exposed to the public internet
  • Use VPN or internal network access
  • Configure firewall rules
  • Change default port

System Security:

  • Never run OpenClaw with root or administrator accounts
  • Create a dedicated user

Using Docker Isolation:

docker run --user 1000:1000 ...

Application Security:

  • Update promptly: openclaw update
  • Review skills: Only install skills from trusted sources
  • Regularly check for updates to installed skills
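Putting the network and system measures together, a fuller version of the Docker command above might look like this. This is a sketch, not an official recipe; the image name `openclaw/openclaw` is an assumption, so use whatever image you actually build or pull:

```shell
# Run as an unprivileged user, drop all Linux capabilities, and bind the
# gateway to loopback only so port 18789 is never publicly exposed.
docker run -d \
  --name openclaw \
  --user 1000:1000 \
  --cap-drop ALL \
  -p 127.0.0.1:18789:18789 \
  -v "$HOME/.openclaw:/home/openclaw/.openclaw" \
  openclaw/openclaw
```

Binding to `127.0.0.1` (rather than `0.0.0.0`) implements the "do not expose port 18789 to the public internet" rule at the container level.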
Q66: 安全最佳实践有哪些?Q66: What are the security best practices?

A: 安全最佳实践总结:

  1. ❌ 不要在生产环境直接运行OpenClaw
  2. ✅ 使用Docker或虚拟机隔离
  3. ✅ 定期备份和更新
  4. ✅ 监控和审计所有操作
  5. ✅ 遵循最小权限原则
  6. ✅ 升级到2026.3.13或更高版本(已修复82+个安全漏洞,包括ClawJacked等关键漏洞)

A: Security Best Practices Summary:

  1. ❌ Do not run OpenClaw directly in production environments
  2. ✅ Use Docker or virtual machine isolation
  3. ✅ Regular backups and updates
  4. ✅ Monitor and audit all operations
  5. ✅ Follow the principle of least privilege
  6. ✅ Upgrade to version 2026.3.13 or higher (82+ security vulnerabilities fixed, including ClawJacked and other critical vulnerabilities)
💰

十、成本与优化篇10. Cost & Optimization

Q68: 各模型成本对比如何?Q68: How do model costs compare?

A: 模型成本对比:

模型 | 输入价格 | 输出价格 | 月费用估算
DeepSeek | 0.001元/千tokens | 0.002元/千tokens | 5-30元
Kimi | 0.012元/千tokens | 0.012元/千tokens | 10-50元
GLM-4 | 0.005元/千tokens | 0.005元/千tokens | 10-40元
Claude(第三方) | 0.015元/千tokens | 0.075元/千tokens | 50-200元
GPT-4(第三方) | 0.03元/千tokens | 0.06元/千tokens | 100-300元

省钱技巧

  • 日常对话用DeepSeek V3(最便宜,综合排名第一)
  • 长文档用Kimi 2.5(支持20万字上下文)
  • 编程任务用DeepSeek V3或GLM-4.7(能力强)
  • 数学推理用DeepSeek R1(专注推理)

A: Model cost comparison:

Model | Input Price | Output Price | Estimated Monthly Cost
DeepSeek | ¥0.001/1k tokens | ¥0.002/1k tokens | ¥5–30
Kimi | ¥0.012/1k tokens | ¥0.012/1k tokens | ¥10–50
GLM-4 | ¥0.005/1k tokens | ¥0.005/1k tokens | ¥10–40
Claude (3rd-party) | ¥0.015/1k tokens | ¥0.075/1k tokens | ¥50–200
GPT-4 (3rd-party) | ¥0.03/1k tokens | ¥0.06/1k tokens | ¥100–300

Money-saving tips:

  • Use DeepSeek V3 for daily conversation (cheapest, top overall ranking)
  • Use Kimi 2.5 for long documents (supports 200k character context)
  • Use DeepSeek V3 or GLM-4.7 for coding tasks (strong capability)
  • Use DeepSeek R1 for math reasoning (specialized for reasoning)
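The monthly estimates in the table can be sanity-checked from the unit prices. A back-of-envelope calculation for DeepSeek (the daily usage figures are made-up examples):

```shell
awk 'BEGIN {
  in_kt  = 200   # thousands of input tokens per day (example figure)
  out_kt = 100   # thousands of output tokens per day (example figure)
  daily  = in_kt * 0.001 + out_kt * 0.002   # CNY, unit prices from the table
  printf "%.1f CNY/day, ~%.0f CNY/month\n", daily, daily * 30
}'
# Prints: 0.4 CNY/day, ~12 CNY/month -- inside the 5-30 CNY estimate
```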
Q69: 与ChatGPT Plus成本对比如何?Q69: How does the cost compare to ChatGPT Plus?

A: 年度成本对比(OpenClaw本身免费,以下为AI模型API调用成本):

项目 | OpenClaw + 国内API | ChatGPT Plus | Cursor | Claude Pro
软件费用 | 完全免费 | 包含在订阅中 | 包含在订阅中 | 包含在订阅中
月费(API调用) | 5-50元(按量) | 140元(固定) | 140元(固定) | 140元(固定)
年费(API调用) | 60-600元(按量) | 1680元(固定) | 1680元(固定) | 1680元(固定)
相比OpenClaw方案多花费 | 基准 | 多花1080-1620元/年 | 多花1080-1620元/年 | 多花1080-1620元/年
OpenClaw节省 | - | 节省64%-96% | 节省64%-96% | 节省64%-96%
计费方式 | 按实际使用付费 | 固定月费 | 固定月费 | 固定月费
灵活性 | ⭐⭐⭐⭐⭐ 用多少付多少 | ⭐⭐ 用不用都付费 | ⭐⭐ 用不用都付费 | ⭐⭐ 用不用都付费
💡
成本说明

OpenClaw优势

  • 软件本身完全免费,开源无任何费用
  • 只需为AI模型API调用付费(DeepSeek、Kimi等)
  • 按量计费,用多少付多少,不用不花钱
  • 可选择多个模型供应商,灵活切换
  • 本地模型(如Ollama)完全免费使用

其他服务劣势

  • 固定月费,不管用不用都要付费
  • 无法自由切换模型供应商
  • 无法使用本地模型省钱

A: Annual cost comparison (OpenClaw itself is free; the following are AI model API call costs):

Item | OpenClaw + Chinese API | ChatGPT Plus | Cursor | Claude Pro
Software cost | Completely free | Included in subscription | Included in subscription | Included in subscription
Monthly (API calls) | ¥5–50 (pay-as-you-go) | ¥140 (fixed) | ¥140 (fixed) | ¥140 (fixed)
Annual (API calls) | ¥60–600 (pay-as-you-go) | ¥1,680 (fixed) | ¥1,680 (fixed) | ¥1,680 (fixed)
Extra cost vs OpenClaw | Baseline | ¥1,080–1,620 more/year | ¥1,080–1,620 more/year | ¥1,080–1,620 more/year
OpenClaw savings | - | Save 64%–96% | Save 64%–96% | Save 64%–96%
Billing method | Pay per actual usage | Fixed monthly fee | Fixed monthly fee | Fixed monthly fee
Flexibility | ⭐⭐⭐⭐⭐ Pay only what you use | ⭐⭐ Pay whether you use it or not | ⭐⭐ Pay whether you use it or not | ⭐⭐ Pay whether you use it or not
💡
Cost Notes

OpenClaw advantages:

  • The software itself is completely free, open source with no fees
  • Only pay for AI model API calls (DeepSeek, Kimi, etc.)
  • Pay-as-you-go: only pay for what you use
  • Choose from multiple model providers, switch flexibly
  • Local models (e.g., Ollama) are completely free

Other services' disadvantages:

  • Fixed monthly fee regardless of usage
  • Cannot freely switch model providers
  • Cannot save money with local models
🏢

十一、一人公司实战篇11. One-Person Company Practice

Q70: 一人公司的核心方法论是什么?Q70: What is the core methodology of a one-person company?

A: 一人公司的核心方法论:

AI是真的颠覆生产力

  • 不是内容本身被AI替代
  • 而是内容生产的效率被AI拉高了一个数量级
  • 以前需要一个团队干的活,现在一个人加上AI就能搞定

一人公司的3个核心能力

  1. 判断力 - AI负责执行,人负责判断
  2. 系统化思维 - 搭建AI系统,而不是用AI做事
  3. 快速迭代能力 - 先跑通最小闭环,再持续优化

A: Core methodology of a one-person company:

AI truly disrupts productivity:

  • Content itself is not replaced by AI
  • Rather, AI elevates the efficiency of content production by an order of magnitude
  • Work that used to require a whole team can now be done by one person with AI

3 core capabilities of a one-person company:

  1. Judgment - AI handles execution, humans handle judgment
  2. Systems thinking - Build AI systems, not just use AI for tasks
  3. Rapid iteration - First run the minimum viable loop, then continuously optimize
Q72: 一人公司的铁律有哪些?Q72: What are the golden rules of a one-person company?

A: 一人公司的5个铁律:

铁律1:绝不自动发布

  • 原因:AI可能出现事实错误、措辞可能不当、判断可能有偏差
  • 流程:Agent生成 → 人工审核 → 确认后发布
  • AI负责效率,人负责质量底线

铁律2:定时任务是灵魂

  • 让系统"推着你走"
  • 不依赖"想起来了才用"
  • 形成稳定的工作节奏

铁律3:记录一切

  • 数据驱动决策
  • 避免重复劳动
  • 持续优化系统

铁律4:快速迭代

  • 不要追求完美
  • 先跑通最小闭环
  • 快速上线测试,根据反馈迭代

铁律5:专注核心价值

  • 判断(选题、质量、方向)是核心价值
  • 执行(写作、设计、发布)不是核心价值
  • 把重复的交给系统,把判断留给自己

A: The 5 golden rules of a one-person company:

Rule 1: Never auto-publish

  • Reason: AI may have factual errors, inappropriate phrasing, or biased judgment
  • Process: Agent generates → Human reviews → Publish after confirmation
  • AI handles efficiency, humans maintain quality standards

Rule 2: Scheduled tasks are the soul

  • Let the system "push you forward"
  • Don't rely on "use only when you remember"
  • Form a stable working rhythm

Rule 3: Record everything

  • Data-driven decision making
  • Avoid repetitive work
  • Continuously optimize the system

Rule 4: Iterate quickly

  • Don't chase perfection
  • First run the minimum viable loop
  • Launch and test quickly, iterate based on feedback

Rule 5: Focus on core value

  • Judgment (topic selection, quality, direction) is core value
  • Execution (writing, design, publishing) is not core value
  • Delegate repetitive tasks to the system, keep judgment for yourself
Q75: 如何快速开始一人公司?Q75: How to quickly start a one-person company?

A: 快速开始步骤:

步骤1:选择场景

  • 场景A:内容创作自动化 - 适合自媒体、博主、内容创作者
  • 场景B:社群运营自动化 - 适合创业者、产品经理、社群运营
  • 场景C:混合场景 - 内容+社群双轮驱动

步骤2:搭建基础设施

# 1. 安装OpenClaw
curl -fsSL https://openclaw.ai/install.sh | bash

# 2. 创建工作目录
mkdir -p ~/.openclaw/workspace
mkdir -p ~/.openclaw/scripts
mkdir -p ~/.openclaw/config

# 3. 配置API
openclaw config set api.key "your-api-key"

步骤3:配置定时任务

crontab -e
# 添加定时任务
0 9 * * * /path/to/openclaw run daily-topic-push

步骤4:测试运行

openclaw run daily-topic-push
openclaw ask "写一篇关于AI编程的文章"

A: Quick start steps:

Step 1: Choose your scenario

  • Scenario A: Content creation automation – for self-media creators, bloggers, content creators
  • Scenario B: Community management automation – for entrepreneurs, product managers, community operators
  • Scenario C: Mixed scenario – dual-engine of content + community

Step 2: Set up infrastructure

# 1. Install OpenClaw
curl -fsSL https://openclaw.ai/install.sh | bash

# 2. Create working directories
mkdir -p ~/.openclaw/workspace
mkdir -p ~/.openclaw/scripts
mkdir -p ~/.openclaw/config

# 3. Configure API
openclaw config set api.key "your-api-key"

Step 3: Configure scheduled tasks

crontab -e
# Add scheduled task
0 9 * * * /path/to/openclaw run daily-topic-push

Step 4: Test run

openclaw run daily-topic-push
openclaw ask "Write an article about AI programming"
🛒

产品购买Product Purchase

OpenClaw 预装产品OpenClaw Pre-installed Products

A: 我们提供预装OpenClaw的迷你主机产品,开箱即用,无需繁琐配置:

A: We offer mini host products with OpenClaw pre-installed, ready to use out of the box with no complicated setup required:

🎁
产品优势Product Advantages

✅ 开箱即用 - 预装OpenClaw,无需手动安装配置✅ Ready to Use - OpenClaw pre-installed, no manual setup required

✅ 性能强劲 - AMD最新处理器,流畅运行AI助手✅ Powerful Performance - Latest AMD processor, smooth AI assistant operation

✅ 独立SSD - 可选购预装OpenClaw的独立SSD硬盘✅ Independent SSD - Optional standalone SSD with OpenClaw pre-installed

✅ 官方支持 - 提供完整的技术支持和售后服务✅ Official Support - Complete technical support and after-sales service

📦 迷你主机产品📦 Mini Host Products

产品型号Product Model | 处理器Processor | 购买平台Purchase Platform | 链接Link
SER9 Pro H255 (龙虾版)SER9 Pro H255 (Lobster Edition) | AMD Ryzen AI 9 H255 | 天猫Tmall | 立即购买 🛒Buy Now 🛒
SER9 Pro H255 (龙虾版)SER9 Pro H255 (Lobster Edition) | AMD Ryzen AI 9 H255 | 京东JD.com | 立即购买 🛒Buy Now 🛒
SER9 Pro HX370 (龙虾版)SER9 Pro HX370 (Lobster Edition) | AMD Ryzen AI 9 HX370 | 天猫Tmall | 立即购买 🛒Buy Now 🛒
SER9 Pro HX370 (龙虾版)SER9 Pro HX370 (Lobster Edition) | AMD Ryzen AI 9 HX370 | 京东JD.com | 立即购买 🛒Buy Now 🛒

💾 预装OpenClaw 独立SSD💾 OpenClaw Pre-installed Standalone SSD

产品Product | 说明Description | 购买链接Purchase Link
预装OpenClaw SSD硬盘OpenClaw Pre-installed SSD | 可独立使用的固态硬盘,预装完整OpenClaw系统Standalone SSD with full OpenClaw system pre-installed | 官网购买 🛒Buy from Official Site 🛒
💡
购买建议Purchase Suggestions

H255 vs HX370 选择建议:H255 vs HX370 Selection Guide:

  • H255适合个人用户、轻度使用,性价比高Suitable for individual users, light usage, high cost-performance
  • HX370适合重度用户、多人使用,性能更强Suitable for heavy users, multi-user scenarios, more powerful performance

独立SSD优势:Standalone SSD Advantages:

  • 可在不同电脑间迁移使用Can be migrated between different computers
  • 不占用主机硬盘空间Does not occupy host hard drive space
  • 便于备份和恢复Easy to backup and restore
💬

用户常见问题解答Frequently Asked Questions

📌
说明Note

以下是购买我们预装OpenClaw产品的用户反馈的实际问题及解答,希望能帮助您快速上手使用。Below are actual questions and answers from users who purchased our pre-installed OpenClaw products. We hope this helps you get started quickly.

Q1: 买你们的小龙虾主机,我到手就可以用吗,还需要我配置什么吗?Q1: Can I use the mini host right out of the box? Do I need to configure anything?

A: 需要进行简单的初始化设置:

  1. 开机:接上电源和显示器,按下开机键
  2. 打开终端:进入系统后打开终端
  3. 运行引导向导
    openclaw onboard --install-daemon
  4. 按提示完成初始化设置
    • 配置AI模型(选择模型、输入API密钥等)
    • 设置认证方式
    • 可选:配置平台连接
  5. 开始使用:访问控制面板 http://127.0.0.1:18789/
📖
使用指南

详细使用指南位于 /home 目录下,请查阅获取更多信息。

A: Simple initialization setup is required:

  1. Power On: Connect power and monitor, press the power button
  2. Open Terminal: Open terminal after entering the system
  3. Run Setup Wizard:
    openclaw onboard --install-daemon
  4. Complete Initialization According to Prompts:
    • Configure AI model (select model, enter API key, etc.)
    • Set authentication method
    • Optional: Configure platform connection
  5. Start Using: Access control panel at http://127.0.0.1:18789/
📖
User Guide

The detailed user guide is located in the /home directory; please refer to it for more information.

Q2: 用你们的小龙虾SSD,我到手加装就可以用吗,还需要我配置什么吗?Q2: Can I use the SSD right after installing it? Do I need to configure anything?

A: SSD需要加装到支持的主机上,并进行初始化设置:

使用步骤:

  1. 加装SSD:将SSD安装到您的主机(需支持NVMe协议)
  2. 设置启动项:在BIOS中设置从该SSD启动
  3. 启动系统:SSD中预装了完整的Ubuntu + OpenClaw系统
  4. 打开终端:进入系统后打开终端
  5. 运行引导向导
    openclaw onboard --install-daemon
  6. 按提示完成初始化设置
📖
使用指南

详细使用指南位于 /home 目录下,请查阅获取更多信息。

⚠️
注意事项

• SSD系统已预装,但硬件环境可能不同,首次启动可能需要适配驱动

• 建议在相似配置的机器上使用以获得最佳体验

• 如遇问题,可联系技术支持

A: The SSD needs to be installed on a compatible host and initialized:

Usage Steps:

  1. Install SSD: Install the SSD into your host (NVMe protocol required)
  2. Set Boot Order: Set boot from this SSD in BIOS
  3. Start System: SSD comes with complete Ubuntu + OpenClaw system pre-installed
  4. Open Terminal: Open terminal after entering the system
  5. Run Setup Wizard:
    openclaw onboard --install-daemon
  6. Complete Initialization According to Prompts
📖
User Guide

The detailed user guide is located in the /home directory; please refer to it for more information.

⚠️
Notes

• SSD system is pre-installed, but hardware environment may differ, driver adaptation may be needed on first boot

• Recommended to use on machines with similar configuration for best experience

• Contact technical support if issues arise

Q3: 你们龙虾红装的Ubuntu是什么版本的,模型是几B的?Q3: What Ubuntu version is pre-installed? What model size (in B)?

A: 我们的预装配置:

产品型号 | Ubuntu版本 | 预装模型
GTR9 | Ubuntu 24.04.3 LTS | Qwen 3.5-35B-A3B
SER10 470 | Ubuntu 24.04.3 LTS | Qwen 3.5-9B
SER9 370 | Ubuntu 24.04.3 LTS | Qwen 3.5-9B
其他产品 | Ubuntu 24.04.3 LTS | 无预装本地模型(推荐使用云端模型)
💡
模型说明

Qwen 3.5-35B-A3B:高性能模型,适合复杂任务

Qwen 3.5-9B:平衡性能与速度,适合日常使用

• 其他产品推荐使用云端模型(如DeepSeek、Kimi等),效果更好

A: Our pre-installed configuration:

Product Model | Ubuntu Version | Pre-installed Model
GTR9 | Ubuntu 24.04.3 LTS | Qwen 3.5-35B-A3B
SER10 470 | Ubuntu 24.04.3 LTS | Qwen 3.5-9B
SER9 370 | Ubuntu 24.04.3 LTS | Qwen 3.5-9B
Other Products | Ubuntu 24.04.3 LTS | No local model pre-installed (cloud models recommended)
💡
Model Description

Qwen 3.5-35B-A3B: High-performance model, suitable for complex tasks

Qwen 3.5-9B: Balanced performance and speed, suitable for daily use

• Other products recommended to use cloud models (such as DeepSeek, Kimi, etc.) for better results

Q4: 本地模型和云端模型有什么区别,哪个好?Q4: What is the difference between local and cloud models? Which is better?

A: 对比分析:

对比项 | 本地模型 | 云端模型
成本 | 完全免费 | 按量付费(5-50元/月)
隐私 | 数据不离开本地,最安全 | 数据传输到云端
速度 | 依赖本地硬件 | 云端算力强,速度快
能力 | 7B-14B参数,适合日常任务 | 70B-600B+参数,能力更强
网络 | 无需网络(离线可用) | 需要稳定网络
推荐场景 | 日常对话、隐私敏感场景 | 复杂任务、编程、专业领域

建议:

  • 日常使用本地模型(免费)
  • 复杂任务切换云端模型(按需付费)
  • 隐私敏感内容只用本地模型

A: Comparison Analysis:

Criteria | Local Model | Cloud Model
Cost | Completely free | Pay-as-you-go (5-50 CNY/month)
Privacy | Data stays local, most secure | Data transmitted to cloud
Speed | Depends on local hardware | Strong cloud computing power, fast
Capability | 7B-14B parameters, suitable for daily tasks | 70B-600B+ parameters, more capable
Network | No network required (offline available) | Requires stable network
Recommended Scenarios | Daily conversation, privacy-sensitive scenarios | Complex tasks, programming, professional fields

Recommendations:

  • Use local models for daily use (free)
  • Switch to cloud models for complex tasks (pay-as-you-go)
  • Only use local models for privacy-sensitive content
Q5: GTi15Ultra本地部署模型+OpenClaw,交互期间只会占用CPU,不会动用GPU,导致输出速度很慢Q5: GTi15Ultra local model + OpenClaw only uses CPU, not GPU, causing slow output

A: Intel核显需要使用专门的优化版Ollama:

解决方案:使用Intel优化版Ollama

# Intel优化版Ollama项目地址
https://www.modelscope.cn/models/Intel/ollama

安装步骤:

  1. 访问上述项目地址下载Intel优化版
  2. 按照项目说明安装配置
  3. 配置环境变量以使用Intel GPU
⚠️
重要提示

• Intel核显算力有限,即使使用优化版速度仍然较慢

不推荐GTi15使用本地模型,除非有外接显卡

• 建议使用云端模型(DeepSeek、Kimi等),速度更快、效果更好

替代方案:使用云端模型

# 切换到云端模型
openclaw models switch deepseek

# 或使用Kimi
openclaw models switch kimi

A: Intel integrated graphics require a specially optimized version of Ollama:

Solution: Use Intel Optimized Ollama

# Intel Optimized Ollama project address
https://www.modelscope.cn/models/Intel/ollama

Installation Steps:

  1. Visit the project address above to download Intel optimized version
  2. Install and configure according to project instructions
  3. Configure environment variables to use Intel GPU
⚠️
Important Notice

• Intel integrated graphics have limited computing power, speed is still slow even with optimized version

GTi15 is not recommended for local models unless with external graphics card

• Cloud models (DeepSeek, Kimi, etc.) are recommended for faster speed and better results

Alternative: Use Cloud Models

# Switch to cloud model
openclaw models switch deepseek

# Or use Kimi
openclaw models switch kimi
Q6: 怎么启动、重启、关闭、卸载OpenClaw?Q6: How to start, restart, stop, and uninstall OpenClaw?

A: OpenClaw服务管理命令:

操作 | 命令
启动网关 | openclaw gateway start
停止网关 | openclaw gateway stop
重启网关 | openclaw gateway restart
查看状态 | openclaw gateway status
前台运行 | openclaw gateway --port 18789
卸载OpenClaw | npm uninstall -g openclaw

Windows服务管理:

# 安装为系统服务(开机自启)
openclaw service install

# 启动服务
openclaw service start

# 停止服务
openclaw service stop

# 卸载服务
openclaw service uninstall

A: OpenClaw Service Management Commands:

Operation | Command
Start Gateway | openclaw gateway start
Stop Gateway | openclaw gateway stop
Restart Gateway | openclaw gateway restart
Check Status | openclaw gateway status
Run in Foreground | openclaw gateway --port 18789
Uninstall OpenClaw | npm uninstall -g openclaw

Windows Service Management:

# Install as system service (auto-start on boot)
openclaw service install

# Start service
openclaw service start

# Stop service
openclaw service stop

# Uninstall service
openclaw service uninstall
Q7: 首次使用需要做什么?怎么打开网关?Q7: What do I need to do on first use? How to open the gateway?

A: 首次使用流程:

方式一:预装产品(已完成)

  1. 开机后运行引导向导:
openclaw onboard --install-daemon
  2. 按提示完成初始化设置
  3. 打开浏览器访问:http://127.0.0.1:18789/
  4. 开始使用

方式二:全新安装

  1. 运行引导向导
openclaw onboard --install-daemon
  2. 按提示配置
    • 选择AI模型(DeepSeek/Kimi/Claude等)
    • 输入API密钥
    • 配置认证方式
  3. 启动网关
    openclaw gateway start
  4. 打开控制面板
    openclaw dashboard

A: First-time setup process:

Option 1: Pre-installed product (already set up)

  1. After booting, run the onboarding wizard:
openclaw onboard --install-daemon
  2. Follow the prompts to complete initialization
  3. Open browser and visit: http://127.0.0.1:18789/
  4. Start using

Option 2: Fresh installation

  1. Run the onboarding wizard:
openclaw onboard --install-daemon
  2. Configure as prompted:
    • Select an AI model (DeepSeek/Kimi/Claude, etc.)
    • Enter API key
    • Configure authentication method
  3. Start the gateway:
    openclaw gateway start
  4. Open the control panel:
    openclaw dashboard
Q8: 怎么连接到飞书、企业微信等应用,怎么创建机器人?Q8: How to connect Lark/Feishu, WeCom, and other apps? How to create a bot?

A: 各平台连接方法:

飞书连接步骤:

  1. 创建飞书应用
    • 访问 飞书开放平台
    • 创建企业自建应用
    • 配置应用权限(消息、通讯录等)
  2. 配置OpenClaw
    # 安装飞书插件
    openclaw plugins install @openclaw/plugin-lark
    
    # 配置飞书
    openclaw configure
  3. 扫码绑定:运行命令后扫码完成绑定

企业微信连接步骤:

  1. 访问企业微信管理后台创建应用
  2. 配置回调URL:http://your-ip:18789/webhook/wechat
  3. 运行配置命令:openclaw configure

A: Connection methods for each platform:

Feishu/Lark connection steps:

  1. Create a Feishu app:
    • Visit Feishu Open Platform
    • Create an in-house enterprise app
    • Configure app permissions (messages, contacts, etc.)
  2. Configure OpenClaw:
    # Install Feishu plugin
    openclaw plugins install @openclaw/plugin-lark
    
    # Configure Feishu
    openclaw configure
  3. Scan QR code: Run the command then scan the QR code to complete binding

WeCom connection steps:

  1. Visit WeCom admin console to create an app
  2. Configure callback URL: http://your-ip:18789/webhook/wechat
  3. Run configuration command: openclaw configure
Q9: 怎么配置多模型、备用模型,怎么切换?Q9: How to configure multiple models and fallback models? How to switch between them?

A: 配置多个模型并切换:

方法一:配置文件方式

编辑配置文件 ~/.openclaw/config.json

{
  "models": [
    {
      "id": "deepseek-local",
      "name": "DeepSeek本地",
      "provider": "ollama",
      "model": "deepseek-r1:7b"
    },
    {
      "id": "kimi-cloud",
      "name": "Kimi云端",
      "provider": "moonshot",
      "apiKey": "your-api-key"
    },
    {
      "id": "claude-cloud",
      "name": "Claude云端",
      "provider": "anthropic",
      "apiKey": "your-api-key"
    }
  ],
  "defaultModel": "deepseek-local",
  "fallbackModel": "kimi-cloud"
}

方法二:命令行切换

# 查看可用模型
openclaw models list

# 切换模型
openclaw models switch kimi-cloud

# 设置默认模型
openclaw models set-default deepseek-local

方法三:对话时指定

# 在对话中使用特定模型
@kimi 请帮我分析这个问题
@claude 帮我写一段代码

A: Configure multiple models and switch between them:

Method 1: Configuration file

Edit the config file ~/.openclaw/config.json:

{
  "models": [
    {
      "id": "deepseek-local",
      "name": "DeepSeek Local",
      "provider": "ollama",
      "model": "deepseek-r1:7b"
    },
    {
      "id": "kimi-cloud",
      "name": "Kimi Cloud",
      "provider": "moonshot",
      "apiKey": "your-api-key"
    },
    {
      "id": "claude-cloud",
      "name": "Claude Cloud",
      "provider": "anthropic",
      "apiKey": "your-api-key"
    }
  ],
  "defaultModel": "deepseek-local",
  "fallbackModel": "kimi-cloud"
}

Method 2: Command-line switching

# View available models
openclaw models list

# Switch model
openclaw models switch kimi-cloud

# Set default model
openclaw models set-default deepseek-local

Method 3: Specify during conversation

# Use a specific model in conversation
@kimi Please help me analyze this problem
@claude Help me write some code
Q10: 飞书OpenClaw怎么发送图片?Q10: How to send images with OpenClaw in Feishu?

A: 飞书发送图片的方法:

方式一:直接发送

  • 在飞书对话框中直接粘贴或拖拽图片
  • OpenClaw会自动接收并处理图片

方式二:让AI生成图片

请帮我生成一张关于...的图片

方式三:让AI分析图片

# 发送图片后
请分析这张图片的内容
⚠️
注意事项

• 确保飞书应用已配置图片消息权限

• 部分模型不支持图片输入(如DeepSeek V3)

• 推荐使用Claude或GPT-4 Vision进行图片分析

A: Methods for sending images in Feishu:

Method 1: Direct sending

  • Paste or drag an image directly into the Feishu chat
  • OpenClaw will automatically receive and process the image

Method 2: Ask AI to generate an image

Please generate an image about...

Method 3: Ask AI to analyze an image

# After sending an image
Please analyze the content of this image
⚠️
Notes

• Ensure the Feishu app has image message permissions configured

• Some models don't support image input (e.g., DeepSeek V3)

• Recommended: Use Claude or GPT-4 Vision for image analysis

Q11: 你们部署的模型可以切换吗,我可以导入自己的模型吗?Q11: Can I switch the deployed models? Can I import my own model?

A: 完全可以!

切换模型:

# 查看已安装模型
ollama list

# 切换模型
openclaw models switch <model-name>

导入自己的模型:

方法一:使用llama.cpp导入

# 1. 准备模型文件(GGUF格式)
# 2. 创建Modelfile
FROM ./your-model.gguf

# 3. 使用ollama导入
ollama create my-model -f Modelfile

# 4. 在OpenClaw中使用
openclaw models add --name my-model --provider ollama

方法二:从HuggingFace下载

# 直接拉取模型
ollama pull <model-name>

# 例如
ollama pull qwen2.5:7b
支持格式

• GGUF格式(推荐,Ollama原生支持)

• SafeTensors格式

• PyTorch格式(需要转换)

A: Absolutely!

Switching models:

# View installed models
ollama list

# Switch model
openclaw models switch <model-name>

Importing your own model:

Method 1: Import using llama.cpp

# 1. Prepare model file (GGUF format)
# 2. Create Modelfile
FROM ./your-model.gguf

# 3. Import using ollama
ollama create my-model -f Modelfile

# 4. Use in OpenClaw
openclaw models add --name my-model --provider ollama

Method 2: Download from HuggingFace

# Pull model directly
ollama pull <model-name>

# For example
ollama pull qwen2.5:7b
Supported Formats

• GGUF format (recommended, Ollama native support)

• SafeTensors format

• PyTorch format (requires conversion)

Q12: 通过npm i -g openclaw@latest安装提示npm error...Q12: Installing via npm i -g openclaw@latest shows npm error...

A: 常见安装错误及解决方法:

错误1:网络超时

# 使用国内镜像
npm config set registry https://registry.npmmirror.com
npm i -g openclaw@latest

错误2:权限不足

# Windows:以管理员身份运行PowerShell
# Linux/Mac:使用sudo
sudo npm i -g openclaw@latest

错误3:Node版本过低

# 检查Node版本(需要22+)
node --version

# 升级Node.js
# Windows: 从官网下载最新版
# Linux/Mac:
curl -fsSL https://deb.nodesource.com/setup_24.x | sudo -E bash -
sudo apt-get install -y nodejs

错误4:缓存问题

# 清除缓存
npm cache clean --force
npm i -g openclaw@latest
💡
推荐:使用一键安装

macOS/Linux: curl -fsSL https://clawd.org.cn/install.sh | bash

Windows: iwr -useb https://clawd.org.cn/install.ps1 | iex

A: Common installation errors and solutions:

Error 1: Network timeout

# Use Chinese mirror
npm config set registry https://registry.npmmirror.com
npm i -g openclaw@latest

Error 2: Insufficient permissions

# Windows: Run PowerShell as administrator
# Linux/Mac: Use sudo
sudo npm i -g openclaw@latest

Error 3: Node version too low

# Check Node version (requires 22+)
node --version

# Upgrade Node.js
# Windows: Download latest from official website
# Linux/Mac:
curl -fsSL https://deb.nodesource.com/setup_24.x | sudo -E bash -
sudo apt-get install -y nodejs

Error 4: Cache issue

# Clear cache
npm cache clean --force
npm i -g openclaw@latest
💡
Recommended: Use one-click installer

macOS/Linux: curl -fsSL https://clawd.org.cn/install.sh | bash

Windows: iwr -useb https://clawd.org.cn/install.ps1 | iex

Q13: 初始化OpenClaw提示安装的技能都是什么意思?Q13: What do the skills suggested during OpenClaw initialization mean?

A: OpenClaw默认推荐的技能说明:

技能名称 | 功能说明 | 是否推荐安装
memory | 长期记忆,AI能记住之前的对话 | ✅ 强烈推荐
browser | 浏览器自动化,可搜索网页、截图 | ✅ 推荐
files | 文件管理,读写本地文件 | ✅ 推荐
shell | 执行命令行命令 | ⚠️ 谨慎使用
knowledge | 知识库管理,可导入文档学习 | ✅ 推荐
assistant | 任务规划和执行助手 | ✅ 推荐
💡
建议

首次安装建议全部选择"Yes",后续可在ClawHub安装更多技能。

A: Description of OpenClaw's default recommended skills:

Skill Name | Description | Recommended?
memory | Long-term memory, AI remembers previous conversations | ✅ Strongly recommended
browser | Browser automation, can search web pages and take screenshots | ✅ Recommended
files | File management, read and write local files | ✅ Recommended
shell | Execute command-line commands | ⚠️ Use with caution
knowledge | Knowledge base management, import documents for learning | ✅ Recommended
assistant | Task planning and execution assistant | ✅ Recommended
💡
Advice

For first-time installation, it's recommended to select "Yes" for all. You can install more skills from ClawHub later.

Q14: 初始化OpenClaw提示的Channel、Hooks是什么意思,要不要配置?Q14: What do Channel and Hooks mean during OpenClaw initialization? Do I need to configure them?

A: 这些是高级配置项:

Channel(频道)

  • 用于连接飞书、企业微信、Telegram等平台
  • 首次使用:可以跳过,后续再配置
  • 需要连接平台时:必须配置

Hooks(钩子)

  • 消息处理前后的自定义逻辑
  • 首次使用:可以跳过
  • 高级用户:可实现消息过滤、自动回复等
建议

首次使用时跳过Channel和Hooks配置,先在控制面板中测试基本功能,熟悉后再配置平台连接。

A: These are advanced configuration options:

Channel:

  • Used to connect Feishu, WeCom, Telegram and other platforms
  • First use: Can skip, configure later
  • When connecting to a platform: Must configure

Hooks:

  • Custom logic before/after message processing
  • First use: Can skip
  • Advanced users: Can implement message filtering, auto-reply, etc.
Advice

Skip Channel and Hooks configuration on first use. First test basic functionality in the control panel, then configure platform connections after getting familiar.

Q15: Win11下使用飞书官方插件安装命令,提示扫码配置机器人的二维码乱码无法扫描Q15: On Win11 using the Feishu official plugin install command, the QR code for bot setup appears garbled and cannot be scanned

A: 二维码乱码是终端编码问题,解决方法:

方法一:使用Windows Terminal

  1. 从Microsoft Store安装Windows Terminal
  2. 在Windows Terminal中运行命令
  3. 二维码会正常显示

方法二:使用浏览器配置

# 跳过扫码,使用网页配置
openclaw configure --web

方法三:手动配置

# 生成配置链接
openclaw pairing url

# 在浏览器中打开链接完成配置

方法四:保存二维码图片

# 将二维码的文本输出保存到文件
openclaw pairing qr > qrcode.txt
# 或直接保存为图片
openclaw pairing qr --output qrcode.png
💡
推荐方案

使用Windows Terminal或PowerShell 7+,可以完美显示二维码。

A: Garbled QR code is a terminal encoding issue. Solutions:

Method 1: Use Windows Terminal

  1. Install Windows Terminal from Microsoft Store
  2. Run the command in Windows Terminal
  3. QR code will display correctly

Method 2: Use browser configuration

# Skip QR scan, use web configuration
openclaw configure --web

Method 3: Manual configuration

# Generate configuration link
openclaw pairing url

# Open the link in browser to complete configuration

Method 4: Save QR code as image

# Save the QR code's text output to a file
openclaw pairing qr > qrcode.txt
# or save it directly as an image
openclaw pairing qr --output qrcode.png
💡
Recommended Solution

Use Windows Terminal or PowerShell 7+, which can display QR codes perfectly.
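Garbled QR output usually traces back to the terminal's character encoding. As a quick sanity check before running the pairing command, you can inspect the locale. This is a heuristic of ours, not an OpenClaw feature, and `utf8_ok` is a helper written for this FAQ:

```shell
# Heuristic: QR block characters render correctly only in UTF-8-capable
# terminals. utf8_ok is our helper for this FAQ, not an OpenClaw command.
utf8_ok() {
  case "$1" in
    *UTF-8*|*utf8*|*utf-8*) return 0 ;;
    *) return 1 ;;
  esac
}

loc=${LC_ALL:-${LANG:-unset}}
if utf8_ok "$loc"; then
  echo "locale '$loc' looks UTF-8 capable"
else
  echo "locale '$loc' is not UTF-8; try Windows Terminal or 'chcp 65001'"
fi
```

On Windows, `chcp 65001` switches the console code page to UTF-8 for the current session; Windows Terminal uses UTF-8 by default.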

Q16: 怎么安装Skills,龙虾会自动调用吗?Q16: How to install Skills? Will OpenClaw call them automatically?

A: Skills安装和调用:

安装Skills:

# 方式一:从ClawHub安装
openclaw skills install <技能名>

# 方式二:从GitHub安装
openclaw skills install github:user/repo

# 方式三:从本地安装
openclaw skills install ./path/to/skill

查看已安装Skills:

# 列出所有已安装技能
openclaw skills list

# 查看技能详情
openclaw skills info <技能名>

自动调用机制:

  • 会自动调用:当用户请求需要某个技能时,OpenClaw会智能判断并自动调用
  • 无需手动触发:AI会根据对话内容自动选择合适的技能
  • 例如:说"帮我搜索一下...",AI会自动调用browser技能
💡
示例

用户:"帮我搜索最新的AI新闻"

AI:自动调用browser技能 → 搜索网页 → 返回结果

A: Installing and invoking Skills:

Installing Skills:

# Method 1: Install from ClawHub
openclaw skills install <skill-name>

# Method 2: Install from GitHub
openclaw skills install github:user/repo

# Method 3: Install from local
openclaw skills install ./path/to/skill

View installed Skills:

# List all installed skills
openclaw skills list

# View skill details
openclaw skills info <skill-name>

Auto-invocation mechanism:

  • Will auto-invoke: When a user request needs a skill, OpenClaw will intelligently judge and automatically call it
  • No manual triggering needed: AI will automatically select the appropriate skill based on conversation content
  • Example: Say "help me search for...", AI will automatically call the browser skill
💡
Example

User: "Help me search for the latest AI news"

AI: Auto-invokes browser skill → Searches web → Returns results
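To make the auto-invocation idea concrete, here is a deliberately tiny keyword router. This is only an illustration of mapping a request to a skill name; OpenClaw's real selection is model-driven, not rule-based like this:

```shell
# Toy skill router: map a user request to a skill name by keyword.
# Illustrative only -- OpenClaw lets the model decide; it does not use rules like this.
pick_skill() {
  case "$1" in
    *search*|*搜索*)   echo "browser" ;;
    *file*|*文件*)     echo "files" ;;
    *remember*|*记住*) echo "memory" ;;
    *)                 echo "assistant" ;;
  esac
}

pick_skill "Help me search for the latest AI news"   # prints "browser"
```

The model's judgment replaces the `case` statement, which is why no manual triggering is needed.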

Q17: WorkSpace里的MEMORY、AGENT、SOUL、USER、HEARTBEATS等文件都是什么意思,要不要配置?Q17: What do MEMORY, AGENT, SOUL, USER, HEARTBEATS files in WorkSpace mean? Do I need to configure them?

A: WorkSpace文件说明:

文件/目录 | 说明 | 是否需要配置
MEMORY | 长期记忆存储,AI记住的对话内容 | ❌ 自动管理
AGENT | AI智能体配置,性格、行为设置 | ✅ 可自定义
SOUL | 核心人格定义,AI的"灵魂" | ✅ 可自定义
USER | 用户配置文件 | ❌ 自动管理
HEARTBEATS | 定时任务配置 | ✅ 需配置定时任务时
knowledge/ | 知识库文件 | ✅ 导入文档时
💡
建议

• 首次使用:无需手动配置,使用默认设置即可

• 高级自定义:修改SOUL和AGENT可以改变AI性格

• 定时任务:配置HEARTBEATS实现自动化

A: WorkSpace file descriptions:

File/Directory | Description | Need to Configure?
MEMORY | Long-term memory storage, AI's remembered conversations | ❌ Auto-managed
AGENT | AI agent configuration, personality and behavior settings | ✅ Customizable
SOUL | Core personality definition, AI's "soul" | ✅ Customizable
USER | User configuration file | ❌ Auto-managed
HEARTBEATS | Scheduled task configuration | ✅ When configuring scheduled tasks
knowledge/ | Knowledge base files | ✅ When importing documents
💡
Advice

• First use: No manual configuration needed, use default settings

• Advanced customization: Modify SOUL and AGENT to change AI personality

• Scheduled tasks: Configure HEARTBEATS for automation

Example of customizing AI personality (modify SOUL file):

You are a professional technical assistant, skilled in programming and problem solving.
Response style: concise and clear, focused on practical implementation.
When uncertain, honestly acknowledge it and provide suggestions.
Q18: npm安装很慢怎么办?Q18: npm installation is very slow, what should I do?

A: 使用国内镜像源加速:

方法一:临时使用镜像

# 安装时指定镜像
npm i -g openclaw --registry=https://registry.npmmirror.com

方法二:永久配置镜像

# 设置淘宝镜像
npm config set registry https://registry.npmmirror.com

# 验证配置
npm config get registry

方法三:使用nrm管理镜像源

# 安装nrm
npm i -g nrm

# 查看可用镜像源
nrm ls

# 切换到淘宝源
nrm use taobao

# 测试速度
nrm test

方法四:使用cnpm

# 安装cnpm
npm i -g cnpm --registry=https://registry.npmmirror.com

# 使用cnpm安装
cnpm i -g openclaw
🚀
推荐镜像源

• 淘宝镜像:https://registry.npmmirror.com

• 腾讯镜像:https://mirrors.cloud.tencent.com/npm/

A: Use Chinese mirror sources to speed up:

Method 1: Temporary mirror use

# Specify mirror during installation
npm i -g openclaw --registry=https://registry.npmmirror.com

Method 2: Permanently configure mirror

# Set Taobao mirror
npm config set registry https://registry.npmmirror.com

# Verify configuration
npm config get registry

Method 3: Use nrm to manage mirror sources

# Install nrm
npm i -g nrm

# View available mirror sources
nrm ls

# Switch to Taobao source
nrm use taobao

# Test speed
nrm test

Method 4: Use cnpm

# Install cnpm
npm i -g cnpm --registry=https://registry.npmmirror.com

# Install using cnpm
cnpm i -g openclaw
🚀
Recommended Mirror Sources

• Taobao mirror: https://registry.npmmirror.com

• Tencent mirror: https://mirrors.cloud.tencent.com/npm/

Q19: 什么模型好用?Q19: Which AI model is good to use?

A: 根据最新前端智能体评测排行,推荐模型及适用场景:

排名 | 模型 | 特点 | 适用场景 | 成本
🥇 1 | DeepSeek V3 | 综合评分最高,中文能力极强 | 通用对话、写作、编程、推理 | 最低
🥈 2 | GLM-4.7 | 智谱最新模型,综合能力强 | 通用对话、写作、编程 | 中
🥉 3 | GLM-4.5 | 智谱成熟模型,稳定性好 | 通用对话、日常任务 | 中
4 | Qwen3.5-Turbo | 阿里最新模型,性价比极高 | 日常对话、编程、分析 | 低
5 | Kimi 2.5 | 长上下文,支持20万字 | 长文档分析、论文阅读 | 低
6 | DeepSeek R1 | 推理能力强,适合复杂任务 | 数学推理、代码生成 | 最低

按场景推荐:

  • 综合能力最强:DeepSeek V3(榜首)、GLM-4.7
  • 长文档处理:Kimi 2.5(支持20万字上下文)
  • 性价比最高:DeepSeek V3、Qwen3.5-Turbo
  • 编程开发:DeepSeek V3、GLM-4.7、Qwen3.5-Turbo
  • 数学推理:DeepSeek R1(专注推理能力)
  • 预算有限:DeepSeek V3(性价比之王)
💡
推荐配置

• 主力模型:DeepSeek V3(综合排名第一)或 GLM-4.7

• 备用模型:Kimi 2.5(处理长文档)

• 省钱方案:DeepSeek V3

A: Based on the latest frontend agent benchmark rankings, recommended models and use cases:

Rank | Model | Features | Best For | Cost
🥇 1 | DeepSeek V3 | Highest overall score, excellent Chinese capability | General chat, writing, coding, reasoning | Lowest
🥈 2 | GLM-4.7 | Zhipu's latest model, strong overall capability | General chat, writing, coding | Medium
🥉 3 | GLM-4.5 | Zhipu mature model, good stability | General chat, daily tasks | Medium
4 | Qwen3.5-Turbo | Alibaba's latest model, excellent value | Daily chat, coding, analysis | Low
5 | Kimi 2.5 | Long context, supports 200k characters | Long document analysis, paper reading | Low
6 | DeepSeek R1 | Strong reasoning, good for complex tasks | Math reasoning, code generation | Lowest

By use case:

  • Best overall: DeepSeek V3 (top ranked), GLM-4.7
  • Long document processing: Kimi 2.5 (supports 200k character context)
  • Best value: DeepSeek V3, Qwen3.5-Turbo
  • Programming/development: DeepSeek V3, GLM-4.7, Qwen3.5-Turbo
  • Math reasoning: DeepSeek R1 (specialized for reasoning)
  • Budget-conscious: DeepSeek V3 (best value champion)
💡
Recommended Setup

• Primary model: DeepSeek V3 (overall #1) or GLM-4.7

• Backup model: Kimi 2.5 (for long documents)

• Budget option: DeepSeek V3

Q20: 龙虾调用Browser工具启动失败Q20: OpenClaw Browser tool fails to start

A: Browser工具启动失败的解决方法:

原因1:浏览器未安装

# 检查Chrome/Edge是否安装
# Windows: 检查 C:\Program Files\Google\Chrome
# Linux: which google-chrome
# Mac: ls "/Applications/Google Chrome.app"

原因2:端口被占用

# 检查端口占用
netstat -ano | findstr :18789

# 杀掉占用进程
taskkill /PID <进程ID> /F

原因3:权限问题

# Linux/Mac:添加执行权限
chmod +x /usr/bin/google-chrome

原因4:依赖缺失(Linux)

# Ubuntu/Debian
sudo apt-get install -y libnss3 libatk1.0-0 libatk-bridge2.0-0 \
libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 \
libxrandr2 libgbm1 libasound2

# CentOS/RHEL
sudo yum install -y nss atk at-spi2-atk cups-libs libdrm \
libxkbcommon libXcomposite libXdamage libXrandr mesa-libgbm alsa-lib

原因5:Headless模式问题

# 在配置中禁用headless模式
# ~/.openclaw/config.json
{
  "browser": {
    "headless": false
  }
}

A: Solutions for Browser tool startup failure:

Cause 1: Browser not installed

# Check if Chrome/Edge is installed
# Windows: Check C:\Program Files\Google\Chrome
# Linux: which google-chrome
# Mac: ls "/Applications/Google Chrome.app"

Cause 2: Port occupied

# Check port usage
netstat -ano | findstr :18789

# Kill the occupying process
taskkill /PID <ProcessID> /F

Cause 3: Permission issue

# Linux/Mac: Add execute permission
chmod +x /usr/bin/google-chrome

Cause 4: Missing dependencies (Linux)

# Ubuntu/Debian
sudo apt-get install -y libnss3 libatk1.0-0 libatk-bridge2.0-0 \
libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 \
libxrandr2 libgbm1 libasound2

# CentOS/RHEL
sudo yum install -y nss atk at-spi2-atk cups-libs libdrm \
libxkbcommon libXcomposite libXdamage libXrandr mesa-libgbm alsa-lib

Cause 5: Headless mode issue

# Disable headless mode in config
# ~/.openclaw/config.json
{
  "browser": {
    "headless": false
  }
}
Q21: 怎么指定龙虾调用Chrome(默认Edge)?Q21: How to specify Chrome for OpenClaw (default is Edge)?

A: 指定浏览器的方法:

方法一:配置文件设置

# 编辑配置文件 ~/.openclaw/config.json
{
  "browser": {
    "browser": "chrome",
    "executablePath": "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
  }
}

方法二:环境变量

# Windows (PowerShell)
$env:BROWSER="chrome"
openclaw gateway start

# Linux/Mac
export BROWSER=chrome
openclaw gateway start

方法三:命令行参数

# 启动时指定浏览器
openclaw gateway start --browser chrome

常见浏览器路径:

浏览器 | Windows路径 | Linux路径
Chrome | C:\Program Files\Google\Chrome\Application\chrome.exe | /usr/bin/google-chrome
Edge | C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe | /usr/bin/microsoft-edge
Firefox | C:\Program Files\Mozilla Firefox\firefox.exe | /usr/bin/firefox

A: Methods to specify the browser:

Method 1: Configuration file

# Edit config file ~/.openclaw/config.json
{
  "browser": {
    "browser": "chrome",
    "executablePath": "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
  }
}

Method 2: Environment variable

# Windows (PowerShell)
$env:BROWSER="chrome"
openclaw gateway start

# Linux/Mac
export BROWSER=chrome
openclaw gateway start

Method 3: Command-line argument

# Specify browser on startup
openclaw gateway start --browser chrome

Common browser paths:

Browser | Windows Path | Linux Path
Chrome | C:\Program Files\Google\Chrome\Application\chrome.exe | /usr/bin/google-chrome
Edge | C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe | /usr/bin/microsoft-edge
Firefox | C:\Program Files\Mozilla Firefox\firefox.exe | /usr/bin/firefox
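If you are unsure which of the paths in the table exists on a given machine, a small probe helps. `first_executable` is a helper written for this FAQ (not an OpenClaw command); it prints the first argument that is an executable file:

```shell
# Print the first path in the argument list that exists and is executable.
# first_executable is our helper for this FAQ, not part of OpenClaw.
first_executable() {
  for p in "$@"; do
    if [ -f "$p" ] && [ -x "$p" ]; then
      printf '%s\n' "$p"
      return 0
    fi
  done
  return 1
}

# Probe the Linux paths from the table above; output depends on the machine.
first_executable \
  /usr/bin/google-chrome \
  /usr/bin/microsoft-edge \
  /usr/bin/firefox \
  || echo "no browser from the table found"
```

Whatever path it prints can be dropped into the `executablePath` field from Method 1.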
Q22: OpenClaw(绑定飞书)每次回复都会引用我的发言,这样会不会导致消费的tokens增加?Q22: OpenClaw (bound to Feishu) quotes my messages with every reply — does this increase token consumption?

A: 是的,引用会增加token消耗,但这是必要的:

为什么会引用:

  • 飞书消息机制:AI需要知道上下文才能正确回复
  • 多轮对话:引用历史消息实现连续对话

影响分析:

  • 正常影响:每轮对话增加约10-30%的token消耗
  • 必要消耗:没有上下文AI无法理解对话

优化方法:

# 方法1:限制引用长度
# ~/.openclaw/config.json
{
  "channels": {
    "lark": {
      "maxQuoteLength": 500  // 只引用最近500字
    }
  }
}

# 方法2:关闭引用(不推荐,会失去上下文)
{
  "channels": {
    "lark": {
      "quoteMessage": false
    }
  }
}
💡
建议

保持引用功能开启,增加的token成本很小,但能保证对话质量。

A: Yes, quoting increases token consumption, but it is necessary:

Why quoting occurs:

  • Feishu message mechanism: AI needs context to reply correctly
  • Multi-turn conversation: Quoting history messages enables continuous dialogue

Impact analysis:

  • Normal impact: Increases token consumption by about 10–30% per conversation turn
  • Necessary cost: Without context, AI cannot understand the conversation

Optimization methods:

# Method 1: Limit quote length
# ~/.openclaw/config.json
{
  "channels": {
    "lark": {
      "maxQuoteLength": 500  // Only quote the last 500 characters
    }
  }
}

# Method 2: Disable quoting (not recommended, loses context)
{
  "channels": {
    "lark": {
      "quoteMessage": false
    }
  }
}
💡
Advice

Keep quoting enabled — the added token cost is small but ensures conversation quality.
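The effect of `maxQuoteLength` can be illustrated in a few lines of shell. This is our sketch of the idea (keep only the tail of the quoted text), not OpenClaw's actual implementation; note that `tail -c` counts bytes, so multi-byte Chinese text may be cut mid-character:

```shell
# Keep only the last `max` bytes of a quoted message before resending it.
# trim_quote is our illustration of the maxQuoteLength idea, not OpenClaw code.
trim_quote() {
  msg=$1
  max=$2
  if [ "${#msg}" -le "$max" ]; then
    printf '%s' "$msg"
  else
    printf '%s' "$msg" | tail -c "$max"
  fi
}

trim_quote "hello world" 5    # prints "world"
```

Shorter quotes pass through untouched, which is why a reasonable limit barely affects conversation quality.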

Q23: API rate limit reached是什么意思?Q23: What does "API rate limit reached" mean?

A: 这是API调用频率限制错误:

原因:

  • 短时间内请求次数超过限制
  • 不同API服务商有不同的限制规则

各平台限制参考:

平台 | 免费额度 | 限制规则
DeepSeek | 500万tokens/月 | 60次/分钟
Kimi | 100万tokens/月 | 30次/分钟
Claude | 按付费套餐 | 根据套餐不同
OpenAI | 按付费套餐 | 根据套餐不同

解决方法:

# 方法1:等待几分钟后重试
# 通常限制会在1-5分钟后解除

# 方法2:配置备用模型
# ~/.openclaw/config.json
{
  "fallbackModel": "kimi-cloud",  // 主模型达到限制时自动切换
  "retryAfter": 60  // 60秒后自动重试
}

# 方法3:升级API套餐
# 访问各平台官网升级付费套餐获得更高限制

# 方法4:使用本地模型(无限制)
openclaw models switch deepseek-local

A: This is an API call frequency limit error:

Cause:

  • Too many requests in a short period
  • Different API providers have different limit rules

Rate limits by platform:

Platform | Free Quota | Rate Limit
DeepSeek | 5M tokens/month | 60 req/min
Kimi | 1M tokens/month | 30 req/min
Claude | Paid plan | Varies by plan
OpenAI | Paid plan | Varies by plan

Solutions:

# Method 1: Wait a few minutes and retry
# Usually the limit lifts after 1–5 minutes

# Method 2: Configure fallback model
# ~/.openclaw/config.json
{
  "fallbackModel": "kimi-cloud",  // Auto-switch when primary model hits limit
  "retryAfter": 60  // Auto-retry after 60 seconds
}

# Method 3: Upgrade API plan
# Visit the platform's website to upgrade for higher limits

# Method 4: Use local model (no limits)
openclaw models switch deepseek-local
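The wait-and-retry advice in Method 1 and the `retryAfter` idea in Method 2 can be approximated in plain shell. This sketch retries an arbitrary command with exponential backoff; `retry` is our helper for this FAQ, and `true` stands in for a real API call:

```shell
# Retry a command up to `max` times, doubling the wait between attempts.
# This mirrors the retryAfter idea; `retry` is our helper, not an OpenClaw command.
retry() {
  max=$1; shift
  delay=1
  i=1
  while [ "$i" -le "$max" ]; do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$max" ]; then
      echo "attempt $i failed, retrying in ${delay}s" >&2
      sleep "$delay"
      delay=$((delay * 2))
    fi
    i=$((i + 1))
  done
  return 1
}

retry 3 true && echo "ok"   # prints "ok" (the stand-in command succeeds at once)
```

Doubling the delay (1s, 2s, 4s, …) is the standard way to back off under a rate limit without hammering the API.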
Q24: 安装飞书官方插件后,启动OpenClaw后的自检无法获取准确状态,提示18789端口被占用,gateway进程僵死(但不影响使用)Q24: After installing the Feishu official plugin, OpenClaw self-check cannot get accurate status, shows port 18789 occupied, gateway process zombie (but does not affect use)

A: 这是进程管理冲突问题,解决方法:

方法一:清理僵尸进程

# Windows
tasklist | findstr openclaw
taskkill /F /IM node.exe

# Linux/Mac
ps aux | grep openclaw
kill -9 $(pgrep -f openclaw)

方法二:检查端口占用

# Windows
netstat -ano | findstr :18789
taskkill /PID <进程ID> /F

# Linux/Mac
lsof -i :18789
kill -9 <进程ID>

方法三:重启服务

# 停止服务
openclaw gateway stop

# 等待5秒
sleep 5

# 启动服务
openclaw gateway start

方法四:更换端口

# 使用其他端口启动
openclaw gateway start --port 18790
⚠️
注意事项

• 如果不影响使用,可以暂时忽略此警告

• 定期重启OpenClaw可避免进程累积

• 建议使用系统服务方式管理(openclaw service install)

A: This is a process management conflict. Solutions:

Method 1: Clean up zombie processes

# Windows
tasklist | findstr openclaw
taskkill /F /IM node.exe

# Linux/Mac
ps aux | grep openclaw
kill -9 $(pgrep -f openclaw)

Method 2: Check port usage

# Windows
netstat -ano | findstr :18789
taskkill /PID <PID> /F

# Linux/Mac
lsof -i :18789
kill -9 <PID>

Method 3: Restart service

# Stop service
openclaw gateway stop

# Wait 5 seconds
sleep 5

# Start service
openclaw gateway start

Method 4: Change port

# Start on a different port
openclaw gateway start --port 18790
⚠️
Notes

• If usage is not affected, you can temporarily ignore this warning

• Regularly restarting OpenClaw avoids process accumulation

• Recommended: Manage as a system service (openclaw service install)

Q25: 怎么让OpenClaw调用大模型的时候不使用深度思考功能?Q25: How to make OpenClaw call the LLM without using the deep thinking feature?

A: 关闭深度思考(Extended Thinking)的方法:

方法一:对话中临时关闭

# 在对话中说明
请直接回答,不要深度思考

或

@fast 请帮我解决这个问题

方法二:配置文件关闭

# ~/.openclaw/config.json
{
  "models": {
    "anthropic": {
      "extendedThinking": false
    }
  }
}

方法三:使用快速模式

# 启用快速模式(响应速度快3倍)
# openclaw gateway start

# 或在对话中使用
/fast 你的问题

深度思考 vs 快速模式对比:

模式 | 响应速度 | 回答质量 | Token消耗
深度思考 | 慢(5-10秒) | 最高 | 高
标准模式 | 中(2-5秒) | 高 | 中
快速模式 | 快(1-2秒) | 良好 | 低
💡
建议

• 简单问题:使用快速模式

• 复杂推理:使用深度思考

• 日常对话:标准模式即可

A: Methods to disable Extended Thinking:

Method 1: Disable in conversation

# Say in conversation
Please answer directly, no deep thinking needed

or

@fast Please help me with this issue

Method 2: Disable in config file

# ~/.openclaw/config.json
{
  "models": {
    "anthropic": {
      "extendedThinking": false
    }
  }
}

Method 3: Use fast mode

# Enable fast mode (3x faster response)
# openclaw gateway start

# Or use in conversation
/fast your question

Deep thinking vs Fast mode comparison:

Mode | Response Speed | Answer Quality | Token Usage
Deep thinking | Slow (5–10s) | Highest | High
Standard mode | Medium (2–5s) | High | Medium
Fast mode | Fast (1–2s) | Good | Low
💡
Advice

• Simple questions: Use fast mode

• Complex reasoning: Use deep thinking

• Daily conversation: Standard mode is fine

Q26: OpenClaw进入面板后提示:origin not allowed (open the Control UI from the gateway host or allow it in gateway.controlUi.allowedOrigins)Q26: OpenClaw panel shows: origin not allowed (open the Control UI from the gateway host or allow it in gateway.controlUi.allowedOrigins)

A: 这是跨域访问限制,解决方法:

原因:

  • 从非本机IP访问控制面板
  • 使用域名访问而非localhost

方法一:从本机访问(推荐)

# 使用localhost访问
http://localhost:18789

# 或使用127.0.0.1
http://127.0.0.1:18789

方法二:配置允许的来源

# ~/.openclaw/config.json
{
  "gateway": {
    "controlUi": {
      "allowedOrigins": [
        "http://localhost:18789",
        "http://127.0.0.1:18789",
        "http://your-ip:18789",
        "http://your-domain.com"
      ]
    }
  }
}

方法三:允许所有来源(不推荐,有安全风险)

# ~/.openclaw/config.json
{
  "gateway": {
    "controlUi": {
      "allowedOrigins": ["*"]
    }
  }
}

方法四:使用Tailscale等内网穿透

# 安装Tailscale后,使用Tailscale IP访问
# 会自动获得HTTPS支持,无跨域问题
⚠️
安全警告

允许所有来源(*)会有安全风险,建议只添加信任的域名/IP。

A: This is a cross-origin access restriction. Solutions:

Cause:

  • Accessing the control panel from a non-local IP
  • Using a domain name instead of localhost

Method 1: Access from local machine (recommended)

# Use localhost
http://localhost:18789

# Or use 127.0.0.1
http://127.0.0.1:18789

Method 2: Configure allowed origins

# ~/.openclaw/config.json
{
  "gateway": {
    "controlUi": {
      "allowedOrigins": [
        "http://localhost:18789",
        "http://127.0.0.1:18789",
        "http://your-ip:18789",
        "http://your-domain.com"
      ]
    }
  }
}

Method 3: Allow all origins (not recommended — security risk)

# ~/.openclaw/config.json
{
  "gateway": {
    "controlUi": {
      "allowedOrigins": ["*"]
    }
  }
}

Method 4: Use Tailscale or similar internal tunnel

# After installing Tailscale, access via Tailscale IP
# Automatically gets HTTPS support with no cross-origin issues
⚠️
Security Warning

Allowing all origins (*) creates security risks. Only add trusted domains/IPs.
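The allow-list check itself is simple to reason about. Here is a toy version of it (our illustration, not OpenClaw's code): an origin passes if it matches an entry exactly, or if the list contains `*`:

```shell
# Toy allowedOrigins check: first arg is the request Origin, the rest is the
# allow-list. "*" matches anything. Illustrative only, not OpenClaw's code.
origin_allowed() {
  origin=$1; shift
  for allowed in "$@"; do
    if [ "$allowed" = "*" ] || [ "$allowed" = "$origin" ]; then
      return 0
    fi
  done
  return 1
}

origin_allowed "http://localhost:18789" "http://localhost:18789" && echo "allowed"
```

This also shows why `*` is risky: it short-circuits every comparison, so any site that can reach the port is accepted.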

Q27: control ui requires device identity (use HTTPS or localhost secure context) 此页面为 HTTP,因此浏览器阻止设备标识。请使用 HTTPS (Tailscale Serve) 或在网关主机上打开Q27: "control ui requires device identity (use HTTPS or localhost secure context)" — browser blocks device identity because page is HTTP. Use HTTPS (Tailscale Serve) or open on gateway host

A: 这需要HTTPS安全上下文,解决方法:

方法一:在本机访问(最简单)

# 在运行OpenClaw的机器上打开浏览器
http://localhost:18789
# 或
http://127.0.0.1:18789

方法二:使用Tailscale(推荐远程访问)

# 1. 安装Tailscale
# Windows: 从官网下载安装
# Linux: curl -fsSL https://tailscale.com/install.sh | sh

# 2. 启动Tailscale
tailscale up

# 3. 使用Tailscale IP访问
https://<tailscale-ip>:18789

方法三:配置HTTPS(高级)

# 使用Let's Encrypt证书
sudo apt install certbot
sudo certbot certonly --standalone -d your-domain.com

# 配置OpenClaw使用HTTPS
# ~/.openclaw/config.json
{
  "gateway": {
    "https": {
      "enabled": true,
      "cert": "/etc/letsencrypt/live/your-domain.com/fullchain.pem",
      "key": "/etc/letsencrypt/live/your-domain.com/privkey.pem"
    }
  }
}

方法四:使用反向代理

# Nginx配置示例
server {
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://127.0.0.1:18789;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
推荐方案

• 本机使用:直接访问 localhost

• 远程访问:使用Tailscale(最简单)

• 公网访问:配置HTTPS + 域名

A: This requires an HTTPS secure context. Solutions:

Method 1: Access locally (simplest)

# Open browser on the machine running OpenClaw
http://localhost:18789
# or
http://127.0.0.1:18789

Method 2: Use Tailscale (recommended for remote access)

# 1. Install Tailscale
# Windows: Download from official website
# Linux: curl -fsSL https://tailscale.com/install.sh | sh

# 2. Start Tailscale
tailscale up

# 3. Access via Tailscale IP
https://<tailscale-ip>:18789

Method 3: Configure HTTPS (advanced)

# Use Let's Encrypt certificate
sudo apt install certbot
sudo certbot certonly --standalone -d your-domain.com

# Configure OpenClaw to use HTTPS
# ~/.openclaw/config.json
{
  "gateway": {
    "https": {
      "enabled": true,
      "cert": "/etc/letsencrypt/live/your-domain.com/fullchain.pem",
      "key": "/etc/letsencrypt/live/your-domain.com/privkey.pem"
    }
  }
}

Method 4: Use reverse proxy

# Nginx configuration example
server {
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://127.0.0.1:18789;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Recommended Solutions

• Local use: Access directly via localhost

• Remote access: Use Tailscale (simplest)

• Public access: Configure HTTPS + domain name

Q28: Clawhub安装技能提示Rate limit exceededQ28: ClawHub skill installation shows "Rate limit exceeded"

A: ClawHub下载频率限制,解决方法:

原因:

  • 短时间内下载次数过多
  • ClawHub服务器有频率限制

解决方法:

方法一:等待后重试

# 通常等待5-10分钟后可继续下载
# 建议分批安装技能,不要一次性安装太多

方法二:使用镜像源

# 配置ClawHub镜像
openclaw config set clawhub.mirror https://mirror.clawhub.cn

# 重新安装
openclaw skills install <技能名>

方法三:从GitHub安装

# 绕过ClawHub直接从GitHub安装
openclaw skills install github:openclaw/skills/<技能名>

方法四:本地安装

# 1. 从GitHub下载技能包
git clone https://github.com/openclaw/skills.git

# 2. 本地安装
openclaw skills install ./skills/<技能名>

方法五:使用认证(提高限制)

# 登录ClawHub账号
openclaw login

# 认证用户有更高的下载限制
💡
建议

• 分批安装技能,每次安装2-3个

• 优先安装必要的技能

• 使用GitHub镜像绕过限制

A: ClawHub download rate limit exceeded. Solutions:

Cause:

  • Too many downloads in a short period
  • ClawHub server has rate limits

Solutions:

Method 1: Wait and retry

# Usually can resume downloading after 5–10 minutes
# Recommend installing skills in batches, not all at once

Method 2: Use mirror source

# Configure ClawHub mirror
openclaw config set clawhub.mirror https://mirror.clawhub.cn

# Reinstall
openclaw skills install <skill-name>

Method 3: Install from GitHub

# Bypass ClawHub and install directly from GitHub
openclaw skills install github:openclaw/skills/<skill-name>

Method 4: Local installation

# 1. Download skill package from GitHub
git clone https://github.com/openclaw/skills.git

# 2. Install locally
openclaw skills install ./skills/<skill-name>

Method 5: Authenticate (higher limits)

# Log in to ClawHub account
openclaw login

# Authenticated users have higher download limits
💡
Advice

• Install skills in batches, 2–3 at a time

• Prioritize essential skills

• Use GitHub mirror to bypass limits

Q29: 调用浏览器工具时报错:[tools] browser failed: Error: Failed to start chrome CDP on port 18800 for profile "openclaw"Q29: Error when calling browser tool: [tools] browser failed: Error: Failed to start chrome CDP on port 18800 for profile "openclaw"

A: Chrome DevTools Protocol启动失败,解决方法:

原因1:端口被占用

# 检查18800端口
# Windows
netstat -ano | findstr :18800
taskkill /PID <进程ID> /F

# Linux/Mac
lsof -i :18800
kill -9 <进程ID>

原因2:Chrome配置文件冲突

# 删除OpenClaw的Chrome配置文件
# Windows
rmdir /s /q "%USERPROFILE%\.openclaw\chrome-profile"

# Linux/Mac
rm -rf ~/.openclaw/chrome-profile

# 重启OpenClaw
openclaw gateway restart

原因3:Chrome未正确安装

# 检查Chrome是否安装
google-chrome --version

# 如未安装
# Ubuntu/Debian
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb

# Windows: 从官网下载安装

原因4:权限问题

# Linux: 确保Chrome有执行权限
sudo chmod +x /usr/bin/google-chrome

# 检查配置目录权限
ls -la ~/.openclaw/

原因5:更换端口

# 配置文件中更改CDP端口
# ~/.openclaw/config.json
{
  "tools": {
    "browser": {
      "cdpPort": 18801  // 使用其他端口
    }
  }
}

原因6:Headless模式问题

# 禁用headless模式尝试
# ~/.openclaw/config.json
{
  "tools": {
    "browser": {
      "headless": false
    }
  }
}
快速解决步骤

1. 删除chrome-profile目录

2. 检查端口是否被占用

3. 重启OpenClaw

4. 如仍有问题,尝试更换端口

A: Chrome DevTools Protocol startup failure. Solutions:

Cause 1: Port occupied

# Check port 18800
# Windows
netstat -ano | findstr :18800
taskkill /PID <PID> /F

# Linux/Mac
lsof -i :18800
kill -9 <PID>

Cause 2: Chrome profile conflict

# Delete OpenClaw's Chrome profile
# Windows
rmdir /s /q "%USERPROFILE%\.openclaw\chrome-profile"

# Linux/Mac
rm -rf ~/.openclaw/chrome-profile

# Restart OpenClaw
openclaw gateway restart

Cause 3: Chrome not properly installed

# Check Chrome installation
google-chrome --version

# If not installed:
# Ubuntu/Debian
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb

# Windows: Download from official website

Cause 4: Permission issue

# Linux: Ensure Chrome has execute permission
sudo chmod +x /usr/bin/google-chrome

# Check config directory permissions
ls -la ~/.openclaw/

Cause 5: Change port

# Change CDP port in config
# ~/.openclaw/config.json
{
  "tools": {
    "browser": {
      "cdpPort": 18801  // Use a different port
    }
  }
}

Cause 6: Headless mode issue

# Try disabling headless mode
# ~/.openclaw/config.json
{
  "tools": {
    "browser": {
      "headless": false
    }
  }
}
Quick Fix Steps

1. Delete chrome-profile directory

2. Check if port is occupied

3. Restart OpenClaw

4. If still failing, try changing port

📚

附录:资源与社区Appendix: Resources & Community

官方资源Official Resources
资源 | 地址
GitHub仓库 | https://github.com/openclaw/openclaw
官方文档 | https://docs.openclaw.ai
技能市场 | https://clawhub.ai
Resource | URL
GitHub Repository | https://github.com/openclaw/openclaw
Official Documentation | https://docs.openclaw.ai
Skills Marketplace | https://clawhub.ai
中文社区Chinese Community
资源 | 地址
OpenClaw CN | https://clawd.org.cn
中文文档 | https://docs.clawd.org.cn
Discord社区 | https://discord.gg/clawd
Resource | URL
OpenClaw CN | https://clawd.org.cn
Chinese Documentation | https://docs.clawd.org.cn
Discord Community | https://discord.gg/clawd
相关工具Related Tools
工具 | 地址 | 说明
Ollama | https://ollama.com | 本地模型
ClawHub | https://clawhub.ai | 技能市场
AgentSkills | https://agentskills.io | 技能标准
Tool | URL | Description
Ollama | https://ollama.com | Local Models
ClawHub | https://clawhub.ai | Skill Marketplace
AgentSkills | https://agentskills.io | Skill Standard
🎯

结语Conclusion

"AI负责效率,人负责判断"

"把重复的交给系统,把判断留给自己"

"快速行动,持续迭代"

"AI handles efficiency, humans handle judgment"

"Let systems handle repetition, keep judgment for yourself"

"Move fast, iterate continuously"

🚀 开始你的AI自动化之旅,用OpenClaw放大你的核心能力!🚀 Start your AI automation journey, amplify your core capabilities with OpenClaw!