CrabClaw is an OpenClaw-compatible agentic coding toolchain written in Rust.
- Multi-channel: CLI, interactive REPL, and Telegram bot with whitelist access control
- Model agnostic: OpenAI-compatible (Chat Completions), native Anthropic (Messages API), and Codex (Responses API via OAuth)
- AgentLoop: Unified abstraction (route → model → tool → tape) in a single `handle_input` call
- Skill engine: Auto-discovers `.agent/skills/` and bridges them as LLM-callable tools
- Shell execution: Run shell commands via `,git status` or the `shell.exec` tool, with failure self-correction
- File operations: `file.read`, `file.write`, `file.edit`, `file.list`, `file.search` with workspace-sandboxed security
- Assistant routing: Comma-command auto-execution from assistant output is opt-in (`CRABCLAW_ENABLE_ASSISTANT_COMMANDS=true`)
- Tool calling loop: Up to 5-iteration autonomous reasoning in REPL and Telegram
- Progressive tool view: Token-efficient tool hinting — full schemas expand on demand
- Tape system: Append-only JSONL session recording with anchors, search, handoff, and context truncation
- System prompt: 3-tier priority — config override > `.agent/system-prompt.md` > built-in default
- Profile resolution: `.env.local`, environment variables, and CLI flags with deterministic precedence
- Install the stable Rust toolchain.
- Authenticate (choose one):

  ```shell
  # Option A: API key
  cp .env.example .env.local
  # Edit .env.local — set API_KEY, BASE_URL, MODEL (e.g. MODEL=openai:gpt-4o)

  # Option B: OAuth (use your ChatGPT Plus/Pro subscription)
  cargo run -- auth login
  ```
- Build and verify:

  ```shell
  cargo build && cargo test
  ```
- Choose your mode:

  ```shell
  cargo run -- interactive        # Interactive REPL
  cargo run -- run --prompt "..." # One-shot CLI
  cargo run -- serve              # Telegram bot (requires TELEGRAM_BOT_TOKEN)
  cargo run -- auth status       # Check auth status
  ```
CrabClaw supports three provider modes. All models must have a provider prefix:
| Prefix | Provider | API Format | Auth | Example |
|---|---|---|---|---|
| `openai:` | OpenAI-compatible | Chat Completions | API_KEY | `openai:gpt-4o` |
| `anthropic:` | Anthropic | Messages API | API_KEY | `anthropic:claude-sonnet-4-20250514` |
| `codex:` | OpenAI Codex | Responses API | OAuth | `codex:gpt-5.3-codex` |
Works with OpenAI, OpenRouter, GLM, DeepSeek, or any OpenAI-compatible endpoint.
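As a rough illustration of how such a mandatory prefix could be handled, here is a minimal sketch of splitting a `provider:model` string. The `Provider` enum and `parse_model` function are assumptions for illustration, not CrabClaw's actual types.

```rust
// Illustrative only: CrabClaw's real parsing may differ.
#[derive(Debug, PartialEq)]
enum Provider {
    OpenAi,
    Anthropic,
    Codex,
}

/// Splits "openai:gpt-4o" into (Provider::OpenAi, "gpt-4o").
/// Returns None when the prefix is missing or unrecognized.
fn parse_model(spec: &str) -> Option<(Provider, &str)> {
    let (prefix, model) = spec.split_once(':')?;
    let provider = match prefix {
        "openai" => Provider::OpenAi,
        "anthropic" => Provider::Anthropic,
        "codex" => Provider::Codex,
        _ => return None,
    };
    Some((provider, model))
}

fn main() {
    assert_eq!(parse_model("openai:gpt-4o"), Some((Provider::OpenAi, "gpt-4o")));
    assert_eq!(parse_model("gpt-4o"), None); // a bare model name is rejected
}
```

Requiring the prefix up front keeps model routing unambiguous even when the same model name exists on several providers.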
```shell
# .env.local
API_KEY=sk-xxx
BASE_URL=https://api.openai.com/v1 # or https://openrouter.ai/api/v1
MODEL=openai:gpt-4o                # or anthropic:claude-sonnet-4-20250514
```

Uses your ChatGPT subscription quota — no API credits needed.
```shell
# Step 1: Login via browser
cargo run -- auth login
```

```shell
# Step 2: Configure model
# .env.local
MODEL=codex:gpt-5.3-codex
# No API_KEY or BASE_URL needed — Codex uses the chatgpt.com backend
```

Available Codex models: `gpt-5.3-codex`, `gpt-5-codex`, `gpt-5.1-codex-mini`.
```shell
cargo run -- auth login  # Open browser for ChatGPT OAuth login
cargo run -- auth status # Check token expiry and refresh status
cargo run -- auth logout # Remove stored tokens
```

Tokens are stored in `~/.crabclaw/auth.json` with automatic refresh.
Settings resolve in this order (first wins):
- CLI flags (`--api-key`, `--api-base`, `--model`)
- Profile-specific env vars (`PROFILE_<NAME>_API_KEY`)
- Environment variables (`API_KEY`, `BASE_URL`, `MODEL`)
- `.env.local` file
- OAuth tokens (fallback when no `API_KEY` is set)
- Built-in defaults (`MODEL=openai:gpt-4o`)
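First-wins precedence like this maps naturally onto a chain of `Option` fallbacks. The sketch below is an assumption about how such resolution could look, not CrabClaw's actual internals; the function name and signature are hypothetical.

```rust
// Illustrative first-wins resolution: each source is tried in priority
// order and the first Some(...) wins, ending at the built-in default.
fn resolve_model(
    cli_flag: Option<&str>,    // --model
    profile_env: Option<&str>, // PROFILE_<NAME>_... vars
    env_var: Option<&str>,     // MODEL
    env_local: Option<&str>,   // .env.local
) -> String {
    cli_flag
        .or(profile_env)
        .or(env_var)
        .or(env_local)
        .unwrap_or("openai:gpt-4o") // built-in default
        .to_string()
}

fn main() {
    // A CLI flag beats every other source.
    assert_eq!(
        resolve_model(Some("codex:gpt-5.3-codex"), None, Some("openai:gpt-4o"), None),
        "codex:gpt-5.3-codex"
    );
    // With nothing set, the built-in default wins.
    assert_eq!(resolve_model(None, None, None, None), "openai:gpt-4o");
}
```

The benefit of a single resolution chain is determinism: the same inputs always produce the same effective settings, regardless of load order.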
```shell
CODEX_REASONING_EFFORT=high # low | medium | high (default: high)
```

By default, assistant text is treated as plain output and not executed as comma-commands. To opt in:

```shell
CRABCLAW_ENABLE_ASSISTANT_COMMANDS=true
```

Only enable this in trusted environments.
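A safe default for a gate like this is "off unless the value is exactly `true`". The helper below is a hypothetical sketch of that check, not CrabClaw's real code.

```rust
use std::env;

// Illustrative opt-in gate: only the literal string "true" enables
// auto-execution; absence or any other value keeps it disabled.
fn assistant_commands_enabled(raw: Option<String>) -> bool {
    raw.as_deref() == Some("true")
}

fn main() {
    let flag = env::var("CRABCLAW_ENABLE_ASSISTANT_COMMANDS").ok();
    if assistant_commands_enabled(flag) {
        println!("assistant comma-commands will be auto-executed");
    } else {
        println!("assistant text is treated as plain output");
    }
}
```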
In REPL or Telegram, prefix commands with `,`:

```
,help                      Show all commands
,tools                     List registered tools
,tool.describe file.read   Show tool parameters
,git status                Execute shell command
,tape.search <query>       Search conversation history
,handoff                   Reset context window
```
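The split between comma-commands and natural language comes down to one prefix check. The enum and function below are illustrative assumptions, not CrabClaw's actual routing types.

```rust
// Illustrative routing: a leading comma selects the command handler,
// everything else goes to the LLM as natural language.
#[derive(Debug, PartialEq)]
enum Route<'a> {
    Command(&'a str), // dispatched to the comma-command handler
    Llm(&'a str),     // sent to the model
}

fn route(input: &str) -> Route<'_> {
    match input.strip_prefix(',') {
        Some(cmd) => Route::Command(cmd),
        None => Route::Llm(input),
    }
}

fn main() {
    assert_eq!(route(",git status"), Route::Command("git status"));
    assert_eq!(route("Read the Cargo.toml"), Route::Llm("Read the Cargo.toml"));
}
```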
Natural language input goes to the LLM, which can autonomously call tools:
```
> Read the Cargo.toml and tell me the project version
[tool] file.read → 1432 chars
The project version is 0.1.0...
```
```shell
# Enable pre-commit hook (runs cargo fmt + clippy before each commit)
git config core.hooksPath .githooks
```

```shell
cargo test              # Run all tests (unit + integration + live if configured)
cargo clippy            # Lint check
cargo fmt               # Format
./scripts/smoke-test.sh # Full verification (build + clippy + tests + live API)
```

| Suite | Command | Description |
|---|---|---|
| Unit tests | `cargo test --lib` | All unit tests |
| CLI | `cargo test --test cli_run` | CLI flag parsing, dry-run |
| AgentLoop | `cargo test --test agent_loop_*` | Routing, tool calling |
| Telegram | `cargo test --test telegram_*` | Channel routing, providers |
| OpenAI-compatible | `cargo test --test openai_provider_integration` | Reply, tool call, error, rate limit |
| Live E2E | `cargo test --test live_integration` | Requires `API_KEY` in `.env.local` |
- Architecture (EN) | 中文
- Feature test plan: `docs/test-plans/phase-1-mvp.md`
- Architecture decisions: `docs/adr/`
Inspired by bub.