Declarative AI pipelines for the command line.
Define LLM workflows in YAML. Run Claude Code, Codex, and Gemini CLI in parallel. Version control everything.
🌐 comanda.sh — Getting started, features, and templates
```sh
brew install kris-hansen/comanda/comanda
```

Or: `go install github.com/kris-hansen/comanda@latest` · Releases
```sh
comanda configure                              # set up API keys
comanda generate "review this code for bugs"   # generate a workflow from plain English
comanda process workflow.yaml                  # run a workflow
```

A multi-agent workflow runs several models in parallel, then combines their outputs:

```yaml
parallel-process:
  claude:
    input: STDIN
    model: claude-code
    action: "Analyze architecture"
    output: $CLAUDE
  gemini:
    input: STDIN
    model: gemini-cli
    action: "Identify patterns"
    output: $GEMINI

synthesize:                  # runs after both parallel steps complete
  input: "Claude: $CLAUDE\nGemini: $GEMINI"
  model: claude-code
  action: "Combine into recommendations"
  output: STDOUT
```

```sh
cat main.go | comanda process multi-agent.yaml
```

- Multi-agent — Claude Code, Gemini CLI, OpenAI Codex in parallel
- Agentic loops — Iterative refinement with tool use
- Codebase indexing — Persistent code context across workflows
- Git worktrees — Parallel execution in isolated branches
- All the I/O — Files, URLs, databases, images, chunking
See comanda.sh/features for full details.
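The same step fields (`input`, `model`, `action`, `output`) compose into simpler single-step pipelines as well. As a minimal sketch (only the field names come from the multi-agent example above; the file-path `input` and `output` values are assumptions for illustration, based on the file I/O feature listed above):

```yaml
# summarize.yaml: hypothetical single-step workflow
summarize:
  input: docs/design.md      # assumption: a file path as the step input
  model: claude-code
  action: "Summarize the key decisions in this document"
  output: summary.md         # assumption: writing the result to a file
```

If file I/O works this way, it would run with the same command as the quickstart: `comanda process summarize.yaml`.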
MIT
