Define your goal. Launch the dashboard. Watch AI agents plan, execute, and validate your software — with you in control.
Example: "Build a drag-and-drop image compressor" → 3 agents (developer, reviewer, designer) → Working app with tests passing → 45 minutes.
Sgai turns software development into a goal-driven, multi-agent workflow.
Instead of prompting step-by-step, you:
- Define the outcome — "Build a music sequencer web app"
- Agents plan the work — Breaking it into a visual workflow diagram of tasks
- You supervise — Watch progress, answer questions when agents need guidance
- Success checks — Tests, linting, or other validation determines "done"
Not autocomplete. Not a chat window. A local AI software factory.
- See what's happening — Visual workflow diagram instead of hidden AI reasoning
- Multiple specialists — Developer writes code, reviewer checks it, safety analyst validates
- Approve before execution — Review the plan and answer questions, then agents work autonomously
- Proof of completion — Tests must pass before work is marked done
- Works locally — Runs in your repository, nothing leaves your machine
```shell
opencode upgrade
opencode auth login
opencode --model anthropic/claude-opus-4-6 run "install Sgai using the instructions from https://github.com/sandgardenhq/sgai/blob/main/INSTALLATION.md"
```

This runs the official installation guide automatically and launches a demo workspace.
Required: Go, Node.js, bun, opencode
Recommended: jj (version control), tmux (session management), ripgrep (code search), Graphviz (diagram rendering)
```shell
go install github.com/sandgardenhq/sgai/cmd/sgai@latest
```

Or build from source:

```shell
git clone https://github.com/sandgardenhq/sgai.git
cd sgai
cd cmd/sgai/webapp && bun install && cd ../../..
make build
```

See INSTALLATION.md for details.
```shell
sgai serve
```

Open: http://localhost:8080
📺 Prefer watching? See the demo → https://youtu.be/NYmjhwLUg8Q
Most users create goals using the built-in wizard.
Goals are stored in GOAL.md and describe outcomes — not implementation steps.
Example GOAL.md:
```markdown
---
flow: |
  "backend-developer" -> "code-reviewer"
completionGateScript: make test
interactive: yes
---
# Build a REST API

Create endpoints for user registration and login with JWT auth.

- [ ] POST /register validates email, hashes password
- [ ] POST /login returns JWT token
- [ ] Tests pass before completion
```

See GOAL.example.md for full reference.
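For illustration, the `---`-delimited layout shown above could be split into frontmatter and body with a few lines of Python. This is a sketch of the file format, not Sgai's actual parser:

```python
def split_goal(text):
    """Split a GOAL.md document into (frontmatter, body) strings,
    assuming the '---'-delimited layout shown above.
    Illustrative sketch only, not Sgai's real parser."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("expected frontmatter opening '---'")
    end = lines.index("---", 1)  # closing delimiter
    frontmatter = "\n".join(lines[1:end])
    body = "\n".join(lines[end + 1:])
    return frontmatter, body

goal = """---
flow: |
  "backend-developer" -> "code-reviewer"
completionGateScript: make test
interactive: yes
---
# Build a REST API
"""
front, body = split_goal(goal)
print("completionGateScript: make test" in front)   # → True
print(body.strip().startswith("# Build a REST API"))  # → True
```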
Agent Aliases:
You can create agent aliases that reuse an existing agent's prompt and tools with a different model. This lets you run the same agent role at different cost/capability tiers:
```markdown
---
flow: |
  "backend-go-developer-lite" -> "go-readability-reviewer"
alias:
  "backend-go-developer-lite": "backend-go-developer"
models:
  "backend-go-developer-lite": "anthropic/claude-haiku-4-5"
---
```

An aliased agent inherits everything from its base agent (prompt, tools, snippets) but uses its own model configuration. Aliased agents appear like any other agent in the workflow.
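Conceptually, alias resolution amounts to copying the base agent's configuration and overriding the model. The sketch below illustrates that idea only; it is not Sgai's actual implementation, and the agent fields are hypothetical:

```python
def resolve_aliases(agents, alias_map, models):
    """Materialize aliased agents: each alias inherits its base agent's
    full configuration (prompt, tools, snippets, ...), then applies its
    own model override. Illustrative sketch, not Sgai internals."""
    resolved = dict(agents)
    for alias, base in alias_map.items():
        cfg = dict(agents[base])                      # inherit everything
        cfg["model"] = models.get(alias, cfg.get("model"))  # override model
        resolved[alias] = cfg
    return resolved

agents = {
    "backend-go-developer": {
        "prompt": "You write Go services.",   # hypothetical fields
        "tools": ["edit", "bash"],
        "model": "anthropic/claude-opus-4-6",
    },
}
out = resolve_aliases(
    agents,
    {"backend-go-developer-lite": "backend-go-developer"},
    {"backend-go-developer-lite": "anthropic/claude-haiku-4-5"},
)
print(out["backend-go-developer-lite"]["model"])   # → anthropic/claude-haiku-4-5
print(out["backend-go-developer-lite"]["prompt"])  # inherited from the base agent
```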
Sgai breaks your goal into a workflow diagram of coordinated agents with defined roles.
Dependencies are explicit. Execution is visible.
Before execution begins, agents ask clarifying questions about your goal.
Once you approve the plan, agents work autonomously — executing tasks, running tests, and validating completion.
You can:
- Monitor real-time progress (optional)
- Interrupt execution if needed
- Review diffs and session history
- Fork sessions to try different approaches
Most of the time, you approve the plan and come back when it's done.
Sgai extracts reusable skills and code snippets from completed sessions — your agents get smarter over time.
Sgai exposes two integration paths for AI agents and harnesses — MCP tools and HTTP skills — so Claude Code, Codex, or any MCP-capable assistant can orchestrate Sgai programmatically.
When you run sgai serve, the MCP endpoint is available on the same port as the web UI:
```
sgai serve listening on http://127.0.0.1:8080
```
The MCP endpoint is at /mcp/external on the main server. Connect any MCP-capable harness to it:
```shell
npx mcporter list --http-url http://127.0.0.1:8080/mcp/external --allow-http
```

Configure in OpenCode:

Replace `8080` with the actual port if you use a custom `--listen-addr`.
35+ tools mirror the full web UI — workspace lifecycle, session control, human interaction, monitoring, knowledge, compose, and adhoc. Key tools:
| Tool | What it does |
|---|---|
| `list_workspaces` | Discover all workspaces and their status |
| `start_session` | Launch an agent session (with optional auto-drive mode) |
| `respond_to_question` | Answer a pending agent question |
| `wait_for_question` | Block until an agent needs human input (MCP elicitation) |
Sgai also ships a set of agentskills.io-conformant skills for harnesses that prefer plain HTTP.
Entrypoint: docs/sgai-skills/using-sgai/SKILL.md
The core pattern is a cyclical probe/poll/act loop:
```
LOOP:
  1. PROBE → GET /api/v1/state          # Discover workspaces + status
  2. CHECK → pendingQuestion != null?   # Does any workspace need input?
  3. ACT   → start, steer, or respond   # Take action based on state
  4. WAIT  → poll again after delay
```
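The loop above can be sketched in Python. Here `fetch_state` is a stand-in for `GET /api/v1/state` (stubbed so the sketch runs offline), and the `act` callback is hypothetical, representing whatever start/steer/respond action your harness takes:

```python
import time

def drive(fetch_state, act, delay=2.0, max_iterations=None):
    """Cyclical probe/poll/act loop over Sgai workspaces.

    fetch_state: callable returning the decoded /api/v1/state payload.
    act: callable invoked with each workspace that needs human input.
    Illustrative sketch under the payload shape shown in this README.
    """
    i = 0
    while max_iterations is None or i < max_iterations:
        state = fetch_state()                          # 1. PROBE
        for ws in state.get("workspaces", []):
            if ws.get("pendingQuestion") is not None:  # 2. CHECK
                act(ws)                                # 3. ACT
        time.sleep(delay)                              # 4. WAIT
        i += 1

# Stubbed state so the sketch runs without a live server.
sample = {"workspaces": [
    {"name": "demo", "pendingQuestion": {"text": "Which DB?"}},
    {"name": "idle", "pendingQuestion": None},
]}
answered = []
drive(lambda: sample, lambda ws: answered.append(ws["name"]),
      delay=0, max_iterations=1)
print(answered)  # → ['demo']
```

In a real harness, `fetch_state` would issue the HTTP request and `act` would call the respond or steer endpoints; only workspaces with a non-null `pendingQuestion` are touched.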
Example probe:
```shell
curl -s http://127.0.0.1:8080/api/v1/state | jq '.workspaces[0].pendingQuestion'
```

Full reference in docs/sgai-skills/ — seven sub-skills covering workspace-management, session-control, human-interaction, monitoring, knowledge, compose, and adhoc.
- Agents operate inside your local repository
- Changes go through your version control (we recommend jj, but Git works)
- Sgai does not automatically push to remote repositories
You stay in control.
Contributions happen through specifications, not code.
Why specification files instead of code?
Sgai uses configurable AI engines under the hood, but it is the opinionated experience layer on top of them. Specifications are translated into implementation by AI. Source code is generated output, not the source of truth. Contributing specs means:
- We discuss what to build, not how to build it
- Conversations lead to better outcomes than isolated code changes
- Maintainers can validate proposals against the current implementation
How to contribute:
- Create a spec file in `GOALS/` following the naming convention `YYYY_MM_DD_summarized_name.md` (e.g., `2025_12_23_add_parallel_execution.md`)
- Submit a PR with your spec proposal
- Maintainers will discuss the proposal and, if accepted, run the specification against the current implementation to validate it
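The naming convention above can be checked mechanically. This is a small illustrative helper, not a tool Sgai ships:

```python
import re

# YYYY_MM_DD_summarized_name.md, e.g. 2025_12_23_add_parallel_execution.md
SPEC_NAME = re.compile(r"^\d{4}_\d{2}_\d{2}_[a-z0-9_]+\.md$")

def is_valid_spec_name(filename):
    """Check a GOALS/ spec filename against the naming convention.
    Illustrative helper only."""
    return bool(SPEC_NAME.match(filename))

print(is_valid_spec_name("2025_12_23_add_parallel_execution.md"))  # → True
print(is_valid_spec_name("add_parallel_execution.md"))             # → False
```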
All are welcome. Questions? Open an issue.
See the GOALS directory for examples.
Found a bug or have a feature request? Open an issue →
Want to discuss ideas or share what you built? Start a discussion →
Developer documentation lives in docs/, produced by Doc Holiday, of course!
https://github.com/sandgardenhq/sgai/blob/main/LICENSE
Sgai was created by Ulderico Cirello, and is maintained by Sandgarden.