# Aurora

Lightweight, private memory and code intelligence for AI coding assistants. Multi-agent orchestration that runs locally.
- Private & local - No API keys, no data leaves your machine. Works with Claude Code, Cursor, and 20+ other tools
- Smart Memory - Indexes code and docs locally. Ranks by recency, relevance, and access patterns
- Code Intelligence - LSP-powered: find unused code, check impact before refactoring, semantic search
- Multi-Agent Orchestration - Decompose goals, spawn agents, coordinate with recovery and state
- Execution - Run task lists with guardrails against dangerous commands and scope creep
- Friction Analysis - Extract learned rules from stuck patterns in past sessions
```shell
# New installation
pip install aurora-actr

# Upgrading?
pip install --upgrade aurora-actr
aur --version  # Should show 0.13.2

# Uninstall
pip uninstall aurora-actr

# From source (development)
git clone https://github.com/hamr0/aurora.git
cd aurora && ./install.sh
```

`aur mem search` - Memory with activation decay. Indexes your code using:
- BM25 - Keyword search
- Git signals - Recent changes rank higher
- Tree-sitter/cAST - Code stored as class/method (Python, JS/TS, Go, Java)
- LSP enrichment - Risk level, usage count, complexity (see Code Intelligence below)
- Markdown indexing - Search docs, save tokens
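As a rough illustration of how these signals can combine into one ranking, here is a minimal sketch. This is not Aurora's actual implementation; the decay constant, weights, and function names are assumptions:

```python
import math
import time

def activation(access_times: list[float], now: float, decay: float = 0.5) -> float:
    """ACT-R style base-level activation: frequent, recent accesses score higher.
    Assumed form: ln(sum over accesses of (age in days)^-decay)."""
    ages_days = [max((now - t) / 86400.0, 0.01) for t in access_times]
    return math.log(sum(age ** -decay for age in ages_days))

def hybrid_score(bm25: float, semantic: float, act: float,
                 w_bm25: float = 0.4, w_sem: float = 0.4, w_act: float = 0.2) -> float:
    """Weighted blend of keyword, semantic, and activation signals (weights assumed)."""
    act_norm = 1.0 / (1.0 + math.exp(-act))  # squash raw activation into (0, 1)
    return w_bm25 * bm25 + w_sem * semantic + w_act * act_norm

now = time.time()
recent = activation([now - 86400, now - 2 * 86400], now)  # accessed 1 and 2 days ago
stale = activation([now - 30 * 86400], now)               # accessed a month ago
assert recent > stale  # recency and access frequency both raise activation
```

The key property is that a chunk you touch often keeps surfacing even when its keyword match is mediocre, while untouched chunks decay out of the top results.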
```shell
aur mem index .
aur mem search "soar reasoning" --show-scores
```
```
Searching memory from /project/.aurora/memory.db...
Found 5 results for 'soar reasoning'

Type   File                     Name                    Lines      Risk  Git  Score
code   core.py                  generate_goals_json     1091-1175  MED   8d   0.619
code   soar.py                  <chunk>                 1473-1855  -     1d   0.589
code   orchestrator.py          SOAROrchestrator._c…    2141-2257  HIGH  1d   0.532
code   test_goals_startup_pe…   TestGoalsCommandSta…    190-273    LOW   1d   0.517
code   goals.py                 <chunk>                 437-544    -     7d   0.486

Avg scores: Activation 0.916 | Semantic 0.867 | Hybrid 0.801
Risk: LOW (0-2 refs) | MED (3-10) | HIGH (11+) · MCP: lsp check/impact/related
```
Refine your search:

```
--show-scores     Detailed score breakdown (BM25, semantic, activation)
--show-content    Preview code snippets
--limit N         More results (e.g., --limit 20)
--type TYPE       Filter: function, class, method, kb, code
--min-score 0.5   Higher relevance threshold
```
Detailed score breakdown:

```
core.py | code | generate_goals_json (Lines 1091-1175)
  Final Score: 0.619
  ├─ BM25: 0.895 (exact keyword match on 'goals')
  ├─ Semantic: 0.865 (high conceptual relevance)
  ├─ Activation: 0.014 (accessed 7x, 7 commits, last used 1 week ago)
  ├─ Git: 7 commits, modified 8d ago, 1769419365
  ├─ Files: core.py, test_goals_json.py
  └─ Used by: 2 files, 2 refs, complexity 44%, risk MED
```

Aurora provides fast code intelligence via MCP tools - many operations use ripgrep instead of LSP for 100x speed.
| Tool | Action | Speed | Purpose |
|---|---|---|---|
| `lsp` | `check` | ~1s | Quick usage count before editing |
| `lsp` | `impact` | ~2s | Full impact analysis with top callers |
| `lsp` | `deadcode` | 2-20s | Find all unused symbols in directory |
| `lsp` | `imports` | <1s | Find all files that import a module |
| `lsp` | `related` | ~50ms | Find outgoing calls (dependencies) |
| `mem_search` | - | <1s | Semantic search with LSP enrichment |
Risk levels: LOW (0-2 refs) → MED (3-10) → HIGH (11+)
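The risk buckets map directly onto incoming reference counts. A minimal sketch (thresholds taken from the legend above; the function name is hypothetical):

```python
def risk_level(ref_count: int) -> str:
    """Classify a symbol by how many places reference it:
    LOW (0-2 refs), MED (3-10), HIGH (11+)."""
    if ref_count <= 2:
        return "LOW"
    if ref_count <= 10:
        return "MED"
    return "HIGH"

assert [risk_level(n) for n in (0, 3, 11)] == ["LOW", "MED", "HIGH"]
```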
When to use:

- Before editing: `lsp check` to see what depends on it
- Before refactoring: `lsp impact` to assess risk
- Understanding dependencies: `lsp related` to see what a function calls
- Finding importers: `lsp imports` to see who imports a module
- Finding code: `mem_search` instead of grep for semantic results
- After changes: `lsp deadcode` to clean up orphaned code
Language support:
- Python: Full (LSP + tree-sitter complexity + import filtering + indexing)
- JavaScript/TypeScript: LSP refs + tree-sitter indexing + import filtering
- Go: LSP refs + tree-sitter indexing + import filtering
- Java: LSP refs + tree-sitter indexing + import filtering
See Code Intelligence Guide for all 16 features and implementation details.
`aur goals` - Decomposes any goal into subgoals:
- Looks up existing memory for matches
- Breaks down into subgoals
- Assigns your existing subagents to each subgoal
- Detects capability gaps - tells you what agents to create
Works across any domain (code, writing, research).
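The resulting plan (written to goals.json) can be pictured as a list of subgoals with agent matches; the schema below is an illustrative assumption, not Aurora's exact format:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Subgoal:
    description: str
    agent: str                         # matched subagent, e.g. "@code-developer"
    match: str                         # "excellent" | "acceptable" | "insufficient"
    ideal_agent: Optional[str] = None  # agent worth creating when there is a gap

@dataclass
class GoalPlan:
    goal: str
    subgoals: List[Subgoal] = field(default_factory=list)

    def gaps(self) -> List[Subgoal]:
        """Subgoals without an excellent match, i.e. detected capability gaps."""
        return [s for s in self.subgoals if s.match != "excellent"]

plan = GoalPlan("speed up aur mem search startup", [
    Subgoal("Profile the startup path", "@code-developer", "acceptable",
            ideal_agent="@performance-engineer"),
    Subgoal("Implement lazy loading", "@code-developer", "excellent"),
])
assert [s.description for s in plan.gaps()] == ["Profile the startup path"]
```

Gap detection is just the set of subgoals whose best available agent is weaker than ideal, which is what drives the "create this agent" suggestions.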
```shell
aur goals "how can i improve the speed of aur mem search that takes 30 seconds loading when it starts" -t claude
```
```
Aurora Goals
how can i improve the speed of aur mem search that takes 30 seconds loading when it starts
Tool: claude

Plan Decomposition Summary
  Subgoals: 5

  [++] Locate and identify the 'aur mem search' code in the codebase: @code-developer
  [+]  Analyze the startup/initialization logic to identify performance bottlenecks:
       @code-developer (ideal: @performance-engineer)
  [++] Review system architecture for potential design improvements (lazy loading,
       caching, indexing): @system-architect
  [++] Implement optimization strategies (lazy loading, caching, indexing, parallel
       processing): @code-developer
  [++] Measure and validate performance improvements with benchmarks: @quality-assurance

Summary
  Agent Matching: 4 excellent, 1 acceptable
  Gaps Detected: 1 subgoals need attention
  Context: 1 files (avg relevance: 0.60)
  Complexity: COMPLEX
  Source: soar

  Warnings:
    ! Agent gaps detected: 1 subgoals need attention

  Legend: [++] excellent | [+] acceptable | [-] insufficient
```
`aur soar` - Research questions using your codebase:
- Looks up existing memory for matches
- Decomposes question into sub-questions
- Utilizes existing subagents
- Spawns agents on the fly
- Simple multi-agent orchestration with agent recovery (stateful)
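The phases run as a fixed pipeline: Assess, Retrieve, Decompose, Verify. A simplified sketch of that control flow, where the phase names match the CLI output but the heuristics, agent list, and function names are invented for illustration:

```python
def soar_pipeline(query: str, memory: list, llm_decompose) -> dict:
    """Assess -> Retrieve -> Decompose -> Verify, with spawn-on-miss agent routing."""
    # Phase 1: Assess - crude complexity heuristic (an assumption, not Aurora's real one)
    complexity = "MEDIUM" if len(query.split()) > 8 else "SIMPLE"
    # Phase 2: Retrieve - match indexed memory chunks against the query terms
    matched = [m for m in memory if any(w in m for w in query.lower().split())]
    # Phase 3: Decompose - delegate subgoal extraction to the LLM
    subgoals = llm_decompose(query)
    # Phase 4: Verify - every subgoal needs an agent; spawn one when none matches
    known_agents = {"@code-developer", "@system-architect"}
    routed = [(sg, agent if agent in known_agents else f"{agent} (spawned)")
              for sg, agent in subgoals]
    return {"complexity": complexity, "matched": len(matched), "routed": routed}

result = soar_pipeline(
    "write a 3 paragraph sci-fi story about a bug that gained llm consciousness",
    memory=["story drafts", "bug tracker notes"],
    llm_decompose=lambda q: [("Write the story", "@creative-writer")],
)
assert "spawned" in result["routed"][0][1]  # no matching agent, so one is spawned
```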
```shell
aur soar "write a 3 paragraph sci-fi story about a bug that gained llm consciousness" -t claude
```
```
Aurora SOAR
write a 3 paragraph sci-fi story about a bug that gained llm consciousness
Tool: claude

Initializing...
[ORCHESTRATOR] Phase 1: Assess
  Analyzing query complexity...
  Complexity: MEDIUM
[ORCHESTRATOR] Phase 2: Retrieve
  Looking up memory index...
  Matched: 10 chunks from memory
[LLM → claude] Phase 3: Decompose
  Breaking query into subgoals...
  ✓ 1 subgoals identified
[LLM → claude] Phase 4: Verify
  Validating decomposition and assigning agents...
  ✓ PASS (1 subgoals routed)

Plan Decomposition
 #  Subgoal                                         Agent               Match
 1  Write a 3-paragraph sci-fi short story about…   @creative-writer*   ✓ Spawned

Summary
  1 subgoal • 0 assigned • 1 spawned

  Spawned (no matching agent): @creative-writer
```

`aur spawn` - Takes a predefined task list and executes it with:
- Stop gates for feature creep
- Dangerous command detection (rm -rf, etc.)
- Budget limits
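The dangerous-command guardrail amounts to a pattern screen run before each task executes. A minimal sketch; the patterns and function name are illustrative, and Aurora's real deny-list is likely broader:

```python
import re

# Illustrative deny-list; Aurora's actual rules are not shown here.
DANGEROUS = [
    r"\brm\s+-[rf]{1,2}\b",        # rm -rf / rm -r / rm -f
    r"\bgit\s+push\s+--force\b",   # history rewrites on shared branches
    r">\s*/dev/sd[a-z]\b",         # writing directly to a block device
]

def is_dangerous(command: str) -> bool:
    """Return True when a task's shell command matches a known destructive pattern."""
    return any(re.search(p, command) for p in DANGEROUS)

assert is_dangerous("rm -rf build/")
assert not is_dangerous("rm build/old.log")
```

A matching command would trip a stop gate rather than run, which is the same mechanism that catches scope creep against the task list.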
```shell
aur spawn tasks.md --verbose
```

`aur friction` - Analyze stuck patterns across your coding sessions:

```shell
aur friction ~/.claude/projects
```
```
Per-Project:
  my-app       56% BAD (40/72)   median: 16.0
  api-service  40% BAD (2/5)     median: 0.5
  web-client    0% BAD (0/1)     median: 0.0

Session Extremes:
  WORST: aurora/0203-1630-11eb903a       peak=225  turns=127
  BEST:  liteagents/0202-2121-8d8608e1   peak=0    turns=4

Last 2 Weeks:
  2026-02-02  15 sessions  10 BAD  ███████░░░  67%
  2026-02-03  29 sessions  12 BAD  ████░░░░░░  41%
  2026-02-04   6 sessions   2 BAD  ███░░░░░░░  33%

Verdict: ✓ USEFUL
Intervention predictability: 93%
```

Identifies sessions where you got stuck and extracts learned rules ("antigens") to add to CLAUDE.md or your AI tool's instructions - preventing the same mistakes.
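The per-project numbers (% BAD and median) fall out of a simple aggregation over per-session peak friction scores. A sketch with an assumed BAD threshold and hypothetical function name:

```python
from statistics import median

def project_friction(peaks: list, bad_threshold: float = 1.0) -> dict:
    """Summarize one project's sessions: how many were flagged BAD and the
    median peak friction. The threshold separating good from BAD is an assumption."""
    bad = sum(1 for p in peaks if p >= bad_threshold)
    return {
        "bad": bad,
        "total": len(peaks),
        "pct_bad": round(100 * bad / len(peaks)) if peaks else 0,
        "median_peak": median(peaks) if peaks else 0.0,
    }

# Five hypothetical sessions with peak friction scores:
stats = project_friction([0, 16, 32, 0.5, 225])
assert stats["bad"] == 3 and stats["total"] == 5
```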
```
Terminal                     In your AI tool (Claude Code, Cursor, etc.)
─────────────────────────────────────────────────────────────────────────
aur init
aur goals "Add auth"   →     /aur:plan add-auth     →     /aur:implement add-auth
        ↓                            ↓                             ↓
   goals.json                  PRD + tasks.md                 Code changes
(subgoals, agents)           (ready to execute)                (validated)
```
| Step | Command | Output |
|---|---|---|
| Setup (once) | `aur init` + complete project.md | `.aurora/` directory, indexed codebase |
| Decompose | `aur goals "goal"` | Subgoals mapped to agents + source files |
| Plan | `/aur:plan [id]` | PRD, design doc, tasks.md |
| Implement | `/aur:implement [id]` | Code changes with validation |
| Regen tasks | `/aur:tasks [id]` | Regenerate tasks after PRD edits (optional) |

Quick prototype? Skip `aur goals` and run `/aur:plan` directly.

See the 3 Simple Steps Guide for a detailed walkthrough.
```shell
# Install (or upgrade with --upgrade flag)
pip install aurora-actr

# Initialize project (once per project)
cd your-project/
aur init  # Creates .aurora/project.md

# IMPORTANT: Complete .aurora/project.md manually
# Ask your agent: "Please complete the project.md with our architecture and conventions"
# This context improves planning accuracy

# Index codebase for memory
aur mem index .

# Plan with memory context
aur goals "Add user authentication"

# In your CLI tool (Claude Code, Cursor, etc.):
/aur:plan add-user-authentication
/aur:implement add-user-authentication
```

| Command | Description |
|---|---|
| `aur init` | Initialize Aurora in project |
| `aur doctor` | Check installation and dependencies |
| `aur mem index .` | Index code and docs |
| `aur mem search "query"` | Search memory from terminal |
| `aur goals "goal"` | Decompose goal, match agents, find gaps |
| `aur soar "question"` | Multi-agent research with memory |
| `aur spawn tasks.md` | Execute task list with guardrails |
| `aur friction <dir>` | Analyze session friction patterns |
| Command | Description |
|---|---|
| `/aur:plan [id]` | Generate PRD, design, tasks from goal |
| `/aur:tasks [id]` | Regenerate tasks after PRD edits |
| `/aur:implement [id]` | Execute plan tasks sequentially |
| `/aur:archive [id]` | Archive completed plan |
Works with 20+ CLI tools: Claude Code, Cursor, Aider, Cline, Windsurf, Gemini CLI, and more.
Configuration is per-project (not global) to keep your CLI clean:
```shell
cd /path/to/project
aur init --tools=claude,cursor
```

MIT License - See LICENSE