Fathom Deep Research Agent

A sophisticated multi-agent research system built with LangGraph and LangChain that conducts comprehensive, multi-layered research on any topic.

Features

  • 🔍 Deep Research - Multi-agent architecture with supervisor and parallel researchers
  • 📊 Real-Time Streaming - Live progress updates and streaming report generation
  • 🔭 Full Observability - Langfuse integration for LLM tracing and cost tracking
  • ⚙️ Highly Configurable - Support for multiple LLM providers and customizable research strategies
  • 📝 Comprehensive Reports - Markdown reports with citations and structured analysis

Quick Start

# 1. Install dependencies
uv sync

# 2. Configure environment
cp .env.example .env
# Edit .env with your API keys

# 3. Run your first research
fathom "What is LangGraph?"

Documentation

📚 Full Documentation - Complete guides and references

Architecture

User Query
    ↓
Clarification Agent → Research Brief
    ↓
Supervisor Agent → Plans research strategy
    ↓
Researcher Agents (parallel) → Conduct focused research
    ↓
Report Generator → Comprehensive markdown report
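The flow above can be sketched in plain Python (illustrative only — the real system is built on LangGraph, and every function and string here is a hypothetical stand-in for the actual agents):

```python
from concurrent.futures import ThreadPoolExecutor

def clarify(query: str) -> str:
    # Clarification agent: turn the raw user query into a research brief.
    return f"Research brief for: {query}"

def plan(brief: str) -> list[str]:
    # Supervisor agent: split the brief into focused sub-topics.
    return [f"{brief} / subtopic {i}" for i in range(1, 4)]

def research(subtopic: str) -> str:
    # Researcher agent: conduct focused research on one sub-topic.
    return f"Findings on {subtopic}"

def generate_report(findings: list[str]) -> str:
    # Report generator: assemble the findings into a markdown report.
    return "# Report\n" + "\n".join(f"- {f}" for f in findings)

def run_pipeline(query: str, concurrency: int = 4) -> str:
    brief = clarify(query)
    subtopics = plan(brief)
    # Researchers run in parallel, mirroring the concurrency setting below.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        findings = list(pool.map(research, subtopics))
    return generate_report(findings)

print(run_pipeline("What is LangGraph?"))
```

The key design point the sketch captures is that only the researcher stage fans out; clarification, planning, and report generation are sequential.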

Configuration

Edit src/fathom/config/config.toml:

[research]
max_depth = 8              # Research iterations
concurrency = 4            # Parallel researchers

[llm]
report_model_name = "kimi-k2-thinking"
research_model_name = "kimi-k2-thinking"
timeout = 60
max_retries = 3

Environment Variables

Required:

API_KEY=your_llm_api_key
BASE_URL=https://api.deepseek.com
TAVILY_API_KEY=your_tavily_key

Optional (for observability):

LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...

See .env.example for complete template.
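A startup check for these variables might look like the following sketch (the variable names match the template above; the helper functions themselves are hypothetical, not Fathom's actual code):

```python
import os

REQUIRED = ["API_KEY", "BASE_URL", "TAVILY_API_KEY"]
OPTIONAL = ["LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY"]

def missing_required() -> list[str]:
    """Names of required variables not set in the environment."""
    return [name for name in REQUIRED if not os.environ.get(name)]

def observability_enabled() -> bool:
    """Langfuse tracing needs both optional keys."""
    return all(os.environ.get(name) for name in OPTIONAL)

if missing_required():
    print("Missing required environment variables:", ", ".join(missing_required()))
```

Treating the Langfuse keys as an all-or-nothing pair avoids half-configured tracing at runtime.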

Example Usage

# Simple research
fathom "What are the latest developments in AI agents?"

# From stdin
echo "Explain quantum computing" | fathom

# View logs
tail -f logs/fathom.log

# Check reports
ls -lt reports/

Recent Improvements

  • Streaming Output - Real-time progress indicators during research
  • Langfuse Integration - Full LLM observability and cost tracking
  • Timeout Prevention - Streaming keeps long-running reasoning-model calls from timing out
  • Better Error Handling - Comprehensive retry logic and graceful degradation

Requirements

  • Python 3.13+
  • LLM API key (DeepSeek, OpenAI, Anthropic, or compatible)
  • Tavily API key for web search
  • Optional: Langfuse account for observability

License

[Add your license here]


Happy Researching! 🔍

For detailed documentation, see docs/README.md
