A local, deterministic SIEM demo application that ingests synthetic security telemetry, runs correlation detections, and supports AI-assisted alert triage.
This project is meant to demonstrate an end-to-end SOC workflow in a small local stack:
- generate realistic synthetic events (Windows, macOS, DNS, firewall)
- normalize raw payloads into a canonical schema
- run deterministic detections to create alerts
- inspect alerts in a lightweight UI
- run AI triage that cites evidence and saves case notes
It is a demo and learning project, not a production SIEM.
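To make the normalization step concrete, here is a minimal sketch of mapping a raw payload into a canonical event. The field names below are illustrative assumptions, not the project's actual schema:

```python
# Minimal sketch of normalizing a raw payload into a canonical event.
# All field names here are hypothetical, for illustration only.
def normalize(raw: dict, source: str) -> dict:
    return {
        "source": source,  # e.g. "windows", "macos", "dns", "firewall"
        "timestamp": raw.get("ts") or raw.get("timestamp"),
        "host": raw.get("host") or raw.get("computer_name"),
        "user": raw.get("user") or raw.get("account"),
        "action": raw.get("event_type", "unknown"),
        "raw": raw,  # keep the original payload so alerts can cite evidence
    }

event = normalize(
    {"ts": "2024-01-01T00:00:00Z", "host": "WIN-01", "event_type": "process_start"},
    "windows",
)
```

Keeping the raw payload alongside the normalized fields is what lets later stages (detections, triage) cite original evidence.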
The app is split into four parts:
- `backend/`: FastAPI app + SQLite storage + normalization + detection engine + triage endpoint
- `frontend/`: static HTML/CSS/JS UI served by the backend at `/ui`
- `demo/`: synthetic data seeding, reset, one-command launcher, smoke test
- `mcp_server/`: MCP-compatible tool implementations (and optional standalone MCP server)
Main runtime flow:
- Synthetic events are inserted into SQLite.
- Detection rules evaluate event streams and create alerts.
- UI reads `/alerts` and alert detail endpoints.
- Triage endpoint gathers evidence via MCP tools and calls an LLM.
- Triage output is validated and persisted as a case note.
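The final validation step might look like the sketch below; the required keys and severity values are assumptions about the triage output shape, not the project's real schema:

```python
# Hypothetical validation of LLM triage output before persisting it as a
# case note. Key names and allowed severities are illustrative only.
REQUIRED_KEYS = {"summary", "severity", "evidence"}
ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}

def validate_triage(output: dict) -> dict:
    missing = REQUIRED_KEYS - output.keys()
    if missing:
        raise ValueError(f"triage output missing keys: {sorted(missing)}")
    if output["severity"] not in ALLOWED_SEVERITIES:
        raise ValueError(f"invalid severity: {output['severity']!r}")
    return output

note = validate_triage({
    "summary": "Likely password spray followed by a successful logon.",
    "severity": "high",
    "evidence": ["event:123", "event:124"],
})
```

Validating before persisting means a malformed LLM response fails loudly instead of producing a broken case note.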
```
SIEM/
  backend/
    app/
    data/
    prompts/
  frontend/
  demo/
    run_demo.py
    reset_demo.py
    seed_scenarios.py
    smoke_test.sh
  mcp_server/
```
- Python 3.9+
- `curl` and `sqlite3` (for the smoke test)
- Optional: `jq` (improves smoke test parsing)
- Optional: OpenAI-compatible API credentials for AI triage
From the repo root (`SIEM/`):

```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -r backend/requirements.txt -r mcp_server/requirements.txt
```

From the repo root:

```shell
python3 demo/run_demo.py
```

By default, this command:
- starts the backend on `127.0.0.1:8000` (or reuses an existing healthy backend)
- resets demo tables and seeds initial synthetic data
- keeps seeding synthetic data every 45 seconds
UI URL: `http://127.0.0.1:8000/ui`
```shell
# seed once, disable continuous seeding
python3 demo/run_demo.py --seed-interval-seconds 0

# keep existing DB rows, still run app
python3 demo/run_demo.py --no-reset-first

# enable backend autoreload for code changes
python3 demo/run_demo.py --reload

# combine options
python3 demo/run_demo.py --no-reset-first --reload --seed-interval-seconds 0
```

To stop the demo, press `Ctrl+C` in the terminal running `run_demo.py`.
AI triage (POST /alerts/{id}/triage or “Run Triage” in UI) needs an API key and model.
Set environment variables before starting backend:
```shell
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4.1-mini"
python3 demo/run_demo.py --no-reset-first --reload
```

Optional overrides:
- `SIEM_LLM_API_KEY` (preferred over `OPENAI_API_KEY`)
- `SIEM_LLM_MODEL` (preferred over `OPENAI_MODEL`)
- `SIEM_LLM_BASE_URL` (default: `https://api.openai.com/v1`)
- `SIEM_LLM_TIMEOUT_SECONDS` (default: `45`)
- `SIEM_TRIAGE_PROMPT_PATH` (custom prompt file)
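The precedence rules above can be expressed as a small resolver. This helper is a sketch of the documented behavior, not code from the project:

```python
import os

# Sketch of the documented precedence: SIEM_LLM_* overrides win over the
# OPENAI_* fallbacks, with the stated defaults for base URL and timeout.
def llm_config(env=None):
    env = os.environ if env is None else env
    return {
        "api_key": env.get("SIEM_LLM_API_KEY") or env.get("OPENAI_API_KEY"),
        "model": env.get("SIEM_LLM_MODEL") or env.get("OPENAI_MODEL"),
        "base_url": env.get("SIEM_LLM_BASE_URL", "https://api.openai.com/v1"),
        "timeout": float(env.get("SIEM_LLM_TIMEOUT_SECONDS", "45")),
    }

cfg = llm_config({"OPENAI_API_KEY": "sk-test", "SIEM_LLM_MODEL": "my-model"})
```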
Note: if a backend is already running, `run_demo.py` reuses it. If you changed env vars, restart the backend so it picks them up.
Core endpoints:
- `GET /health`
- `POST /ingest`
- `GET /events?limit=...`
- `GET /alerts?limit=...`
- `GET /alerts/{alert_id}`
- `GET /alerts/{alert_id}/case-note/latest`
- `POST /alerts/{alert_id}/triage`
UI endpoints:
- `GET /ui`
- `GET /ui/alerts/{alert_id}`
- static assets under `/ui-static/*`
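For quick scripting against these endpoints, a tiny standard-library client is enough. This helper is a sketch; it returns the parsed JSON, or `None` when the backend is not reachable:

```python
import json
import urllib.error
import urllib.request

# Sketch of a minimal client for the demo API; returns parsed JSON,
# or None if the backend at `base` is not reachable.
def get_json(path, base="http://127.0.0.1:8000", timeout=5):
    try:
        with urllib.request.urlopen(base + path, timeout=timeout) as resp:
            return json.loads(resp.read())
    except (urllib.error.URLError, OSError, ValueError):
        return None

alerts = get_json("/alerts?limit=10")  # None unless the demo is running
health = get_json("/health")
```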
Current deterministic rules:
- `demo.office_powershell_encoded`
- `demo.password_spray_success`
- `demo.macos_sudo_after_ssh`
- `demo.suspicious_dns_outbound_connect`
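To give a flavor of what a deterministic rule does, here is a simplified sketch in the spirit of `demo.password_spray_success`. The threshold, field names, and exact logic are assumptions, not the shipped rule:

```python
from collections import defaultdict

# Simplified sketch: flag a successful logon for an account that has
# accumulated several failures. Fields and threshold are hypothetical.
def password_spray_success(events, fail_threshold=5):
    failures = defaultdict(int)
    alerts = []
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        key = (ev["host"], ev["user"])
        if ev["action"] == "logon_failure":
            failures[key] += 1
        elif ev["action"] == "logon_success" and failures[key] >= fail_threshold:
            alerts.append({
                "rule_id": "demo.password_spray_success",
                "host": ev["host"],
                "user": ev["user"],
                "failed_attempts": failures[key],
            })
    return alerts

events = [
    {"timestamp": i, "host": "srv-01", "user": "alice", "action": "logon_failure"}
    for i in range(5)
]
events.append({"timestamp": 9, "host": "srv-01", "user": "alice", "action": "logon_success"})
found = password_spray_success(events)
```

Because the rules are deterministic, re-running them over the same seeded events always yields the same alerts, which keeps the demo reproducible.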
SQLite database path: `backend/data/siem_demo.db`
Primary tables:
- `events`
- `alerts`
- `case_notes`
- `tool_calls`
- `audit_log`
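You can inspect these tables directly with Python's `sqlite3` module. The query below assumes only that an `alerts` table exists, and returns an empty list when the database or table is missing (e.g. before the first seed):

```python
import sqlite3

# Fetch up to `limit` rows from the alerts table of the demo database.
# Returns [] if the database/table does not exist yet.
def recent_alert_rows(db_path="backend/data/siem_demo.db", limit=5):
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute("SELECT * FROM alerts LIMIT ?", (limit,))
        return cur.fetchall()
    except sqlite3.OperationalError:
        return []
    finally:
        con.close()
```

Run from the repo root, `recent_alert_rows()` returns up to five alert rows from the seeded database.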
Run an end-to-end check (health, alerts, triage output, tool call logging, case note persistence):
```shell
bash demo/smoke_test.sh
```

The smoke test expects:
- backend running on `localhost:8000` (or overridden env vars)
- seeded data with at least one alert
- valid LLM credentials if triage is enabled
The backend triage path calls tool functions directly in-process.
You can also run the MCP server separately (stdio transport):
```shell
python -m mcp_server.server
```

Exposed tools:
- `get_alert(alert_id)`
- `search_events(tenant_id, query, time_start, time_end, limit)`
- `retrieve_runbook(rule_id)`
- `create_case_note(alert_id, content, summary)`
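The in-process pattern can be pictured as below. The tool body is a stub, and the logging shape only mirrors the idea of the `tool_calls` table, not its actual columns:

```python
# Stub standing in for mcp_server's get_alert tool (illustrative only).
def get_alert(alert_id):
    return {"id": alert_id, "rule_id": "demo.password_spray_success"}

tool_calls = []

# Sketch of calling a tool function directly and recording the call,
# as the backend triage path does in-process.
def call_tool(name, fn, **kwargs):
    result = fn(**kwargs)
    tool_calls.append({"tool": name, "args": kwargs})
    return result

alert = call_tool("get_alert", get_alert, alert_id=1)
```

Recording every tool invocation is what makes the triage output auditable: each cited piece of evidence corresponds to a logged call.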
Triage failed: Missing LLM API key
- ensure `OPENAI_API_KEY` (or `SIEM_LLM_API_KEY`) is set
- ensure `OPENAI_MODEL` (or `SIEM_LLM_MODEL`) is set
- restart the backend after updating env vars
No alerts in UI
- seed data: `python3 demo/reset_demo.py`
- refresh `/ui`
Port 8000 already in use
- stop the existing process on `:8000`, then rerun the launcher