
SIEM Demo

A local, deterministic SIEM demo application that ingests synthetic security telemetry, runs correlation detections, and supports AI-assisted alert triage.

Objective

This project demonstrates an end-to-end SOC workflow in a small local stack:

  • generate realistic synthetic events (Windows, macOS, DNS, firewall)
  • normalize raw payloads into a canonical schema
  • run deterministic detections to create alerts
  • inspect alerts in a lightweight UI
  • run AI triage that cites evidence and saves case notes

It is a demo and learning project, not a production SIEM.

Architecture

The app is split into four parts:

  • backend/: FastAPI app + SQLite storage + normalization + detection engine + triage endpoint
  • frontend/: static HTML/CSS/JS UI served by backend at /ui
  • demo/: synthetic data seeding, reset, one-command launcher, smoke test
  • mcp_server/: MCP-compatible tool implementations (and optional standalone MCP server)

Main runtime flow:

  1. Synthetic events are inserted into SQLite.
  2. Detection rules evaluate event streams and create alerts.
  3. UI reads /alerts and alert detail endpoints.
  4. Triage endpoint gathers evidence via MCP tools and calls an LLM.
  5. Triage output is validated and persisted as a case note.
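As a rough sketch of steps 1–2, normalization and a deterministic detection might look like the following. Field names, the canonical schema, and the rule logic here are illustrative assumptions, not the actual backend implementation:

```python
# Illustrative sketch only: the real schema and rules live in backend/app/.
from datetime import datetime, timezone
from typing import Optional


def normalize(raw: dict) -> dict:
    """Map a raw vendor payload onto a minimal canonical event (hypothetical fields)."""
    return {
        "tenant_id": raw.get("tenant", "default"),
        "source": raw.get("log_source", "unknown"),
        "timestamp": raw.get("ts") or datetime.now(timezone.utc).isoformat(),
        "user": raw.get("user") or raw.get("account"),
        "action": raw.get("event_type", "unknown"),
    }


def encoded_powershell_rule(event: dict) -> Optional[dict]:
    """Fire when a command line invokes PowerShell with an encoded payload."""
    cmd = event.get("command_line", "").lower()
    if "powershell" in cmd and "-enc" in cmd:
        return {"rule_id": "demo.office_powershell_encoded", "event": event}
    return None
```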

Repo Layout

SIEM/
  backend/
    app/
    data/
    prompts/
  frontend/
  demo/
    run_demo.py
    reset_demo.py
    seed_scenarios.py
    smoke_test.sh
  mcp_server/

Requirements

  • Python 3.9+
  • curl and sqlite3 (for smoke test)
  • Optional: jq (improves smoke test parsing)
  • Optional: OpenAI-compatible API credentials for AI triage

Build / Install

From repo root (SIEM/):

python3 -m venv .venv
source .venv/bin/activate
pip install -r backend/requirements.txt -r mcp_server/requirements.txt

Run The App (One Command)

From repo root:

python3 demo/run_demo.py

By default, this command:

  • starts the backend on 127.0.0.1:8000 (or reuses an existing healthy backend)
  • resets demo tables and seeds initial synthetic data
  • keeps seeding synthetic data every 45 seconds

UI URL:

  • http://127.0.0.1:8000/ui

Useful Launch Options

# seed once, disable continuous seeding
python3 demo/run_demo.py --seed-interval-seconds 0

# keep existing DB rows, still run app
python3 demo/run_demo.py --no-reset-first

# enable backend autoreload for code changes
python3 demo/run_demo.py --reload

# combine options
python3 demo/run_demo.py --no-reset-first --reload --seed-interval-seconds 0

Stop The App / Seeding

  • Press Ctrl+C in the terminal running run_demo.py.

AI Triage Setup

AI triage (POST /alerts/{id}/triage or “Run Triage” in UI) needs an API key and model.

Set the environment variables before starting the backend:

export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="gpt-4.1-mini"
python3 demo/run_demo.py --no-reset-first --reload

Optional overrides:

  • SIEM_LLM_API_KEY (preferred override over OPENAI_API_KEY)
  • SIEM_LLM_MODEL (preferred override over OPENAI_MODEL)
  • SIEM_LLM_BASE_URL (default: https://api.openai.com/v1)
  • SIEM_LLM_TIMEOUT_SECONDS (default: 45)
  • SIEM_TRIAGE_PROMPT_PATH (custom prompt file)
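The precedence described above could be resolved roughly like this (a sketch, not the backend's actual config loader):

```python
import os


def resolve_llm_config(env=os.environ):
    """Prefer SIEM_LLM_* overrides, fall back to OPENAI_*, then defaults.

    Illustrative only; the real backend may load settings differently.
    """
    return {
        "api_key": env.get("SIEM_LLM_API_KEY") or env.get("OPENAI_API_KEY"),
        "model": env.get("SIEM_LLM_MODEL") or env.get("OPENAI_MODEL"),
        "base_url": env.get("SIEM_LLM_BASE_URL", "https://api.openai.com/v1"),
        "timeout": float(env.get("SIEM_LLM_TIMEOUT_SECONDS", "45")),
    }
```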

Note: if a backend is already running, run_demo.py reuses it. If you changed environment variables, restart the backend so it picks them up.

API Endpoints

Core endpoints:

  • GET /health
  • POST /ingest
  • GET /events?limit=...
  • GET /alerts?limit=...
  • GET /alerts/{alert_id}
  • GET /alerts/{alert_id}/case-note/latest
  • POST /alerts/{alert_id}/triage

UI endpoints:

  • GET /ui
  • GET /ui/alerts/{alert_id}
  • static assets under /ui-static/*
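A small helper for composing these endpoint URLs against the default launcher address (the results can be passed to curl or urllib.request):

```python
from urllib.parse import urlencode

BASE = "http://127.0.0.1:8000"  # default address used by demo/run_demo.py


def alerts_url(limit=50):
    """URL for GET /alerts?limit=..."""
    return f"{BASE}/alerts?{urlencode({'limit': limit})}"


def triage_url(alert_id):
    """URL for POST /alerts/{alert_id}/triage."""
    return f"{BASE}/alerts/{alert_id}/triage"
```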

Detection Rules Included

Current deterministic rules:

  • demo.office_powershell_encoded
  • demo.password_spray_success
  • demo.macos_sudo_after_ssh
  • demo.suspicious_dns_outbound_connect
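For illustration, a correlation in the spirit of demo.password_spray_success might look like this. The field names and threshold are assumptions, not the shipped rule:

```python
from collections import defaultdict


def password_spray_success(events, threshold=5):
    """Flag a successful logon from a source IP that previously failed against
    many distinct accounts (hypothetical sketch of a spray detection)."""
    failed_users = defaultdict(set)  # src_ip -> set of users with failed logons
    alerts = []
    for ev in events:  # events assumed sorted by timestamp
        ip = ev["src_ip"]
        if ev["action"] == "logon_failure":
            failed_users[ip].add(ev["user"])
        elif ev["action"] == "logon_success" and len(failed_users[ip]) >= threshold:
            alerts.append({
                "rule_id": "demo.password_spray_success",
                "src_ip": ip,
                "user": ev["user"],
            })
    return alerts
```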

Data Storage

SQLite database path:

  • backend/data/siem_demo.db

Primary tables:

  • events
  • alerts
  • case_notes
  • tool_calls
  • audit_log
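A minimal sketch of how a few of these tables could be created (column names are illustrative; the real schema is defined in backend/):

```python
import sqlite3

# Hypothetical subset of the schema; the demo itself uses
# backend/data/siem_demo.db rather than an in-memory database.
SCHEMA = """
CREATE TABLE IF NOT EXISTS events (
    id INTEGER PRIMARY KEY,
    tenant_id TEXT NOT NULL,
    source TEXT,
    timestamp TEXT,
    payload TEXT
);
CREATE TABLE IF NOT EXISTS alerts (
    id INTEGER PRIMARY KEY,
    rule_id TEXT NOT NULL,
    event_id INTEGER REFERENCES events(id),
    created_at TEXT
);
CREATE TABLE IF NOT EXISTS case_notes (
    id INTEGER PRIMARY KEY,
    alert_id INTEGER REFERENCES alerts(id),
    summary TEXT,
    content TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```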

Smoke Test

Run an end-to-end check (health, alerts, triage output, tool call logging, case note persistence):

bash demo/smoke_test.sh

The smoke test expects:

  • backend running on localhost:8000 (or overridden env vars)
  • seeded data with at least one alert
  • valid LLM credentials if triage is enabled

Optional: Standalone MCP Server

The backend triage path calls tool functions directly in-process.
You can also run the MCP server separately (stdio transport):

python -m mcp_server.server

Exposed tools:

  • get_alert(alert_id)
  • search_events(tenant_id, query, time_start, time_end, limit)
  • retrieve_runbook(rule_id)
  • create_case_note(alert_id, content, summary)
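An in-process dispatcher over tools like these could be sketched as follows. The bodies here are stubs; the real implementations in mcp_server/ query SQLite:

```python
# Hypothetical tool stubs matching the names above (not the real implementations).
def get_alert(alert_id):
    return {"alert_id": alert_id, "rule_id": "demo.example"}


def retrieve_runbook(rule_id):
    return f"Runbook for {rule_id}"


TOOLS = {"get_alert": get_alert, "retrieve_runbook": retrieve_runbook}


def call_tool(name, **kwargs):
    """Dispatch a tool call by name, rejecting unknown tools."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```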

Troubleshooting

Triage failed: Missing LLM API key

  • ensure OPENAI_API_KEY (or SIEM_LLM_API_KEY) is set
  • ensure OPENAI_MODEL (or SIEM_LLM_MODEL) is set
  • restart the backend after updating env vars

No alerts in UI

  • seed data: python3 demo/reset_demo.py
  • refresh /ui

Port 8000 already in use

  • stop existing process on :8000, then rerun launcher
