This guide provides comprehensive debugging techniques and tools to help troubleshoot Visor configurations, checks, and transformations.
- Running Visor Locally
- Debug Mode
- Debugging JavaScript Expressions
- Debugging Liquid Templates
- Using the Logger Check
- Common Debugging Patterns
- Author Permission Functions
- Troubleshooting Tips
- Tracing with OpenTelemetry
- Debug Visualizer
## Running Visor Locally

```bash
# Install dependencies
npm install

# Build the project
npm run build

# Run the CLI
./dist/cli-main.js --help
# or
./dist/index.js --help
```

```bash
# Run with a config file
./dist/index.js --config ./examples/calculator-config.yaml

# Run specific checks only
./dist/index.js --config .visor.yaml --check security,lint

# Run with debug output
./dist/index.js --config .visor.yaml --debug

# Output in different formats
./dist/index.js --config .visor.yaml --output json
./dist/index.js --config .visor.yaml --output markdown
./dist/index.js --config .visor.yaml --output sarif

# Pass inline messages for human-input checks
./dist/index.js --config ./examples/calculator-config.yaml --message "42"
```

The `--tui` flag enables a persistent terminal interface for any workflow. The workflow runs immediately, and you can re-run it by typing new messages after completion:
```bash
# Start with TUI mode
./dist/index.js --tui --config ./examples/calculator-config.yaml

# TUI with debug output (logs go to second tab)
./dist/index.js --tui --config .visor.yaml --debug
```

TUI Features:
- Chat Tab: Shows workflow prompts and results in a chat-like interface
- Logs Tab: Press `Shift+Tab` or `2` to switch to the logs view
- Traces Tab: Real-time OpenTelemetry trace visualization with execution tree
- Persistent Input: Type messages at any time to interact with the workflow
- Re-run Workflows: After completion, type a new message to re-run
TUI Key Bindings:
| Key | Action |
|---|---|
| `Enter` | Submit input |
| `Shift+Tab` | Cycle between Chat, Logs, and Traces tabs |
| `1` / `2` / `3` | Switch to Chat / Logs / Traces tab directly |
| `e` | Toggle engine state visibility (Traces tab only) |
| `Escape` | Clear input |
| `Ctrl+C` | Exit / Abort workflow |
| `q` | Exit (when workflow is complete) |
Traces Tab Features:
- Real-time execution tree showing check hierarchy
- forEach iterations grouped under parent check with index
- IN/OUT/ERR lines showing inputs, outputs, and errors for each span
- Press `e` to toggle engine state spans (LevelDispatch, WavePlanning, etc.)
- Engine states are hidden by default to focus on your checks
The debug server provides a web-based UI for stepping through workflow execution:
```bash
# Start with debug server
./dist/index.js --config .visor.yaml --debug-server --debug-port 3456

# For headless/CI environments (skip auto-opening browser)
VISOR_NOBROWSER=true ./dist/index.js --config .visor.yaml --debug-server
```

Open http://localhost:3456 to view the visual debugger. You can:
- Click "Start" to begin execution
- Pause/resume workflow execution
- View spans and timing information
- See check outputs and errors
```bash
# TUI + Debug mode (verbose logging in logs tab)
./dist/index.js --tui --config .visor.yaml --debug

# Debug server + Debug mode (full visibility)
./dist/index.js --config .visor.yaml --debug-server --debug

# Full tracing with Grafana LGTM (or any OTLP-compatible backend)
VISOR_TELEMETRY_ENABLED=true \
VISOR_TELEMETRY_SINK=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
./dist/index.js --config .visor.yaml --debug
```

- Use TUI for interactive workflows: When developing workflows with human-input checks, TUI mode provides the best experience.
- Check the Logs and Traces tabs: In TUI mode, press `Shift+Tab` to cycle tabs. Use the Logs tab for detailed execution logs and the Traces tab for the visual execution flow.
- Use JSON output for debugging: `--output json` gives you the full result structure to inspect.
- Watch mode for rapid iteration:

  ```bash
  # In one terminal - watch and rebuild
  npm run build -- --watch

  # In another terminal - run your workflow
  ./dist/index.js --tui --config ./my-workflow.yaml
  ```

- Run tests for specific features:

  ```bash
  npm test -- --testPathPattern="human-input"
  npm test -- --testPathPattern="memory"
  ```
## Debug Mode

Enable debug mode to see detailed execution information:
```bash
# CLI
visor --check all --debug
```

```yaml
# GitHub Action
- uses: probelabs/visor-action@v1
  with:
    debug: true
```

Debug mode provides:
- Detailed AI provider interactions
- Template rendering details
- Expression evaluation results
- Dependency resolution paths
- Error stack traces
## Debugging JavaScript Expressions

The `log()` function is available in JavaScript expressions for debugging:
```yaml
steps:
  analyze-bugs:
    type: ai
    depends_on: [fetch-tickets]
    if: |
      log("Full outputs object:", outputs);
      log("Ticket data:", outputs["fetch-tickets"]);
      log("Issue type:", outputs["fetch-tickets"]?.issueType);
      outputs["fetch-tickets"]?.issueType === "Bug"
    prompt: "Analyze this bug"
```

```yaml
steps:
  security-check:
    type: ai
    prompt: "Check for security issues"
    fail_if: |
      log("Checking issues:", output.issues);
      log("Critical count:", output.issues.filter(i => i.severity === "critical").length);
      output.issues.filter(i => i.severity === "critical").length > 0
```

```yaml
steps:
  process-data:
    type: command
    exec: curl -s https://api.example.com/data
    transform_js: |
      log("Raw response:", output);

      // Parse JSON with error handling
      let data;
      try {
        data = JSON.parse(output);
        log("Parsed successfully:", data);
      } catch (e) {
        log("Parse error:", e.message);
        return { error: e.message };
      }

      // Transform the data
      const transformed = data.items.map(item => ({
        id: item.id,
        score: item.metrics.score
      }));
      log("Transformed result:", transformed);
      return transformed;
```

The `log()` function prefixes output with 🔍 for easy identification:

```
🔍 Debug: Full outputs object: { 'fetch-tickets': { issueType: 'Bug', priority: 'High' } }
🔍 Debug: Issue type: Bug
```
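To see how that prefixing behaves with mixed arguments, here is a minimal sketch of a `log()`-style helper (illustrative only, not Visor's actual implementation): strings pass through as-is, and objects are serialized so nested structures stay readable.

```javascript
// Sketch of a log() helper that prefixes output with "🔍 Debug:"
// and serializes non-string arguments (assumed behavior, for illustration).
function log(...args) {
  const parts = args.map((a) =>
    typeof a === "string" ? a : JSON.stringify(a)
  );
  return `🔍 Debug: ${parts.join(" ")}`;
}

console.log(log("Issue type:", "Bug"));
console.log(log("Ticket data:", { issueType: "Bug", priority: "High" }));
```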
## Debugging Liquid Templates

The `json` filter is invaluable for inspecting data structures:
```yaml
steps:
  debug-template:
    type: log
    message: |
      === Debug Information ===

      PR Context:
      {{ pr | json }}

      Dependency Outputs:
      {{ outputs | json }}

      Environment:
      {{ env | json }}

      Files Changed:
      {{ files | json }}
```

```liquid
# Check if a variable exists
{% if outputs.fetch-tickets %}
Tickets found: {{ outputs.fetch-tickets | json }}
{% else %}
No tickets data available
{% endif %}

# Debug array access
{% for item in outputs.fetch-items %}
Item {{ forloop.index }}: {{ item | json }}
{% endfor %}

# Debug nested access
Nested value: {{ outputs["complex-check"]["data"]["nested"]["value"] | default: "Not found" }}
```

## Using the Logger Check

The logger check type is designed for debugging workflows:
```yaml
steps:
  debug-dependencies:
    type: logger
    depends_on: [fetch-data, process-data]
    message: |
      === Debugging Dependency Flow ===

      Fetch Data Output:
      {{ outputs["fetch-data"] | json }}

      Processed Data:
      {{ outputs["process-data"] | json }}

      PR Number: {{ pr.number }}
      Files Count: {{ files | size }}
    level: info  # info, warning, error, debug
    include_dependencies: true
    include_pr_context: true
    include_metadata: true
```

| Option | Description | Default |
|---|---|---|
| `message` | Liquid template for the log message | Required |
| `level` | Log level: debug, info, warning, error | info |
| `include_dependencies` | Include dependency results | true |
| `include_pr_context` | Include PR information | true |
| `include_metadata` | Include execution metadata | true |
```yaml
steps:
  fetch-items:
    type: command
    exec: echo '[{"id":1,"name":"A"},{"id":2,"name":"B"}]'
    transform_js: |
      const items = JSON.parse(output);
      log("Total items:", items.length);
      items.forEach((item, index) => {
        log(`Item ${index}:`, item);
      });
      return items;
    forEach: true

  process-item:
    type: logger
    depends_on: [fetch-items]
    message: |
      Processing item: {{ outputs["fetch-items"] | json }}
      All processed so far: {{ outputs.history["fetch-items"] | json }}
```

Note: Use `outputs.history['check-name']` to access all previous iteration outputs. See Output History for tracking outputs across loop iterations and forEach processing.
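The relationship between the current value and the accumulated history can be sketched in plain JavaScript (an illustration of the documented semantics, not Visor's internals): `outputs["fetch-items"]` holds only the current iteration's item, while `outputs.history["fetch-items"]` grows with every iteration.

```javascript
// Simulate forEach over the two items from the example above.
const items = [{ id: 1, name: "A" }, { id: 2, name: "B" }];

const outputs = { history: { "fetch-items": [] } };

for (const item of items) {
  outputs["fetch-items"] = item;             // current iteration value
  outputs.history["fetch-items"].push(item); // all values seen so far
  console.log("Processing item:", JSON.stringify(outputs["fetch-items"]));
}

console.log("Iterations recorded:", outputs.history["fetch-items"].length);
```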
Note on forEach outputs: When a check uses forEach, its output is automatically unwrapped in both templates and JavaScript contexts, giving you direct access to the array. This makes it easier to work with the data:
```yaml
steps:
  analyze-tickets:
    type: command
    depends_on: [fetch-tickets]
    if: |
      // Direct access to the array from forEach check
      log("Tickets:", outputs["fetch-tickets"]);
      outputs["fetch-tickets"].some(t => t.issueType === "Bug")
    exec: echo "Processing bugs..."
```

## Common Debugging Patterns

```yaml
steps:
  conditional-check:
    type: command
    exec: echo "test"
    if: |
      // Debug all available context
      log("Event:", event);
      log("Branch:", branch);
      log("Files changed:", filesChanged);
      log("Outputs available:", Object.keys(outputs));

      // Complex condition with debugging
      const shouldRun = branch === "main" && filesChanged.length > 0;
      log("Should run?", shouldRun);
      return shouldRun;
```
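Since `if` expressions are plain JavaScript, a quick way to debug one is to extract it as a standalone function and feed it sample context (the values below are hypothetical):

```javascript
// The condition from the example above, extracted for local testing.
function shouldRun({ branch, filesChanged }) {
  return branch === "main" && filesChanged.length > 0;
}

console.log(shouldRun({ branch: "main", filesChanged: ["src/a.js"] })); // true
console.log(shouldRun({ branch: "dev", filesChanged: ["src/a.js"] }));  // false
```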
```yaml
steps:
  fetch-raw:
    type: command
    exec: curl -s https://api.example.com/data
    transform_js: |
      log("Step 1 - Raw:", output.substring(0, 100));
      return output;

  parse-json:
    type: command
    depends_on: [fetch-raw]
    exec: echo '{{ outputs["fetch-raw"] }}'
    transform_js: |
      log("Step 2 - Input:", output.substring(0, 100));
      const parsed = JSON.parse(output);
      log("Step 2 - Parsed:", parsed);
      return parsed;

  extract-data:
    type: logger
    depends_on: [parse-json]
    message: |
      Final data: {{ outputs["parse-json"] | json }}
```
```yaml
steps:
  debug-ai-context:
    type: logger
    depends_on: [fetch-context]
    message: |
      === AI Prompt Context ===
      Context data: {{ outputs["fetch-context"] | json }}
      Files to analyze: {{ files | size }}
      {% for file in files %}
      - {{ file.path }}: {{ file.additions }} additions, {{ file.deletions }} deletions
      {% endfor %}

  ai-analysis:
    type: ai
    depends_on: [debug-ai-context, fetch-context]
    prompt: |
      Analyze the following data:
      {{ outputs["fetch-context"] | json }}
```

When outputs access fails, debug the structure:
```yaml
steps:
  debug-outputs:
    type: command
    depends_on: [previous-check]
    exec: echo "debugging"
    transform_js: |
      log("All outputs:", outputs);
      log("Output keys:", Object.keys(outputs));
      log("Previous check type:", typeof outputs["previous-check"]);
      log("Is array?", Array.isArray(outputs["previous-check"]));

      // Debug output history
      log("History available:", !!outputs.history);
      log("History keys:", Object.keys(outputs.history || {}));
      log("Previous check history length:", outputs.history["previous-check"]?.length);
      return "debug complete";
```

Tip: Use `outputs` for current values and `outputs.history` to see all previous values from loop iterations or retries. See Output History for more details.
```yaml
transform_js: |
  log("Raw output type:", typeof output);
  log("First 50 chars:", output.substring(0, 50));

  // Safe JSON parsing
  try {
    const data = JSON.parse(output);
    log("Parse successful");
    return data;
  } catch (e) {
    log("Parse failed:", e.message);
    log("Invalid JSON:", output);
    return { error: "Invalid JSON", raw: output };
  }
```
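The same safe-parse pattern can be pulled out into a standalone function and exercised with sample inputs before wiring it into a `transform_js` block:

```javascript
// Safe JSON parsing: return a structured error object instead of throwing,
// so downstream checks can branch on the result.
function safeParse(output) {
  try {
    return JSON.parse(output);
  } catch (e) {
    return { error: "Invalid JSON", raw: output };
  }
}

console.log(safeParse('{"ok":true}'));    // parsed object
console.log(safeParse("not json").error); // "Invalid JSON"
```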
```yaml
steps:
  debug-env:
    type: logger
    message: |
      Environment Variables:
      {% for key in env %}
      - {{ key }}: {{ env[key] }}
      {% endfor %}

      GitHub Context:
      - Event: {{ event.event_name }}
      - Action: {{ event.action }}
      - Repository: {{ event.repository }}
```
```yaml
steps:
  debug-files:
    type: command
    exec: echo "checking files"
    if: |
      const jsFiles = filesChanged.filter(f => f.endsWith('.js'));
      const tsFiles = filesChanged.filter(f => f.endsWith('.ts'));
      log("JS files:", jsFiles);
      log("TS files:", tsFiles);
      log("Has source changes:", jsFiles.length > 0 || tsFiles.length > 0);
      return jsFiles.length > 0 || tsFiles.length > 0;
```
```yaml
steps:
  validate-output:
    type: command
    exec: echo '{"items":[1,2,3]}'
    transform_js: |
      const data = JSON.parse(output);

      // Validate structure
      log("Has items?", "items" in data);
      log("Items is array?", Array.isArray(data.items));
      log("Items count:", data.items?.length);

      if (!data.items || !Array.isArray(data.items)) {
        log("Invalid structure:", data);
        throw new Error("Expected items array");
      }
      return data.items;
    schema:
      type: array
      items:
        type: number
```

- Use Progressive Debugging: Start with high-level logs, then add more detail as needed
- Clean Up Logs: Remove or comment out `log()` calls in production configs
- Log at Boundaries: Add logs at the start/end of transforms and conditions
- Include Context: Log not just values but also their types and structures
- Use Structured Output: Return objects with error details rather than throwing errors
## Troubleshooting Tips

Set these environment variables for additional debug output:

```bash
# Enable verbose debug output (used in diff processing and other internals)
export DEBUG=1
# or
export VERBOSE=1

# Enable telemetry and tracing
export VISOR_TELEMETRY_ENABLED=true
export VISOR_TELEMETRY_SINK=file  # or otlp, console

# Set trace output directory
export VISOR_TRACE_DIR=output/traces

# For headless/CI environments (skip auto-opening browser)
export VISOR_NOBROWSER=true
```

See Telemetry Setup for detailed configuration of tracing and metrics.
```yaml
# Wrong - check has no dependencies
steps:
  my-check:
    type: command
    exec: echo "{{ outputs.other }}"  # Error: outputs is undefined

# Correct - add depends_on
steps:
  my-check:
    type: command
    depends_on: [other]
    exec: echo "{{ outputs.other }}"  # Now outputs is available
```

```yaml
# Debug the structure first
transform_js: |
  log("Output structure:", output);
  log("Has data property?", output && output.data !== undefined);

  // Safe access with optional chaining
  const value = output?.data?.items?.[0]?.value;
  log("Extracted value:", value);
  return value || "default";
```

```yaml
# Debug the expression step by step
if: |
  log("Step 1 - outputs exists:", outputs !== undefined);
  log("Step 2 - has key:", "my-check" in outputs);
  log("Step 3 - value:", outputs["my-check"]);

  // Break complex expressions into steps
  const hasData = outputs && outputs["my-check"];
  const isValid = hasData && outputs["my-check"].status === "success";
  log("Final result:", isValid);
  return isValid;
```

## Author Permission Functions

📖 For complete documentation, examples, and best practices, see the Author Permissions Guide.
Visor provides helper functions for checking the PR author's permission level in JavaScript expressions (`if`, `fail_if`, `transform_js`). These functions use GitHub's `author_association` field.
From highest to lowest privilege:

- `OWNER` - Repository owner
- `MEMBER` - Organization member
- `COLLABORATOR` - Invited collaborator
- `CONTRIBUTOR` - Has contributed before
- `FIRST_TIME_CONTRIBUTOR` - First PR to this repo
- `FIRST_TIMER` - First GitHub contribution ever
- `NONE` - No association
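The hierarchy above maps naturally onto an index comparison. Here is a minimal sketch of the `>=` logic (illustrative only: the `LEVELS` ordering comes from this list, and the function merely mirrors the shape of the documented `hasMinPermission` helper):

```javascript
// Ordered from lowest to highest privilege, per the hierarchy above.
const LEVELS = [
  "NONE",
  "FIRST_TIMER",
  "FIRST_TIME_CONTRIBUTOR",
  "CONTRIBUTOR",
  "COLLABORATOR",
  "MEMBER",
  "OWNER",
];

// "At least" means the author's rank is >= the required rank.
function hasMinPermission(authorAssociation, required) {
  return LEVELS.indexOf(authorAssociation) >= LEVELS.indexOf(required);
}

console.log(hasMinPermission("MEMBER", "COLLABORATOR")); // true
console.log(hasMinPermission("CONTRIBUTOR", "MEMBER"));  // false
```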
Check if the author has at least the specified permission level (`>=` logic):

```yaml
steps:
  # Run security scan for external contributors only
  security-scan:
    type: command
    exec: npm run security-scan
    if: "!hasMinPermission('MEMBER')"  # Not owner or member

  # Auto-approve for trusted contributors
  auto-approve:
    type: command
    exec: gh pr review --approve
    if: "hasMinPermission('COLLABORATOR')"  # Collaborators and above
```

Boolean checks for specific or hierarchical permission levels:
```yaml
steps:
  # Different workflows based on permission
  code-review:
    type: ai
    prompt: "Review code"
    if: |
      log("Author is owner:", isOwner());
      log("Author is member:", isMember());
      log("Author is collaborator:", isCollaborator());
      // Members can skip review
      !isMember()

  # Block sensitive file changes from non-members
  sensitive-files-check:
    type: command
    exec: echo "Checking sensitive files..."
    fail_if: |
      !isMember() && files.some(f =>
        f.filename.startsWith('secrets/') ||
        f.filename === '.env' ||
        f.filename.endsWith('.key')
      )
```

Check if the author is a first-time contributor:
```yaml
steps:
  welcome-message:
    type: command
    exec: gh pr comment --body "Welcome to the project!"
    if: "isFirstTimer()"

  require-review:
    type: command
    exec: gh pr review --request-changes
    fail_if: "isFirstTimer() && outputs.issues?.length > 5"
```

When running locally (not in GitHub Actions):

- All permission checks return `true` (the author is treated as owner)
- `isFirstTimer()` returns `false`
- This prevents blocking local development and testing
```yaml
steps:
  # Run expensive security scan only for external contributors
  deep-security-scan:
    type: command
    exec: npm run security-scan:deep
    if: "!hasMinPermission('MEMBER')"

  # Quick scan for trusted members
  quick-security-scan:
    type: command
    exec: npm run security-scan:quick
    if: "hasMinPermission('MEMBER')"
```

```yaml
steps:
  require-approval:
    type: command
    exec: gh pr review --request-changes
    fail_if: |
      // First-timers need clean PRs
      (isFirstTimer() && totalIssues > 0) ||
      // Non-collaborators need approval for large changes
      (!hasMinPermission('COLLABORATOR') && pr.totalAdditions > 500)
```

```yaml
steps:
  auto-merge:
    type: command
    depends_on: [tests, lint, security-scan]
    exec: gh pr merge --auto --squash
    if: |
      // Only auto-merge for collaborators with passing checks
      hasMinPermission('COLLABORATOR') &&
      outputs.tests.error === false &&
      outputs.lint.error === false &&
      outputs["security-scan"].criticalIssues === 0
```

## Tracing with OpenTelemetry

Visor supports OpenTelemetry tracing for deep execution visibility. Enable tracing to see:
- Root span: `visor.run` - one per CLI/Slack execution
- State spans: `engine.state.*` with `wave`, `wave_kind`, `session_id` attributes
- Check spans: `visor.check.<checkId>` with `visor.check.id`, `visor.check.type`, `visor.foreach.index` (for map fanout)
- Routing decisions: `visor.routing` events with `trigger`, `action`, `source`, `target`, `scope`, `goto_event`
- Wave visibility: `engine.state.level_dispatch` includes `level_size` and `level_checks_preview`
The recommended local observability stack is Grafana LGTM — a single Docker container bundling Grafana, Tempo (traces), Loki (logs), Prometheus (metrics), and an OpenTelemetry Collector:
```bash
# Start Grafana LGTM locally (traces + logs + metrics in one container)
docker run -d --name grafana-otel \
  -p 3000:3000 \
  -p 4317:4317 \
  -p 4318:4318 \
  -v grafana-otel-data:/data \
  grafana/otel-lgtm:latest

# Run Visor with tracing enabled
VISOR_TELEMETRY_ENABLED=true \
VISOR_TELEMETRY_SINK=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
visor --config .visor.yaml

# View traces, logs, and metrics at http://localhost:3000
# Default credentials: admin / admin
```

For complete tracing setup and configuration, see Telemetry Setup.
## Debug Visualizer

Visor includes a built-in debug visualizer: a lightweight HTTP server that streams OpenTelemetry spans during execution and provides control endpoints for pause/resume/stop.
```bash
# Start with debug server
visor --config .visor.yaml --debug-server --debug-port 3456

# For CI/headless environments
VISOR_NOBROWSER=true visor --config .visor.yaml --debug-server --debug-port 3456
```

- `GET /api/status` - Execution state and readiness
- `GET /api/spans` - Current in-memory spans (live view)
- `POST /api/start` - Begin execution
- `POST /api/pause` - Pause scheduling (in-flight work continues)
- `POST /api/resume` - Resume scheduling
- `POST /api/stop` - Stop scheduling new work
- `POST /api/reset` - Clear spans and return to idle
For complete debug visualizer documentation, see Debug Visualizer.
- Liquid Templates Guide - Template syntax and variables
- Command Provider Documentation - Command execution and transforms
- Configuration Reference - Full configuration options
- Telemetry Setup - OpenTelemetry tracing and metrics
- Debug Visualizer - Live execution visualization
- Output History - Tracking outputs across loop iterations