Runs Google Benchmark repeatedly until results converge, ensuring stable, noise-free performance measurements for reliable comparisons.

Meta Benchmark

Run Google Benchmark binaries with statistical meta-repetitions. Automatically re-runs only unstable cases until results stabilize.

Installation

pip install meta-benchmark

# With CPU affinity support
pip install meta-benchmark[affinity]

Quick Start

meta-benchmark --exe ./build/my_benchmarks

How It Works

  1. Run each benchmark for --min-meta-reps (default: 5)
  2. Check whether the relative 95% CI half-width is ≤ --rel-ci-threshold (default: 3%)
  3. Re-run only unstable cases
  4. Stop when all stable or --max-meta-reps reached
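The meta-repetition loop above can be sketched in Python. This is a simplified model, not the tool's actual implementation: the `measure` callable stands in for one benchmark run, and the normal-approximation CI is an assumption about how stability is judged.

```python
import math
import statistics

def rel_ci_half_width(samples, z=1.96):
    """Relative 95% CI half-width of the mean (normal approximation)."""
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return z * sem / abs(statistics.mean(samples))

def meta_run(measure, min_reps=5, max_reps=30, threshold=0.03):
    """Re-run `measure` until the mean stabilizes or max_reps is reached."""
    samples = [measure() for _ in range(min_reps)]  # step 1: minimum reps
    # steps 2-4: keep re-running while unstable and under the rep budget
    while rel_ci_half_width(samples) > threshold and len(samples) < max_reps:
        samples.append(measure())
    return samples
```

A perfectly repeatable measurement stops after `min_reps` runs; a noisy one keeps accumulating samples until the CI tightens or `max_reps` is hit.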

Key Options

Option               Default              Description
--exe                (required)           Benchmark executable
--min-meta-reps      5                    Minimum runs before the stability check
--max-meta-reps      30                   Maximum total runs
--rel-ci-threshold   0.03                 Target relative CI half-width (3%)
--pin-core           -                    Pin the process to a CPU core
--output             meta_results.json    Output file
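Putting the options together, an invocation might look like the following. The values shown are the documented defaults, except --pin-core 2, which is purely illustrative:

```shell
meta-benchmark --exe ./build/my_benchmarks \
  --min-meta-reps 5 \
  --max-meta-reps 30 \
  --rel-ci-threshold 0.03 \
  --pin-core 2 \
  --output meta_results.json
```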

Full CLI reference: docs/usage.md

Context

This project is part of a tooling initiative for the CMAP / hpc@maths team at École Polytechnique.

License

MIT
