Runs Google Benchmark binaries with statistical meta-repetitions, automatically re-running only the unstable cases until results stabilize.
```bash
pip install meta-benchmark

# With CPU affinity support
pip install meta-benchmark[affinity]
```

Quick start:

```bash
meta-benchmark --exe ./build/my_benchmarks
```

How it works:

- Run each benchmark for `--min-meta-reps` repetitions (default: 5)
- Check whether the 95% CI half-width is ≤ `--rel-ci-threshold` of the mean (default: 3%)
- Re-run only the unstable cases
- Stop when all cases are stable or `--max-meta-reps` is reached (see the Python sketch below)
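To make the loop concrete, here is a minimal Python sketch of the meta-repetition logic under the defaults above. It is illustrative only, not meta-benchmark's actual implementation: the `run_once` callback is a hypothetical stand-in for launching the benchmark binary, and the Student-t critical value is an assumption about how the 95% CI is computed.

```python
# Illustrative sketch of the meta-repetition loop; NOT meta-benchmark's
# actual code. run_once(case) is a hypothetical helper that executes the
# benchmark binary once and returns a timing for that case.
import statistics

from scipy.stats import t


def rel_ci_half_width(samples: list[float]) -> float:
    """Relative 95% CI half-width of the mean (Student-t assumed)."""
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / n ** 0.5  # standard error of the mean
    return t.ppf(0.975, df=n - 1) * sem / mean  # two-sided 95% critical value


def meta_run(cases, run_once, min_reps=5, max_reps=30, threshold=0.03):
    results = {case: [] for case in cases}
    # Phase 1: every case gets the minimum number of repetitions.
    for _ in range(min_reps):
        for case in cases:
            results[case].append(run_once(case))
    # Phase 2: re-run only the cases whose CI is still too wide.
    reps = min_reps
    while reps < max_reps:
        unstable = [c for c in cases
                    if rel_ci_half_width(results[c]) > threshold]
        if not unstable:          # every case met the relative-CI target
            break
        for case in unstable:     # extra repetition only for noisy cases
            results[case].append(run_once(case))
        reps += 1
    return results


# Synthetic usage: a fake "benchmark" returning ~1.0 s with 2% noise.
import random
timings = meta_run(["case_a", "case_b"],
                   run_once=lambda case: 1.0 + random.gauss(0, 0.02))
```

The key design point is that only the cases failing the CI check receive extra repetitions, so total runtime concentrates on the noisy benchmarks.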
| Option | Default | Description |
|---|---|---|
| `--exe` | required | Benchmark executable |
| `--min-meta-reps` | 5 | Min runs before stability check |
| `--max-meta-reps` | 30 | Max total runs |
| `--rel-ci-threshold` | 0.03 | Target relative CI half-width (3%) |
| `--pin-core` | - | Pin to a CPU core |
| `--output` | `meta_results.json` | Output file |
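For scripting, the same options can be driven from Python. A minimal sketch, assuming `meta-benchmark` is on `PATH`; the flags are those from the table above:

```python
# Sketch: invoking the meta-benchmark CLI from a Python script.
import subprocess

subprocess.run(
    [
        "meta-benchmark",
        "--exe", "./build/my_benchmarks",
        "--min-meta-reps", "5",
        "--max-meta-reps", "30",
        "--rel-ci-threshold", "0.03",
        "--pin-core", "2",               # optional: pin to CPU core 2
        "--output", "meta_results.json",
    ],
    check=True,  # raise CalledProcessError if the run fails
)
```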
Full CLI reference: docs/usage.md
This project is part of a tooling initiative for the CMAP / hpc@maths team at École Polytechnique.
License: MIT