Track your hyperfine benchmarks

Track the results of your benchmarks over time with Bencher

HOW IT WORKS

Run locally. Run in CI. Same bare metal every time.

01

Run your benchmarks

Run your benchmarks locally or in CI on the exact same bare metal runners, using your favorite benchmarking tools. The bencher CLI orchestrates each run and stores the results.
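
For example, the CLI wraps whatever benchmark command produces your results. A minimal sketch using the built-in bencher mock command as a stand-in benchmark (project and authentication flags are omitted here for brevity):

# bencher run wraps the benchmark command that follows it; bencher mock
# generates stand-in results so you can see the flow end to end.
$ bencher run bencher mock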

02

Track your benchmarks

Track the results of your benchmarks over time. Monitor, query, and graph the results by source branch and testbed using the bencher CLI and the Bencher web console.
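
As a sketch, the branch and testbed for a run can be set explicitly so results are tracked along the right dimensions; the project slug and names below are placeholders:

# Attribute this run to a specific branch and testbed.
$ bencher run \
    --project my-project \
    --branch main \
    --testbed localhost \
    --file results.json \
    hyperfine --export-json results.json 'sleep 0.3'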

03

Catch performance regressions

Catch performance regressions locally or in CI on the exact same bare metal hardware. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they merge.
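
For example, the --err flag makes a run exit with an error whenever the configured analytics generate an alert, which is enough to fail a CI job. This sketch assumes a threshold has already been set up for the branch, testbed, and measure:

# Exit non-zero if this run triggers a regression alert.
$ bencher run --err --file results.json \
    hyperfine --export-json results.json 'sleep 0.3'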

Track your hyperfine benchmarks

Track the results of your hyperfine benchmarks over time. Monitor, query, and graph the results by source branch and testbed using the Bencher web console.

$ bencher run --file results.json hyperfine --export-json results.json 'sleep 0.3'
Benchmark 1: sleep 0.3
  Time (mean ± σ):     311.0 ms ±   3.7 ms    [User: 1.1 ms, System: 2.5 ms]
  Range (min … max):   306.8 ms … 316.5 ms    10 runs

IN REVIEW

Catch performance regressions in code review, not in production

No dashboards to remember to check. No manual benchmark runs. Every benchmark run lands as a PR comment. Regressions fail the build.
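
In GitHub Actions, for instance, a single command can both post the PR comment and gate the build. This is a sketch; the project slug and secret names are placeholders:

# Post results as a PR comment and fail the job on a regression alert.
$ bencher run \
    --project my-project \
    --token "$BENCHER_API_TOKEN" \
    --github-actions "$GITHUB_TOKEN" \
    --err \
    --file results.json \
    hyperfine --export-json results.json 'sleep 0.3'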

Getting started with Bencher is simple
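
One minimal path, assuming a Rust toolchain and that the CLI crate is published as bencher_cli (prebuilt binaries are also available from the project's releases):

# Install the CLI (crate name assumed), then generate mock benchmark
# results locally to see the output format before wiring up a project.
$ cargo install --locked bencher_cli
$ bencher mock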

DEPLOYMENT

Your infrastructure, or ours.

Open Source

Self-Hosted

Deploy Bencher on your own infrastructure. Bare metal, Docker, or Kubernetes. Full control, no data leaving your environment.
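
As one sketch of the Docker path, assuming the open source repository ships a Docker Compose setup:

# Clone the open source repo and bring up the API server and web console.
$ git clone https://github.com/bencherdev/bencher.git
$ cd bencher
$ docker compose up -d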

Deploy in 60 seconds

Share Your Benchmarks

Every public project gets its own perf page, and results can be shared with an auto-updating perf image. Perfect for your README!
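
For example, a README embed might look like the line below (the project slug and query parameters here are illustrative):

![Benchmark results](https://api.bencher.dev/v0/projects/my-project/perf/img?branches=main&testbeds=localhost&measures=latency)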

Your next performance regression won't announce itself

Catch it in review, or pay for it in production.