Run your benchmarks locally or in CI using your favorite benchmarking tools. The bencher CLI simply wraps your existing benchmark harness and stores its results.
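For example, a project that already has a benchmark harness can wrap it without changing it. A minimal sketch, where the project slug, token variable, and harness command are placeholders rather than values from this page:

```shell
# Wrap an existing benchmark harness with the bencher CLI.
# --project and --token values here are placeholders for your own
# project slug and API token; the harness command is just an example.
bencher run \
  --project my-project-slug \
  --token "$BENCHER_API_TOKEN" \
  "cargo bench"
```

The harness runs exactly as it would on its own; bencher parses its output and stores the results.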
Track your hyperfine
benchmarks
Track the results of your benchmarks over time with Bencher
How It Works
Run your benchmarks
Track your benchmarks
Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch and testbed.
Catch performance regressions
Catch performance regressions in CI. Bencher uses state of the art, customizable analytics to detect performance regressions before they make it to production.
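In CI, this typically means invoking `bencher run` from your workflow. A hedged sketch of a GitHub Actions job, assuming the `bencherdev/bencher` setup action; the project slug, secret name, and harness command are placeholders, not taken from this page:

```yaml
# Hypothetical GitHub Actions job: run benchmarks and let Bencher
# fail the build when an alert is generated (--err).
name: benchmarks
on: push
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bencherdev/bencher@main
      - run: |
          bencher run \
            --project my-project-slug \
            --token "${{ secrets.BENCHER_API_TOKEN }}" \
            --branch main \
            --testbed ubuntu-latest \
            --err \
            "cargo bench"
```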
Track your hyperfine benchmarks
Track the results of your hyperfine benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch and testbed.
$ bencher run --file results.json hyperfine --export-json results.json 'sleep 0.3'
Benchmark 1: sleep 0.3
  Time (mean ± σ):     311.0 ms ±   3.7 ms    [User: 1.1 ms, System: 2.5 ms]
  Range (min … max):   306.8 ms … 316.5 ms    10 runs
Benchmark Harness Results:
{
"results": [
{
"command": "sleep 0.3",
"mean": 0.31100947182,
"stddev": 0.0036727203564211972,
"median": 0.31172578012,
"user": 0.00112766,
"system": 0.0025237199999999993,
"min": 0.30678948862,
"max": 0.31653086362,
"times": [
0.30732828062,
0.30698090562,
0.31075336362,
0.30678948862,
0.31275773762000003,
0.31653086362,
0.31551086362,
0.31322536262,
0.30751965562,
0.31269819662000004
],
"exit_codes": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
}
Bencher New Report:
{
"branch": "main",
"end_time": "2024-03-10T20:34:00.981095Z",
"hash": "a7d081e048b77c5d4bfc0fb66f48cbf05cb49464",
"results": [ ... ],
"settings": {},
"start_time": "2024-03-10T20:33:57.722930Z",
"testbed": "localhost"
}
Catch Performance Regressions in CI
Catch performance regressions in CI and leave the results as a comment on pull requests. Fail the build! Keep your codebase fast and your users happy.
Bencher Report
Branch | 254/merge |
Testbed | ubuntu-latest |
🚨 1 ALERT: Threshold Boundary Limit exceeded!
| Benchmark | Measure (Units) | View | Benchmark Result (Result Δ%) | Lower Boundary (Limit %) | Upper Boundary (Limit %) |
|---|---|---|---|---|---|
| Adapter::Json | Latency (nanoseconds (ns)) | 📈 plot 🚨 alert 🚷 threshold | 3,445.60 (+1.52%) | | 3,362.07 (102.48%) |
Click to view all benchmark results
| Benchmark | View | Benchmark Results nanoseconds (ns) (Result Δ%) | Upper Boundary nanoseconds (ns) (Limit %) |
|---|---|---|---|
| Adapter::Json | 📈 view plot 🚨 view alert 🚷 view threshold | 3,445.60 (+1.52%) | 3,362.07 (102.48%) |
| Adapter::Magic (JSON) | 📈 view plot 🚷 view threshold | 3,431.40 (+0.69%) | 3,596.95 (95.40%) |
| Adapter::Magic (Rust) | 📈 view plot 🚷 view threshold | 22,095.00 (-0.83%) | 24,732.80 (89.33%) |
| Adapter::Rust | 📈 view plot 🚷 view threshold | 2,305.70 (-2.76%) | 2,500.49 (92.21%) |
| Adapter::RustBench | 📈 view plot 🚷 view threshold | 2,299.90 (-3.11%) | 2,503.41 (91.87%) |
🐰 View full continuous benchmarking report in Bencher
Hosting
Self-Hosted
Run Bencher on-prem or in your own cloud. Bencher can be deployed on a standalone server, in a Docker container, or as part of a Kubernetes cluster.
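A standalone Docker deployment might look like the following sketch. The image names, tags, and ports here are assumptions based on common Bencher setups, not values from this page, so check the official self-hosted docs before using them:

```shell
# Hypothetical standalone deployment: API server plus web console.
# Image names and ports are assumptions, not from this page.
docker run --detach --publish 61016:61016 --name bencher-api \
  ghcr.io/bencherdev/bencher-api
docker run --detach --publish 3000:3000 --name bencher-console \
  ghcr.io/bencherdev/bencher-console
```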
Bencher Cloud
It's 2024, who wants to manage yet another service⁉ Let us take care of that for you. All of the same great features with none of the hassle.
Share Your Benchmarks
All public projects have their own perf page. These results can easily be shared with an auto-updating perf image. Perfect for your README!
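Embedding such an image in a README is a one-liner. A sketch, where the project slug and target URL are placeholders rather than values from this page:

```markdown
<!-- Hypothetical perf image embed; the project slug is a placeholder. -->
[![Benchmarks](https://api.bencher.dev/v0/projects/my-project-slug/perf/img)](https://bencher.dev/perf/my-project-slug)
```

Because the image is generated on request, the chart in your README stays current as new benchmark results land.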
Track your benchmarks in CI
Have you ever had a performance regression impact your users? Bencher could have prevented that from happening with continuous benchmarking.