Catch Performance Regressions in CI

Detect and prevent performance regressions before they make it to production with continuous benchmarking

How It Works

Run your benchmarks

Run your benchmarks locally or in CI using your favorite benchmarking tools. The bencher CLI simply wraps your existing benchmark harness and stores its results.
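For instance, wrapping an existing cargo bench invocation might look like the following sketch. The project slug, token, and adapter are placeholders for your own setup; check the CLI docs for the full flag list.

```shell
# bencher run wraps the harness command, parses its output with the
# named adapter, and uploads the results to Bencher.
# "my-project" and the API token are placeholders.
export BENCHER_API_TOKEN=...
bencher run \
  --project my-project \
  --adapter rust_bench \
  "cargo bench"
```

The same command works locally and in CI, so a one-off run on your laptop and a scheduled run on a CI runner feed the same project.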

Track your benchmarks

Track the results of your benchmarks over time. Monitor, query, and graph the results in the Bencher web console, filtered by source branch and testbed.

Catch performance regressions

Catch performance regressions in CI. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they make it to production.
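The detection step is driven by thresholds: a statistical test per branch, testbed, and measure that decides when a result raises an alert. A hedged sketch of creating one with the CLI, assuming a t-test threshold on latency (the project slug is a placeholder, and the exact flags may differ by CLI version):

```shell
# Alert whenever a new latency result lands outside the 98th percentile
# boundary computed from the historical results for main on localhost.
bencher threshold create \
  --project my-project \
  --branch main \
  --testbed localhost \
  --measure latency \
  --test t_test \
  --upper-boundary 0.98
```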

You are in good company

Catch Performance Regressions in CI

Catch performance regressions in CI and leave the results as a comment on the pull request. Fail the build! Keep your codebase fast and your users happy.
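A minimal GitHub Actions sketch of this flow, assuming the bencherdev/bencher action to install the CLI; the project slug, testbed name, and secret names are placeholders:

```yaml
# Run benchmarks on every pull request, post the report as a PR comment,
# and fail the build if an alert fires.
on: pull_request
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bencherdev/bencher@main   # installs the bencher CLI
      - run: |
          bencher run \
            --project my-project \
            --token "${{ secrets.BENCHER_API_TOKEN }}" \
            --branch "$GITHUB_HEAD_REF" \
            --testbed ubuntu-latest \
            --adapter rust_bench \
            --err \
            --github-actions "${{ secrets.GITHUB_TOKEN }}" \
            "cargo bench"
```

Here --err makes the step (and thus the build) fail when an alert is generated, and --github-actions lets the CLI leave the report as a comment on the pull request.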


Report: Tue, December 5, 2023 at 00:16:53 UTC

Benchmark                                 | Latency, nanoseconds (ns) (Δ%) | Latency Upper Boundary, nanoseconds (ns) (%)
Adapter::Json 🚨 (view plot | view alert) | 3445.600 (+1.52%)              | 3362.079 (102.48%)
Adapter::Magic (JSON) ✅ (view plot)      | 3431.400 (+0.69%)              | 3596.950 (95.40%)
Adapter::Magic (Rust) ✅ (view plot)      | 22095.000 (-0.83%)             | 24732.801 (89.33%)
Adapter::Rust ✅ (view plot)              | 2305.700 (-2.76%)              | 2500.499 (92.21%)
Adapter::RustBench ✅ (view plot)         | 2299.900 (-3.11%)              | 2503.419 (91.87%)
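To make the boundary column concrete: each result is compared against a statistically derived limit, and the percentage shown is the result divided by that limit, so crossing 100% raises an alert. Recomputing the first two rows of the report above (a sketch using only the numbers in the table):

```shell
# "Upper Boundary (%)" = measured latency / boundary limit, as a percentage.
awk 'BEGIN { printf "Adapter::Json:         %.2f%%\n", 3445.600 / 3362.079 * 100 }'  # over the limit -> alert
awk 'BEGIN { printf "Adapter::Magic (JSON): %.2f%%\n", 3431.400 / 3596.950 * 100 }'  # under the limit -> ok
```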

Bencher - Continuous Benchmarking
View Public Perf Page
Docs | Repo | Chat | Help

Use Your Favorite Benchmark Harness

Bencher ships adapters for popular benchmark harnesses across languages, from cargo bench and Criterion for Rust to go test -bench, JMH, and pytest-benchmark, plus a language-agnostic JSON adapter for everything else.

Bencher Self-Hosted

Run Bencher on-prem or in your own cloud. Bencher can be deployed on a standalone server, in a Docker container, or as part of a Kubernetes cluster.
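As a rough sketch of the standalone Docker path, a single API container could be started like this. The image name, container name, and port are assumptions; consult the self-hosted docs for the exact values and for the console container:

```shell
# Start the Bencher API server in the background (values are placeholders).
docker run -d \
  --name bencher-api \
  -p 61016:61016 \
  ghcr.io/bencherdev/bencher-api
```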

Learn More

Bencher Cloud

It's 2024, who wants to manage yet another serviceβ€½ Let us take care of that for you. All of the same great features with none of the hassle.

Get Started

Share Your Benchmarks

All public projects have their own perf page. These results can easily be shared with an auto-updating perf image. Perfect for your README!
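Embedding an auto-updating perf image in a README might look like the following. The project slug and query parameters are placeholders, and the URL shape is an assumption; the web console can generate the exact snippet for you:

```markdown
[![Benchmark results](https://api.bencher.dev/v0/projects/my-project/perf/img?branches=main&testbeds=localhost&measures=latency)](https://bencher.dev/perf/my-project)
```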