CONTINUOUS BENCHMARKING

Catch performance regressions before they merge

Bencher runs on bare metal to eliminate noise and deliver benchmark results you can actually trust.

[Chart: live benchmark results in microseconds (μs)]


THE PROBLEM

CI benchmarks are too noisy to trust

If you can't tell a real regression from noise, the data is worthless. So teams stop looking, and performance regressions silently ship.

CI runners are shared and unpredictable

Shared CPU, memory contention, and scheduler jitter mean two identical benchmark runs can return wildly different results.

Noisy results train you to ignore alerts

When benchmarks cry wolf on every PR, engineers stop looking. By the time a real performance regression lands, nobody catches it.

Performance regressions silently ship

Without trustworthy benchmark results in the PR workflow, performance regressions reach production. You find out when users do.

WHY BARE METAL

All signal. No noise.

CI runners introduce enough noise to mask real performance regressions. Bencher Bare Metal isolates your benchmarks on dedicated bare metal hardware, so when a number changes, it means something.

TYPICAL CI RUNNER VARIANCE

>30%

Noisy shared CI runners hide real performance regressions.

BENCHER BARE METAL VARIANCE

<2%

Dedicated bare metal hardware. When a number moves, it's real.

HOW IT WORKS

Run locally. Run in CI. Same bare metal every time.

01

Run your benchmarks

Run your benchmarks locally or in CI with your favorite benchmarking tools, on the exact same bare metal runners. The bencher CLI orchestrates each run on bare metal and stores the results, as in the sketch below.
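
For example, a single run might look like this. The project slug and token are placeholders, and cargo bench stands in for whichever harness you use; the --adapter flag tells the CLI how to parse that harness's output.

    # placeholders: swap in your own project slug, token, and benchmark command
    bencher run \
        --project my-project \
        --token "$BENCHER_API_TOKEN" \
        --adapter rust_criterion \
        "cargo bench"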

02

Track your benchmarks

Track your benchmark results over time. Monitor, query, and graph them by source branch and testbed using the bencher CLI or the Bencher web console.
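
If you need the raw data, the public REST API exposes a perf query endpoint. The sketch below is from memory, so verify the parameter names against the API reference; the UUIDs are placeholders.

    # query results for one branch, testbed, benchmark, and measure (UUIDs are placeholders)
    curl "https://api.bencher.dev/v0/projects/my-project/perf?branches=<branch-uuid>&testbeds=<testbed-uuid>&benchmarks=<benchmark-uuid>&measures=<measure-uuid>"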

03

Catch performance regressions

Catch performance regressions locally or in CI on the exact same bare metal hardware. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they merge.
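
In practice this can be as simple as adding the --err flag, which makes the CLI exit non-zero when a regression alert fires. This sketch assumes a threshold is already configured for the branch, testbed, and measure.

    # same run as before, but a detected regression now fails the command
    bencher run \
        --project my-project \
        --adapter rust_criterion \
        --err \
        "cargo bench"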

IN DEVELOPMENT

Catch performance regressions before you even push

The bencher CLI runs anywhere your code does. Catch regressions at the source, before you even open a PR. The CLI and API are agent-ready: your coding agents can benchmark on bare metal without you in the loop.
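
One way to wire this in locally is an ordinary git pre-push hook. The hook mechanism is standard git; the project slug and testbed name below are placeholders.

    #!/bin/sh
    # .git/hooks/pre-push: abort the push if a benchmark alert fires
    exec bencher run --project my-project --testbed dev --err "cargo bench"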

IN REVIEW

Catch performance regressions in code review, not in production

No dashboards to remember to check. No manual benchmark runs. Every benchmark run lands as a PR comment. Regressions fail the build.
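
In GitHub Actions, for instance, the same command can post its results back to the pull request via the --github-actions flag, which takes the workflow's token. The project slug and adapter are placeholders.

    # inside a pull_request workflow run step
    bencher run \
        --project my-project \
        --adapter rust_criterion \
        --github-actions '${{ secrets.GITHUB_TOKEN }}' \
        --err \
        "cargo bench"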

WHAT ENGINEERS SAY

Performance coverage for your PRs

Bencher is like CodeCov for performance metrics.
Jonathan Woollett-Light @JonathanWoollett-Light
Now that I'm starting to see graphs of performance over time automatically from tests I'm running in CI. It's like this whole branch of errors can be caught and noticed sooner.
Price Clark @gpwclark
95% of the time I don't want to think about my benchmarks. But when I need to, Bencher ensures that I have the detailed historical record waiting there for me. It's fire-and-forget.
Joe Neeman @jneem

BENCHMARK HARNESSES

Bring your own benchmark harness

Rust

  • Criterion
  • libtest bench
  • Iai
  • Gungraun

C++

  • Google Benchmark
  • Catch2

Python

  • pytest-benchmark
  • airspeed velocity

Java

  • JMH

C#

  • BenchmarkDotNet

JavaScript

  • Benchmark.js

Go

  • go test -bench

Ruby

  • Benchmark

Dart

  • benchmark_harness

Shell

  • Hyperfine
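
Each harness plugs in through an adapter that tells the bencher CLI how to parse its output. Two hedged examples follow; the adapter slugs are as documented at the time of writing, so check bencher run --help for the full list.

    bencher run --adapter rust_criterion "cargo bench"
    bencher run --adapter go_bench "go test -bench=. ./..."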

Don't see your harness? Open an issue →

DEPLOYMENT

Your infrastructure, or ours.

Open Source

Self-Hosted

Deploy Bencher on your own infrastructure. Bare metal, Docker, or Kubernetes. Full control, no data leaving your environment.
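
As a minimal sketch, assuming the docker compose file bundled in the open source repository:

    # clone the open source repo and bring the stack up with Docker
    git clone https://github.com/bencherdev/bencher.git
    cd bencher
    docker compose up -d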

Deploy in 60 seconds

Your next performance regression won't announce itself

Catch it in review, or pay for it in production.