Quick Start
What is Bencher?
Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.
- Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The bencher CLI simply wraps your existing benchmark harness and stores its results.
- Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
- Catch: Catch performance regressions in CI. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they make it to production.
For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!
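For example, a CI job that runs your benchmarks through Bencher on every push might look like the following minimal sketch. It assumes GitHub Actions; the project slug, token secret, and benchmark command are placeholders you would replace with your own:

```yaml
# .github/workflows/benchmarks.yml -- illustrative sketch only
name: Benchmarks
on: [push]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the bencher CLI via the official GitHub Action
      - uses: bencherdev/bencher@main
      # Wrap your benchmark command; "bencher mock" stands in for a real harness
      - run: bencher run --project my-project --token '${{ secrets.BENCHER_API_TOKEN }}' "bencher mock"
```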
Install bencher CLI
Select your operating system and run the provided command to install the bencher CLI. For more details, see the bencher CLI install documentation.
Linux and macOS (shell):
curl --proto '=https' --tlsv1.2 -sSfL https://bencher.dev/download/install-cli.sh | sh

Windows (PowerShell):
powershell -c "irm https://bencher.dev/download/install-cli.ps1 | iex"

Or install from source with Cargo:
cargo install --git https://github.com/bencherdev/bencher --branch main --locked --force bencher_cli
Now, let's check that you have the bencher CLI installed. Run:
bencher --version
You should see:
bencher 0.5.0
Select your Benchmark Harness
If you already have benchmarks written, select your programming language and benchmarking harness from the list below. Otherwise, just skip this step. For more details, see the benchmark harness adapters documentation.
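The adapters parse your harness's output into Bencher's metric format. By default, bencher run tries to auto-detect the right adapter, but you can pin one explicitly with the --adapter flag. A small sketch, using the built-in json adapter with bencher mock standing in for a real benchmark command:

```sh
# Explicitly select the JSON adapter instead of relying on auto-detection
bencher run --adapter json "bencher mock"
```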
Track your Benchmarks
You are now ready to track your benchmark results!
To do so, you will use the bencher run CLI subcommand to run your benchmarks and collect the results. Run the command below that matches your benchmark harness:
- C++ (Catch2): bencher run "make benchmarks"
- C++ (Google Benchmark): bencher run "make benchmarks --benchmark_format=json"
- C# (BenchmarkDotNet): bencher run "dotnet run -c Release"
- Go (go test): bencher run "go test -bench"
- Java (JMH): bencher run --file results.json "java -jar benchmarks.jar -rf json -rff results.json"
- JavaScript (e.g. Benchmark.js): bencher run "node benchmark.js"
- JSON (bencher mock): bencher run "bencher mock"
- Python (airspeed velocity): bencher run "asv run"
- Python (pytest-benchmark): bencher run --file results.json "pytest --benchmark-json results.json benchmarks.py"
- Ruby (Benchmark): bencher run "ruby benchmarks.rb"
- Rust (libtest bench): bencher run "cargo +nightly bench"
- Rust (Criterion, Iai, or Iai-Callgrind): bencher run "cargo bench"
- Shell (Hyperfine): bencher run --file results.json "hyperfine --export-json results.json 'sleep 0.1'"
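The commands above run unauthenticated, so Bencher creates a project on the fly that you can claim later. Once you have an API token, you can name the project and its metadata explicitly. A sketch using real bencher run flags, where the project slug, branch, and testbed values are placeholders:

```sh
bencher run \
  --project my-project \
  --token "$BENCHER_API_TOKEN" \
  --branch main \
  --testbed localhost \
  --adapter json \
  "bencher mock"
```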
You may need to modify the benchmark command to match your setup. If you don't have any benchmarks yet, you can just use the bencher mock subcommand as your benchmark command to generate some mock data.
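Under the hood, every adapter converts your harness's output into Bencher Metric Format (BMF) JSON, keyed by benchmark name. A minimal illustration of the shape, with made-up values:

```json
{
  "bencher::mock_0": {
    "latency": {
      "value": 3445.0,
      "lower_value": 3100.5,
      "upper_value": 3789.5
    }
  }
}
```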
If everything works as expected, the end of the output should look something like this:
View results:
- bencher::mock_0 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=f7022024-ae16-4782-8f0d-869d65a82930&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54
- bencher::mock_1 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=7a823440-216f-482d-a05f-8bf75e865bba&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54
- bencher::mock_2 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=8d9695ff-f352-4781-9561-3c69012fd9fe&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54
- bencher::mock_3 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=8ef6e256-8084-4afe-a7cf-eaa46384c19d&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54
- bencher::mock_4 (Latency): https://bencher.dev/perf/project-abc4567-wxyz123456789?branches=88d5192d-5cd1-47c6-a817-056e5968737c&heads=657a8ee9-1f30-49d4-bd9b-ceed02576d7e&testbeds=f3a5db46-a57e-4caf-b96e-f0c1111eaa67&benchmarks=1205e35a-c73b-4ff9-916c-40838a62ae0b&measures=775999d3-d705-482f-acd8-41947f8e0fbc&start_time=1741390156000&end_time=1743982156000&report=709d3476-51a4-4939-9584-75d9a2c04c54
Claim this project: https://bencher.dev/auth/signup?claim=d4b0cd5a-8422-40af-9872-8e18d5d062c4
You can now view the results for each of your benchmarks in the browser.
Click or copy and paste the links from View results. To claim these results, click or copy and paste the Claim this project link into your browser.
🐰 Congrats! You tracked your first benchmark results! 🎉