How to track Rust Criterion benchmarks in CI
Everett Pompeii
Now that you have learned how to benchmark Rust code with Criterion, let’s see how to track those benchmarks in CI. Continuous Benchmarking is the practice of running benchmarks on every changeset to ensure that the changes do not introduce a performance regression. The easiest way to implement Continuous Benchmarking with Criterion benchmarks is to use Bencher.
What is Bencher?
Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.
- Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The `bencher` CLI simply wraps your existing benchmark harness and stores its results.
- Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
- Catch: Catch performance regressions in CI. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they make it to production.
For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!
Steps for Bencher Cloud
- Create a Bencher Cloud account.
- Create an API token and add it to your CI as a secret.
- Create a workflow for your CI, like GitHub Actions or GitLab CI/CD.
- Install the Bencher CLI in your CI workflow.
- Run your benchmarks with the `bencher run` subcommand in your CI workflow using the `rust_criterion` adapter, as in the example workflow below.
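Putting those steps together, a minimal GitHub Actions workflow might look like the following sketch. The project slug (`my-project`) is a placeholder for your own project, and the API token is assumed to be stored as a repository secret named `BENCHER_API_TOKEN`.

```yaml
name: Continuous Benchmarking with Bencher

on:
  push:
    branches: main

jobs:
  benchmark_with_bencher:
    name: Benchmark with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: my-project # placeholder: your Bencher project slug
      BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }} # API token stored as a CI secret
    steps:
      - uses: actions/checkout@v4
      # Install the Bencher CLI
      - uses: bencherdev/bencher@main
      # Wrap `cargo bench` so Bencher can parse the Criterion output
      - name: Track benchmarks with Bencher
        run: |
          bencher run \
          --branch main \
          --testbed ubuntu-latest \
          --adapter rust_criterion \
          "cargo bench"
```

Here `bencher run` invokes `cargo bench`, parses the Criterion results with the `rust_criterion` adapter, and reports them to Bencher Cloud under the `main` branch and the `ubuntu-latest` testbed.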
Steps for Bencher Self-Hosted
- Create a Bencher Self-Hosted instance.
- Create an account on your Bencher Self-Hosted instance.
- Create an API token and add it to your CI as a secret.
- Create a workflow for your CI, like GitHub Actions or GitLab CI/CD.
- Install the Bencher CLI in your CI workflow. Make sure the CLI version matches the version of your Bencher Self-Hosted instance.
- Run your benchmarks with the `bencher run` subcommand in your CI workflow using the `rust_criterion` adapter and setting the `--host` option to your Bencher Self-Hosted instance URL, as in the sketch below.
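The workflow is the same as the Cloud sketch above; only the `bencher run` step changes to point at your own instance. The URL below is a placeholder:

```yaml
      # Same job as the Cloud workflow, but report to a Self-Hosted instance.
      - name: Track benchmarks with Bencher Self-Hosted
        run: |
          bencher run \
          --host https://bencher.example.com \
          --branch main \
          --testbed ubuntu-latest \
          --adapter rust_criterion \
          "cargo bench"
```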
🦀 Rust Criterion
The Rust Criterion Adapter (`rust_criterion`) expects Criterion output. The `latency` Measure (i.e. nanoseconds (`ns`)) is gathered. The `lower_value` and `upper_value` are the lower and upper bounds of either the slope (if available) or the mean (if not) (i.e. `value`), respectively.
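For example, Criterion prints each estimate as a three-value interval (the benchmark name and timings below are illustrative):

```
my_benchmark            time:   [3.1618 µs 3.1700 µs 3.1787 µs]
```

The middle number is recorded as the `value`, and the first and last numbers become the `lower_value` and `upper_value`, all converted to nanoseconds for the `latency` Measure.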
Track your benchmarks in CI
Have you ever had a performance regression impact your users? Bencher could have prevented that from happening with continuous benchmarking.