How to track Rust Criterion benchmarks in CI

Everett Pompeii


Now that you have learned how to benchmark Rust code with Criterion, let’s see how to track those benchmarks in CI. Continuous Benchmarking is the practice of running benchmarks on every changeset to ensure the changes do not introduce a performance regression. The easiest way to implement Continuous Benchmarking with Criterion benchmarks is to use Bencher.

What is Bencher?

Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.

  • Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The bencher CLI simply wraps your existing benchmark harness and stores its results.
  • Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
  • Catch: Catch performance regressions in CI. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they make it to production.

For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!

Steps for Bencher Cloud

  1. Create a Bencher Cloud account.
  2. Create an API token and add it to your CI as a secret.
  3. Create a workflow for your CI, like GitHub Actions or GitLab CI/CD.
  4. Install the Bencher CLI in your CI workflow.
  5. Run your benchmarks with the bencher run subcommand in your CI workflow using the rust_criterion adapter.
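Put together, the final step of a Bencher Cloud CI job might look like the following sketch. The project slug (my-project) and the BENCHER_API_TOKEN secret name are placeholder assumptions here; substitute your own project slug and whatever name you gave the secret in your CI provider.

```shell
# Run Criterion benchmarks and send the results to Bencher Cloud.
# `my-project` and the BENCHER_API_TOKEN environment variable are placeholders;
# use your actual project slug and CI secret.
bencher run \
  --project my-project \
  --token "$BENCHER_API_TOKEN" \
  --adapter rust_criterion \
  "cargo bench"
```

The benchmark command itself is passed as the final argument, so bencher run can wrap any invocation that produces Criterion output.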

Steps for Bencher Self-Hosted

  1. Create a Bencher Self-Hosted instance.
  2. Create an account on your Bencher Self-Hosted instance.
  3. Create an API token and add it to your CI as a secret.
  4. Create a workflow for your CI, like GitHub Actions or GitLab CI/CD.
  5. Install the Bencher CLI in your CI workflow. Make sure the CLI version matches the version of your Bencher Self-Hosted instance.
  6. Run your benchmarks with the bencher run subcommand in your CI workflow using the rust_criterion adapter and setting the --host option to your Bencher Self-Hosted instance URL.
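For Self-Hosted, the run step is the same sketch as for Bencher Cloud, plus the --host option. The instance URL, project slug, and secret name below are placeholder assumptions; point --host at your actual Bencher Self-Hosted instance.

```shell
# Run Criterion benchmarks and send the results to a Self-Hosted instance.
# The URL, `my-project`, and BENCHER_API_TOKEN are placeholders.
bencher run \
  --host https://bencher.example.com \
  --project my-project \
  --token "$BENCHER_API_TOKEN" \
  --adapter rust_criterion \
  "cargo bench"
```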

🦀 Rust Criterion

The Rust Criterion Adapter (rust_criterion) expects Criterion output. A latency Measure, in nanoseconds (ns), is gathered. The lower_value and upper_value are the lower and upper bounds of either the slope (if available) or the mean (if not) (i.e. the value), respectively.

bencher run --adapter rust_criterion "cargo bench"
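To see how the adapter maps Criterion's output onto those values, consider a line in the form Criterion prints its estimates. The benchmark name and timings below are made up for illustration, and the awk one-liner is just a sketch of the mapping, not something the adapter requires you to run:

```shell
# A Criterion estimate line has the shape: name  time: [lower value upper]
line='my_benchmark            time:   [331.85 µs 332.12 µs 332.42 µs]'

# The three bracketed numbers map to Bencher's lower_value, value, and
# upper_value, in that order.
echo "$line" | awk -F'[][]' '{split($2, t, " "); print "lower_value=" t[1], "value=" t[3], "upper_value=" t[5]}'
```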




Published: Sat, November 9, 2024 at 7:15:00 PM UTC