How to benchmark Rust code with Criterion

Everett Pompeii
What is Benchmarking?
Benchmarking is the practice of testing the performance of your code to see how fast (latency) or how much work (throughput) it can do. This often overlooked step in software development is crucial for creating and maintaining fast and performant code. Benchmarking provides the necessary metrics for developers to understand how well their code performs under various workloads and conditions. For the same reasons that you write unit and integration tests to prevent feature regressions, you should write benchmarks to prevent performance regressions. Performance bugs are bugs!
Write FizzBuzz in Rust
In order to write benchmarks, we need some source code to benchmark. To start off we are going to write a very simple program, FizzBuzz.
The rules for FizzBuzz are as follows:
Write a program that prints the integers from `1` to `100` (inclusive):

- For multiples of three, print `Fizz`
- For multiples of five, print `Buzz`
- For multiples of both three and five, print `FizzBuzz`
- For all others, print the number
There are many ways to write FizzBuzz, so we'll go with my favorite:
```rust
fn main() {
    for i in 1..=100 {
        match (i % 3, i % 5) {
            (0, 0) => println!("FizzBuzz"),
            (0, _) => println!("Fizz"),
            (_, 0) => println!("Buzz"),
            (_, _) => println!("{i}"),
        }
    }
}
```
- Create a `main` function.
- Iterate from `1` to `100` inclusively.
- For each number, calculate the modulus (remainder after division) for both `3` and `5`.
- Pattern match on the two remainders. If a remainder is `0`, then the number is a multiple of the given factor.
- If the remainder is `0` for both `3` and `5`, then print `FizzBuzz`.
- If the remainder is `0` for only `3`, then print `Fizz`.
- If the remainder is `0` for only `5`, then print `Buzz`.
- Otherwise, just print the number.
Follow Step-by-Step
In order to follow along with this step-by-step tutorial, you will need to install Rust.
🐰 The source code for this post is available on GitHub.
With Rust installed, you can then open a terminal window and enter:

```
cargo init game
```
Then navigate into the newly created `game` directory:

```
game
├── Cargo.toml
└── src
    └── main.rs
```

You should see a directory called `src` with a file named `main.rs`:
```rust
fn main() {
    println!("Hello, world!");
}
```
Replace its contents with the above FizzBuzz implementation. Then run `cargo run`.
The output should look like:
```
$ cargo run
   Compiling playground v0.0.1 (/home/bencher)
    Finished dev [unoptimized + debuginfo] target(s) in 0.44s
     Running `target/debug/game`
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
...
97
98
Fizz
Buzz
```
🐰 Boom! You’re cracking the coding interview!
A new `Cargo.lock` file should have been generated:

```
game
├── Cargo.lock
├── Cargo.toml
└── src
    └── main.rs
```
Before going any further, it is important to discuss the differences between micro-benchmarking and macro-benchmarking.
Micro-Benchmarking vs Macro-Benchmarking
There are two major categories of software benchmarks: micro-benchmarks and macro-benchmarks.
Micro-benchmarks operate at a level similar to unit tests. For example, a benchmark for a function that determines `Fizz`, `Buzz`, or `FizzBuzz` for a single number would be a micro-benchmark.

Macro-benchmarks operate at a level similar to integration tests. For example, a benchmark for a function that plays the entire game of FizzBuzz, from `1` to `100`, would be a macro-benchmark.
Generally, it is best to test at the lowest level of abstraction possible. In the case of benchmarks, this makes them easier to maintain and helps reduce the amount of noise in the measurements. However, just as having some end-to-end tests can be very useful for sanity checking that the entire system comes together as expected, having macro-benchmarks can be very useful for making sure that the critical paths through your software remain performant.
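To make that distinction concrete, here is a rough sketch of what each kind could look like with Criterion (introduced in the next section), assuming the `fizz_buzz` and `play_game` functions we extract into a `game` library crate later in this post; the benchmark names are just illustrative:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use game::{fizz_buzz, play_game};

// Micro-benchmark: one unit of work, a single round of FizzBuzz.
fn bench_fizz_buzz_once(c: &mut Criterion) {
    c.bench_function("bench_fizz_buzz_once", |b| {
        b.iter(|| std::hint::black_box(fizz_buzz(std::hint::black_box(42))));
    });
}

// Macro-benchmark: an entire game, from 1 to 100.
fn bench_play_game_full(c: &mut Criterion) {
    c.bench_function("bench_play_game_full", |b| {
        b.iter(|| {
            for i in 1..=100 {
                std::hint::black_box(play_game(std::hint::black_box(i), false));
            }
        });
    });
}

criterion_group!(benches, bench_fizz_buzz_once, bench_play_game_full);
criterion_main!(benches);
```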
Benchmarking in Rust
The three popular options for benchmarking in Rust are: libtest bench, Criterion, and Iai.
libtest is Rust's built-in unit testing and benchmarking framework. Though part of the Rust standard library, libtest bench is still considered unstable, so it is only available on `nightly` compiler releases. To work on the stable Rust compiler, a separate benchmarking harness needs to be used. Neither is being actively developed, though.
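For reference, a libtest bench benchmark looks roughly like this sketch on a `nightly` toolchain (we won't use this in the rest of the post, and the `play_game` function it calls is the one we extract into a library later on):

```rust
// Built with a nightly toolchain, e.g. `cargo +nightly bench`
#![feature(test)]
extern crate test;

use game::play_game;
use test::{black_box, Bencher};

#[bench]
fn bench_play_game(b: &mut Bencher) {
    b.iter(|| {
        for i in 1..=100 {
            // black_box keeps the compiler from optimizing the calls away.
            black_box(play_game(i, false));
        }
    });
}
```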
The most popular benchmarking harness within the Rust ecosystem is Criterion. It works on both stable and `nightly` Rust compiler releases, and it has become the de facto standard within the Rust community. Criterion is also much more feature-rich compared to libtest bench.
An experimental alternative to Criterion is Iai, from the same creator as Criterion. However, it uses instruction counts instead of wall clock time: CPU instructions, L1 accesses, L2 accesses, and RAM accesses. This allows for single-shot benchmarking, since these metrics should stay nearly identical between runs.
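To give a feel for it, an Iai benchmark for our game might look something like this sketch (the function name is illustrative, and it again assumes the `play_game` function we define later in this post):

```rust
use game::play_game;
use iai::black_box;

// Iai benchmark functions take no arguments; Iai counts the instructions they execute.
fn iai_play_game() {
    for i in 1..=100 {
        black_box(play_game(black_box(i), false));
    }
}

iai::main!(iai_play_game);
```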
All three are supported by Bencher. So why choose Criterion? Criterion is the de facto standard benchmarking harness in the Rust community, and I would suggest using it for benchmarking your code's latency. That is, Criterion is great for measuring wall clock time.
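Latency isn't the only thing Criterion can report, though. As a hedged sketch of the throughput side mentioned earlier, a benchmark group can be told how many elements each iteration processes, and Criterion will then also report elements per second (again using the `play_game` function from later in this post; the group and benchmark names are illustrative):

```rust
use criterion::{criterion_group, criterion_main, Criterion, Throughput};
use game::play_game;

fn bench_play_game_throughput(c: &mut Criterion) {
    let mut group = c.benchmark_group("play_game_throughput");
    // Each iteration plays 100 rounds, so report throughput as 100 elements per iteration.
    group.throughput(Throughput::Elements(100));
    group.bench_function("bench_play_game_throughput", |b| {
        b.iter(|| {
            for i in 1..=100 {
                std::hint::black_box(play_game(std::hint::black_box(i), false));
            }
        });
    });
    group.finish();
}

criterion_group!(benches, bench_play_game_throughput);
criterion_main!(benches);
```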
Refactor FizzBuzz
In order to test our FizzBuzz application, we need to decouple our logic from our program's `main` function. Benchmark harnesses can't benchmark the `main` function. In order to do this, we need to make a few changes.

Under `src`, create a new file named `lib.rs`:
```
game
├── Cargo.lock
├── Cargo.toml
└── src
    ├── lib.rs
    └── main.rs
```
Add the following code to `lib.rs`:
```rust
pub fn play_game(n: u32, print: bool) {
    let result = fizz_buzz(n);
    if print {
        println!("{result}");
    }
}

pub fn fizz_buzz(n: u32) -> String {
    match (n % 3, n % 5) {
        (0, 0) => "FizzBuzz".to_string(),
        (0, _) => "Fizz".to_string(),
        (_, 0) => "Buzz".to_string(),
        (_, _) => n.to_string(),
    }
}
```
- `play_game`: Takes in an unsigned integer `n`, calls `fizz_buzz` with that number, and if `print` is `true`, prints the result.
- `fizz_buzz`: Takes in an unsigned integer `n` and performs the actual `Fizz`, `Buzz`, `FizzBuzz`, or number logic, returning the result as a string.
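Since we have now split our logic out into a library, we could also cover it with ordinary unit tests alongside the benchmarks. This is just a sketch of what that might look like at the bottom of `lib.rs` (the test names and cases are my own, not part of the original game):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn fizz_buzz_multiples() {
        assert_eq!(fizz_buzz(3), "Fizz");
        assert_eq!(fizz_buzz(5), "Buzz");
        assert_eq!(fizz_buzz(15), "FizzBuzz");
    }

    #[test]
    fn fizz_buzz_other_numbers() {
        assert_eq!(fizz_buzz(1), "1");
        assert_eq!(fizz_buzz(7), "7");
    }
}
```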
Then update `main.rs` to look like this:
```rust
use game::play_game;

fn main() {
    for i in 1..=100 {
        play_game(i, true);
    }
}
```
- `game::play_game`: Import `play_game` from the `game` crate we just created with `lib.rs`.
- `main`: The main entrypoint into our program that iterates through the numbers `1` to `100` inclusive and calls `play_game` for each number, with `print` set to `true`.
Benchmarking FizzBuzz
In order to benchmark our code, we need to create a `benches` directory and add a file to contain our benchmarks, `play_game.rs`:
```
game
├── Cargo.lock
├── Cargo.toml
├── benches
│   └── play_game.rs
└── src
    ├── lib.rs
    └── main.rs
```
Inside of `play_game.rs`, add the following code:
```rust
use criterion::{criterion_group, criterion_main, Criterion};
use game::play_game;

fn bench_play_game(c: &mut Criterion) {
    c.bench_function("bench_play_game", |b| {
        b.iter(|| {
            std::hint::black_box(for i in 1..=100 {
                play_game(i, false)
            });
        });
    });
}

criterion_group!(
    benches,
    bench_play_game,
);
criterion_main!(benches);
```
- Import the `Criterion` benchmark runner.
- Import the `play_game` function from our `game` crate.
- Create a function named `bench_play_game` that takes in a mutable reference to `Criterion`.
- Use the `Criterion` instance (`c`) to create a benchmark named `bench_play_game`.
- Then use the benchmark runner (`b`) to run our macro-benchmark several times.
- Run our macro-benchmark inside of a "black box" so the compiler doesn't optimize away our code.
- Iterate from `1` to `100` inclusively.
- For each number, call `play_game`, with `print` set to `false`.
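That `std::hint::black_box` call is what keeps the optimizer honest: without it, the compiler could notice that the results are never used and delete the very work we are trying to measure. An equivalent sketch (my variation, not the code we use below) is to black-box each individual result instead of the whole loop:

```rust
use criterion::Criterion;
use game::play_game;

fn bench_play_game(c: &mut Criterion) {
    c.bench_function("bench_play_game", |b| {
        b.iter(|| {
            for i in 1..=100 {
                // Black-box both the input and the result so neither can be optimized away.
                std::hint::black_box(play_game(std::hint::black_box(i), false));
            }
        });
    });
}
```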
Now we need to configure the `game` crate to run our benchmarks. Add the following to the bottom of your `Cargo.toml` file:
```toml
[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "play_game"
harness = false
```
- `criterion`: Add `criterion` as a development dependency, since we are only using it for performance testing.
- `bench`: Register `play_game` as a benchmark and set `harness` to `false`, since we will be using Criterion as our benchmarking harness.
Now we're ready to benchmark our code. Run `cargo bench`:
```
$ cargo bench
   Compiling playground v0.0.1 (/home/bencher)
    Finished bench [optimized] target(s) in 4.79s
     Running unittests src/main.rs (target/release/deps/game-68f58c96f4025bd4)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/main.rs (target/release/deps/game-043972c4132076a9)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running benches/play_game.rs (target/release/deps/play_game-e0857103eb02eb56)
bench_play_game         time:   [3.0020 µs 3.0781 µs 3.1730 µs]
Found 12 outliers among 100 measurements (12.00%)
  2 (2.00%) high mild
  10 (10.00%) high severe
```
🐰 Lettuce turnip the beet! We’ve got our first benchmark metrics!
Finally, we can rest our weary developer heads… Just kidding, our users want a new feature!
Write FizzBuzzFibonacci in Rust
Our Key Performance Indicators (KPIs) are down, so our Product Manager (PM) wants us to add a new feature. After much brainstorming and many user interviews, it is decided that good ole FizzBuzz isn’t enough. Kids these days want a new game, FizzBuzzFibonacci.
The rules for FizzBuzzFibonacci are as follows:
Write a program that prints the integers from `1` to `100` (inclusive):

- For multiples of three, print `Fizz`
- For multiples of five, print `Buzz`
- For multiples of both three and five, print `FizzBuzz`
- For numbers that are part of the Fibonacci sequence, only print `Fibonacci`
- For all others, print the number
The Fibonacci sequence is a sequence in which each number is the sum of the two preceding numbers.
For example, starting at `0` and `1`, the next number in the Fibonacci sequence would be `1`. Followed by: `2`, `3`, `5`, `8`, and so on.
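As a quick illustrative sketch (not part of our game's code), generating the sequence is just a matter of carrying the previous and current numbers along:

```rust
fn main() {
    let (mut previous, mut current) = (0u32, 1u32);
    print!("{previous} {current}");
    // Print the next eight Fibonacci numbers: 1 2 3 5 8 13 21 34
    for _ in 0..8 {
        let next = previous + current;
        previous = current;
        current = next;
        print!(" {current}");
    }
    println!();
}
```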
Numbers that are part of the Fibonacci sequence are known as Fibonacci numbers. So we’re going to have to write a function that detects Fibonacci numbers.
There are many ways to generate the Fibonacci sequence and likewise many ways to detect a Fibonacci number, so we'll go with my favorite:
```rust
fn is_fibonacci_number(n: u32) -> bool {
    for i in 0..=n {
        let (mut previous, mut current) = (0, 1);
        while current < i {
            let next = previous + current;
            previous = current;
            current = next;
        }
        if current == n {
            return true;
        }
    }
    false
}
```
- Create a function named `is_fibonacci_number` that takes in an unsigned integer and returns a boolean.
- Iterate over all numbers from `0` to our given number `n` inclusive.
- Initialize our Fibonacci sequence starting with `0` and `1` as the `previous` and `current` numbers respectively.
- Iterate while the `current` number is less than the current iteration `i`.
- Add the `previous` and `current` numbers to get the `next` number.
- Update the `previous` number to the `current` number.
- Update the `current` number to the `next` number.
- Once `current` is greater than or equal to the current iteration `i`, we will exit the loop.
- Check to see if the `current` number is equal to the given number `n`, and if so return `true`.
- Otherwise, return `false`.
Now we will need to update our `fizz_buzz` function:
```rust
pub fn fizz_buzz_fibonacci(n: u32) -> String {
    if is_fibonacci_number(n) {
        "Fibonacci".to_string()
    } else {
        match (n % 3, n % 5) {
            (0, 0) => "FizzBuzz".to_string(),
            (0, _) => "Fizz".to_string(),
            (_, 0) => "Buzz".to_string(),
            (_, _) => n.to_string(),
        }
    }
}
```
- Rename the `fizz_buzz` function to `fizz_buzz_fibonacci` to make it more descriptive.
- Call our `is_fibonacci_number` helper function.
- If the result from `is_fibonacci_number` is `true`, then return `Fibonacci`.
- If the result from `is_fibonacci_number` is `false`, then perform the same `Fizz`, `Buzz`, `FizzBuzz`, or number logic, returning the result.
Because we renamed `fizz_buzz` to `fizz_buzz_fibonacci`, we also need to update our `play_game` function:
```rust
pub fn play_game(n: u32, print: bool) {
    let result = fizz_buzz_fibonacci(n);
    if print {
        println!("{result}");
    }
}
```
Both our `main` and `bench_play_game` functions can stay exactly the same.
Benchmarking FizzBuzzFibonacci
Now we can rerun our benchmark:
```
$ cargo bench
   Compiling playground v0.0.1 (/home/bencher)
    Finished bench [optimized] target(s) in 4.79s
     Running unittests src/main.rs (target/release/deps/game-68f58c96f4025bd4)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/main.rs (target/release/deps/game-043972c4132076a9)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running benches/play_game.rs (target/release/deps/play_game-e0857103eb02eb56)
bench_play_game         time:   [20.067 µs 20.107 µs 20.149 µs]
                        change: [+557.22% +568.69% +577.93%] (p = 0.00 < 0.05)
                        Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
  4 (4.00%) high mild
  2 (2.00%) high severe
```
Oh, neat! Criterion tells us the difference between the performance of our FizzBuzz and FizzBuzzFibonacci games is `+568.69%`. Your numbers will be a little different than mine. However, the difference between the two games is likely in the `5x` range.
That seems good to me! Especially for adding a feature as fancy sounding as Fibonacci to our game.
The kids will love it!
Expand FizzBuzzFibonacci in Rust
Our game is a hit! The kids do indeed love playing FizzBuzzFibonacci.
So much so that word has come down from the execs that they want a sequel.
But this is the modern world: we need Annual Recurring Revenue (ARR), not one-time purchases! The new vision for our game is that it is open ended: no more living between the bounds of `1` and `100` (even if they are inclusive).
No, we’re on to new frontiers!
The rules for Open World FizzBuzzFibonacci are as follows:
Write a program that takes in any positive integer and prints:
- For multiples of three, print `Fizz`
- For multiples of five, print `Buzz`
- For multiples of both three and five, print `FizzBuzz`
- For numbers that are part of the Fibonacci sequence, only print `Fibonacci`
- For all others, print the number
In order to have our game work for any number, we will need to accept a command line argument.
Update the `main` function to look like this:
```rust
fn main() {
    let args: Vec<String> = std::env::args().collect();
    let i = args
        .get(1)
        .map(|s| s.parse::<u32>())
        .unwrap_or(Ok(15))
        .unwrap_or(15);
    play_game(i, true);
}
```
- Collect all of the arguments (`args`) passed to our game from the command line.
- Get the first argument passed to our game and parse it as an unsigned integer `i`.
- If parsing fails or no argument is passed in, default to playing our game with `15` as the input.
- Finally, play our game with the newly parsed unsigned integer `i`.
Now we can play our game with any number!
Use `cargo run` followed by `--` to pass arguments to our game:
```
$ cargo run -- 9
   Compiling playground v0.0.1 (/home/bencher)
    Finished dev [unoptimized + debuginfo] target(s) in 0.44s
     Running `target/debug/game 9`
Fizz

$ cargo run -- 10
    Finished dev [unoptimized + debuginfo] target(s) in 0.03s
     Running `target/debug/game 10`
Buzz

$ cargo run -- 13
    Finished dev [unoptimized + debuginfo] target(s) in 0.04s
     Running `target/debug/game 13`
Fibonacci
```
And if we omit or provide an invalid number:
```
$ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.03s
     Running `target/debug/game`
FizzBuzz

$ cargo run -- bad
    Finished dev [unoptimized + debuginfo] target(s) in 0.05s
     Running `target/debug/game bad`
FizzBuzz
```
Wow, that was some thorough testing! CI passes. Our bosses are thrilled. Let’s ship it! 🚀
The End


🐰 … the end of your career maybe?
Just kidding! Everything is on fire! 🔥
Well, at first everything seemed to be going fine. And then at 02:07 AM on Saturday my pager went off:
📟 Your game is on fire! 🔥
After scrambling out of bed, I tried to figure out what was going on. I tried to search through the logs, but that was hard because everything kept crashing. Finally, I found the issue. The kids! They loved our game so much, they were playing it all the way up to a million! In a flash of brilliance, I added two new benchmarks:
```rust
fn bench_play_game_100(c: &mut Criterion) {
    c.bench_function("bench_play_game_100", |b| {
        b.iter(|| std::hint::black_box(play_game(100, false)));
    });
}

fn bench_play_game_1_000_000(c: &mut Criterion) {
    c.bench_function("bench_play_game_1_000_000", |b| {
        b.iter(|| std::hint::black_box(play_game(1_000_000, false)));
    });
}
```
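These live in `benches/play_game.rs` alongside `bench_play_game`, so the `criterion_group!` call at the bottom of that file presumably needs to list them as well, something like:

```rust
criterion_group!(
    benches,
    bench_play_game,
    bench_play_game_100,
    bench_play_game_1_000_000,
);
criterion_main!(benches);
```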
- A micro-benchmark `bench_play_game_100` for playing the game with the number one hundred (`100`)
- A micro-benchmark `bench_play_game_1_000_000` for playing the game with the number one million (`1_000_000`)
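An alternative sketch (not what I reached for at 2 AM) would be a single parameterized benchmark using Criterion's `BenchmarkId`, which keeps the two input sizes together in one group; the group name is illustrative:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use game::play_game;

fn bench_play_game_sizes(c: &mut Criterion) {
    let mut group = c.benchmark_group("play_game_sizes");
    for n in [100u32, 1_000_000] {
        // One benchmark per input size, e.g. "play_game_sizes/100".
        group.bench_with_input(BenchmarkId::from_parameter(n), &n, |b, &n| {
            b.iter(|| std::hint::black_box(play_game(n, false)));
        });
    }
    group.finish();
}

criterion_group!(benches, bench_play_game_sizes);
criterion_main!(benches);
```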
When I ran it, I got this:
```
$ cargo bench
   Compiling playground v0.0.1 (/home/bencher)
    Finished bench [optimized] target(s) in 4.79s
     Running unittests src/main.rs (target/release/deps/game-68f58c96f4025bd4)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/main.rs (target/release/deps/game-043972c4132076a9)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running benches/play_game.rs (target/release/deps/play_game-e0857103eb02eb56)
bench_play_game         time:   [20.024 µs 20.058 µs 20.096 µs]
                        change: [-0.0801% +0.1431% +0.3734%] (p = 0.21 > 0.05)
                        No change in performance detected.
Found 17 outliers among 100 measurements (17.00%)
  9 (9.00%) high mild
  8 (8.00%) high severe

bench_play_game_100     time:   [403.00 ns 403.57 ns 404.27 ns]
Found 13 outliers among 100 measurements (13.00%)
  6 (6.00%) high mild
  7 (7.00%) high severe
```
Wait for it… wait for it…
```
bench_play_game_1_000_000
                        time:   [9.5865 ms 9.5968 ms 9.6087 ms]
Found 16 outliers among 100 measurements (16.00%)
  8 (8.00%) high mild
  8 (8.00%) high severe
```
What! Going from `100` to `1_000_000` is a 10,000x larger input, so `403.57 ns` x `10,000` should be about `4,035,700 ns`, not `9,596,800 ns` (`9.5968 ms` x `1_000_000 ns/1 ms`) 🤯
Even though I got my Fibonacci sequence code functionally correct, I must have a performance bug in there somewhere.
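To convince myself that the extra work really does blow up super-linearly, a quick counting sketch like this (hypothetical, not something from that night) makes the shape of the problem obvious:

```rust
// Count how many inner-loop steps the buggy is_fibonacci_number performs for a given n.
fn count_inner_steps(n: u32) -> u64 {
    let mut steps = 0u64;
    for i in 0..=n {
        let (mut previous, mut current) = (0u64, 1u64);
        while current < u64::from(i) {
            let next = previous + current;
            previous = current;
            current = next;
            steps += 1;
        }
    }
    steps
}

fn main() {
    // The work grows roughly like n * log(n), so going from 100 to 1_000_000
    // multiplies the inner-loop steps by far more than 10,000x.
    println!("n = 100: {} steps", count_inner_steps(100));
    println!("n = 1_000_000: {} steps", count_inner_steps(1_000_000));
}
```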
Fix FizzBuzzFibonacci in Rust
Let's take another look at that `is_fibonacci_number` function:
```rust
fn is_fibonacci_number(n: u32) -> bool {
    for i in 0..=n {
        let (mut previous, mut current) = (0, 1);
        while current < i {
            let next = previous + current;
            previous = current;
            current = next;
        }
        if current == n {
            return true;
        }
    }
    false
}
```
Now that I'm thinking about performance, I realize that I have an unnecessary, extra loop. We can completely get rid of the `for i in 0..=n {}` loop and just compare the `current` value to the given number (`n`) 🤦
```rust
fn is_fibonacci_number(n: u32) -> bool {
    let (mut previous, mut current) = (0, 1);
    while current < n {
        let next = previous + current;
        previous = current;
        current = next;
    }
    current == n
}
```
- Update our `is_fibonacci_number` function.
- Initialize our Fibonacci sequence starting with `0` and `1` as the `previous` and `current` numbers respectively.
- Iterate while the `current` number is less than the given number `n`.
- Add the `previous` and `current` numbers to get the `next` number.
- Update the `previous` number to the `current` number.
- Update the `current` number to the `next` number.
- Once `current` is greater than or equal to the given number `n`, we will exit the loop.
- Check to see if the `current` number is equal to the given number `n` and return that result.
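To keep a future refactor from breaking correctness the way this one broke performance, I could also pin the behavior down with a quick unit test. A sketch, placed in a `#[cfg(test)]` module inside `lib.rs` so the private `is_fibonacci_number` is visible (the test name and cases are my own):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn detects_fibonacci_numbers() {
        for n in [1, 2, 3, 5, 8, 13, 21, 34, 55, 89] {
            assert!(is_fibonacci_number(n));
        }
        for n in [4, 6, 7, 9, 10, 100] {
            assert!(!is_fibonacci_number(n));
        }
    }
}
```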
Now let's rerun those benchmarks and see how we did:
```
$ cargo bench
   Compiling playground v0.0.1 (/home/bencher)
    Finished bench [optimized] target(s) in 4.79s
     Running unittests src/main.rs (target/release/deps/game-68f58c96f4025bd4)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/main.rs (target/release/deps/game-043972c4132076a9)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running benches/play_game.rs (target/release/deps/play_game-e0857103eb02eb56)
bench_play_game         time:   [3.1201 µs 3.1772 µs 3.2536 µs]
                        change: [-84.469% -84.286% -84.016%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
  1 (1.00%) high mild
  4 (4.00%) high severe

bench_play_game_100     time:   [24.460 ns 24.555 ns 24.650 ns]
                        change: [-93.976% -93.950% -93.927%] (p = 0.00 < 0.05)
                        Performance has improved.

bench_play_game_1_000_000
                        time:   [30.260 ns 30.403 ns 30.564 ns]
                        change: [-100.000% -100.000% -100.000%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
  1 (1.00%) high mild
  3 (3.00%) high severe
```
Oh, wow! Our `bench_play_game` benchmark is back down to around where it was for the original FizzBuzz. I wish I could remember exactly what that score was. It's been three weeks though. My terminal history doesn't go back that far. And Criterion only compares against the most recent result. But I think it's close!

The `bench_play_game_100` benchmark is down nearly 10x, `-93.950%`. And the `bench_play_game_1_000_000` benchmark is down more than 10,000x! `9,596,800 ns` to `30.403 ns`! We even maxed out Criterion's change meter, which only goes up to `-100.000%`!
🐰 Hey, at least we caught this performance bug before it made it to production… oh, right. Nevermind…
Catch Performance Regressions in CI
The execs weren’t happy about the deluge of negative reviews our game received due to my little performance bug. They told me not to let it happen again, and when I asked how, they just told me not to do it again. How am I supposed to manage that‽
Luckily, I’ve found this awesome open source tool called Bencher. There’s a super generous free tier, so I can just use Bencher Cloud for my personal projects. And at work where everything needs to be in our private cloud, I’ve started using Bencher Self-Hosted.
Bencher has built-in adapters, so it's easy to integrate into CI. After following the Quick Start guide, I'm able to run my benchmarks and track them with Bencher.
```
$ bencher run --project game "cargo bench"
    Finished bench [optimized] target(s) in 0.07s
     Running unittests src/lib.rs (target/release/deps/game-13f4bad779fbfde4)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running unittests src/main.rs (target/release/deps/game-043972c4132076a9)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

     Running benches/play_game.rs (target/release/deps/play_game-e0857103eb02eb56)
Gnuplot not found, using plotters backend
bench_play_game         time:   [3.0713 µs 3.0902 µs 3.1132 µs]
Found 16 outliers among 100 measurements (16.00%)
  3 (3.00%) high mild
  13 (13.00%) high severe

bench_play_game_100     time:   [23.938 ns 23.970 ns 24.009 ns]
Found 15 outliers among 100 measurements (15.00%)
  5 (5.00%) high mild
  10 (10.00%) high severe

bench_play_game_1_000_000
                        time:   [30.004 ns 30.127 ns 30.279 ns]
Found 5 outliers among 100 measurements (5.00%)
  1 (1.00%) high mild
  4 (4.00%) high severe

Bencher New Report:
...
View results:
- bench_play_game (Latency): https://bencher.dev/console/projects/game/perf?measures=52507e04-ffd9-4021-b141-7d4b9f1e9194&branches=3a27b3ce-225c-4076-af7c-75adbc34ef9a&testbeds=bc05ed88-74c1-430d-b96a-5394fdd18bb0&benchmarks=077449e5-5b45-4c00-bdfb-3a277413180d&start_time=1697224006000&end_time=1699816009000&upper_boundary=true
- bench_play_game_100 (Latency): https://bencher.dev/console/projects/game/perf?measures=52507e04-ffd9-4021-b141-7d4b9f1e9194&branches=3a27b3ce-225c-4076-af7c-75adbc34ef9a&testbeds=bc05ed88-74c1-430d-b96a-5394fdd18bb0&benchmarks=96508869-4fa2-44ac-8e60-b635b83a17b7&start_time=1697224006000&end_time=1699816009000&upper_boundary=true
- bench_play_game_1_000_000 (Latency): https://bencher.dev/console/projects/game/perf?measures=52507e04-ffd9-4021-b141-7d4b9f1e9194&branches=3a27b3ce-225c-4076-af7c-75adbc34ef9a&testbeds=bc05ed88-74c1-430d-b96a-5394fdd18bb0&benchmarks=ff014217-4570-42ea-8813-6ed0284500a4&start_time=1697224006000&end_time=1699816009000&upper_boundary=true
```
Using this nifty time travel device that a nice rabbit gave me, I was able to go back in time and replay what would have happened if we were using Bencher all along. You can see where we first pushed the buggy FizzBuzzFibonacci implementation. I immediately got failures in CI as a comment on my pull request. That same day, I fixed the performance bug, getting rid of that needless, extra loop. No fires. Just happy users.
Bencher: Continuous Benchmarking
Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.
- Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The `bencher` CLI simply wraps your existing benchmark harness and stores its results.
- Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
- Catch: Catch performance regressions in CI. Bencher uses state of the art, customizable analytics to detect performance regressions before they make it to production.
For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!
Start catching performance regressions in CI — try Bencher Cloud for free.