Quick Start


What is Bencher?

Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.

  • Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The bencher CLI simply wraps your existing benchmark harness and stores its results.
  • Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
  • Catch: Catch performance regressions in CI. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they make it to production.

For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!


Install bencher CLI

Linux, Mac, & Unix

For Linux, Mac, and other Unix-like systems run the following in your terminal:

Terminal window
curl --proto '=https' --tlsv1.2 -sSfL https://bencher.dev/download/install-cli.sh | sh

Windows

For Windows systems run the following in a PowerShell terminal:

Terminal window
irm https://bencher.dev/download/install-cli.ps1 | iex

🐰 If you get an error that says running scripts is disabled on this system:

  • Open PowerShell with Run as Administrator
  • Run: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
  • Enter: Y
  • Rerun this script

For additional installation options, see how to install the bencher CLI.

Now, let's test that the bencher CLI is installed.

Run: bencher --version

You should see:

$ bencher --version
bencher 0.4.21

Create a Bencher Cloud Account

Bencher is open source and self-hostable. If you are interested in self-hosting, check out the Bencher Docker tutorial. For this tutorial, though, we are going to use Bencher Cloud.

Sign up for Bencher Cloud


Once you have created an account, you will need to confirm your email address. Check your email for a confirmation link. After that, you should be logged in to Bencher Cloud.


Create an API Token

To use the Bencher API, you will need to create an API token. Navigate to the Bencher Console. Hover over your name in the top right corner. A dropdown menu should appear. Select Tokens. Once on the API Tokens page, click the ➕ Add button.

Add an API Token


Once you have created your new API token, you will need to copy it to your clipboard. In the terminal you plan to work in, export the API token as an environment variable.

On Linux, Mac, and other Unix-like systems run: export BENCHER_API_TOKEN=YOUR_TOKEN

On Windows run: $env:BENCHER_API_TOKEN = "YOUR_TOKEN"

If you then run echo $BENCHER_API_TOKEN (or Write-Output $env:BENCHER_API_TOKEN on Windows), you should see:

$ echo $BENCHER_API_TOKEN
YOUR_TOKEN

🐰 Note: If you move to a different terminal, you will need to export the API token again.
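Because the exported variable only lives in the current shell session, you may want to persist it. One way, assuming a Bash-style shell and using a local env file of my own naming rather than editing your shell profile directly (YOUR_TOKEN remains a placeholder):

```shell
# Store the token in a local env file and source it in each new terminal.
# YOUR_TOKEN is a placeholder, as above; keep this file out of version control.
printf 'export BENCHER_API_TOKEN=YOUR_TOKEN\n' > bencher.env
. ./bencher.env
echo "$BENCHER_API_TOKEN"
```

A shell profile entry works the same way; the env file just keeps the secret scoped to one directory.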


Create a Project

Now that we have a user account and API token, we can create a Project. First, we need to know which Organization our new Project will belong to.

Run: bencher org list

You should see something like:

$ bencher org list
[
  {
    "name": "Saul Goodman",
    "slug": "saul-goodman",
    "uuid": "4581feb0-6cac-40a9-bd8a-d7865183b01e",
    "created": "2022-07-06T11:24:36Z",
    "modified": "2022-07-06T11:24:36Z"
  }
]

Your output should differ slightly from the above:

  • The uuid is pseudorandom
  • The name and slug will be based on your username
  • The created and modified timestamps will be from when you just signed up
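If you want the slug in a script rather than by eye, you can parse the JSON output. A sketch using python3 against a saved copy of the output (orgs.json and its sample contents are stand-ins for your own bencher org list output):

```shell
# Save a sample copy of the `bencher org list` output (yours will differ):
cat > orgs.json <<'EOF'
[
  {
    "name": "Saul Goodman",
    "slug": "saul-goodman",
    "uuid": "4581feb0-6cac-40a9-bd8a-d7865183b01e",
    "created": "2022-07-06T11:24:36Z",
    "modified": "2022-07-06T11:24:36Z"
  }
]
EOF

# Pull out the first organization's slug:
python3 -c 'import json; print(json.load(open("orgs.json"))[0]["slug"])'
# → saul-goodman
```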

We can now create a new Project inside your Organization. Substitute your Organization slug for the organization argument (i.e. YOUR_ORG_SLUG) in the command below.

Run: bencher project create YOUR_ORG_SLUG --name "Save Walter White" --url http://www.savewalterwhite.com

You should see something like:

$ bencher project create saul-goodman --name "Save Walter White" --url http://www.savewalterwhite.com
{
  "organization": "4581feb0-6cac-40a9-bd8a-d7865183b01e",
  "name": "Save Walter White",
  "slug": "save-walter-white-1234abcd",
  "uuid": "c6c2a8e8-685e-4413-9a19-5b79053a71b1",
  "url": "http://www.savewalterwhite.com",
  "public": true,
  "created": "2022-07-06T11:36:24Z",
  "modified": "2022-07-06T11:36:24Z"
}

Again, your output should differ slightly from the above; it's just important that the command works. Take note of the Project slug field (i.e. save-walter-white-1234abcd).


Run a Report

We are finally ready to collect some benchmark metrics! For simplicity’s sake, we will be using mock data in this tutorial.

Run: bencher mock

You should see something like:

$ bencher mock
{
  "bencher::mock_0": {
    "latency": {
      "value": 3.7865423396154463,
      "lower_value": 3.4078881056539014,
      "upper_value": 4.165196573576991
    }
  },
  "bencher::mock_1": {
    "latency": {
      "value": 16.398332128878437,
      "lower_value": 14.758498915990593,
      "upper_value": 18.03816534176628
    }
  },
  "bencher::mock_2": {
    "latency": {
      "value": 20.88091359871672,
      "lower_value": 18.792822238845048,
      "upper_value": 22.969004958588393
    }
  },
  "bencher::mock_3": {
    "latency": {
      "value": 33.88103801203782,
      "lower_value": 30.492934210834036,
      "upper_value": 37.2691418132416
    }
  },
  "bencher::mock_4": {
    "latency": {
      "value": 40.90515638867921,
      "lower_value": 36.81464074981129,
      "upper_value": 44.99567202754713
    }
  }
}

Your output should differ slightly from the above, since the data are pseudorandom; it's just important that the command works.


Now let's run a report using mock benchmark metric data. Substitute your Project slug for the --project argument (i.e. YOUR_PROJECT_SLUG) in the command below.

Run: bencher run --project YOUR_PROJECT_SLUG "bencher mock"

You should see something like:

$ bencher run --project save-walter-white-1234abcd "bencher mock"
{
  "bencher::mock_0": {
    "latency": {
      "value": 0.15496641529475275,
      "lower_value": 0.13946977376527747,
      "upper_value": 0.17046305682422802
    }
  },
  "bencher::mock_1": {
    "latency": {
      "value": 18.648298578180437,
      "lower_value": 16.783468720362393,
      "upper_value": 20.513128435998482
    }
  },
  "bencher::mock_2": {
    "latency": {
      "value": 28.20328182167366,
      "lower_value": 25.382953639506294,
      "upper_value": 31.023610003841025
    }
  },
  "bencher::mock_3": {
    "latency": {
      "value": 34.45732560787596,
      "lower_value": 31.01159304708836,
      "upper_value": 37.903058168663556
    }
  },
  "bencher::mock_4": {
    "latency": {
      "value": 44.9237520767597,
      "lower_value": 40.43137686908373,
      "upper_value": 49.41612728443567
    }
  }
}
View results:
- bencher::mock_0: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=88375e7c-f1e0-4cbb-bde1-bdb7773022ae
- bencher::mock_1: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=e81c7863-cc4b-4e22-b507-c1e238871137
- bencher::mock_2: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=31dede44-d23a-4baf-b639-63f2ac742e42
- bencher::mock_3: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=c7e32369-f3dd-473d-99a3-6289ae32b38e
- bencher::mock_4: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=779bc477-4964-4bae-aa8c-4da3e388822c

You can now view the results for each benchmark in the browser. Click or copy and paste the links from View results. There should only be a single data point for each benchmark, so let's add some more data!


First, let's set our Project slug as an environment variable, so we don't have to provide --project on every single run.

Run: export BENCHER_PROJECT=save-walter-white-1234abcd

If you then run: echo $BENCHER_PROJECT

You should see:

$ echo $BENCHER_PROJECT
save-walter-white-1234abcd

Let's rerun the same command again, without --project, to generate more data.

Run: bencher run "bencher mock"


Now, let's generate more data, but this time we will pipe our results into bencher run.

Run: bencher mock | bencher run


Sometimes you may want to save your results to a file and have bencher run pick them up.

Run: bencher run --file results.json "bencher mock > results.json"


Likewise, you may have a separate process run your benchmarks and save your results to a file. Then bencher run will just pick them up.

Run: bencher mock > results.json && bencher run --file results.json
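That file handoff works with any process that can emit JSON in the same shape bencher mock prints. A minimal sketch of writing such a file by hand, with a made-up benchmark name and illustrative values:

```shell
# Write a minimal results file in the same JSON shape `bencher mock` emits.
# The benchmark name and numbers below are illustrative, not real measurements.
cat > results.json <<'EOF'
{
  "my_benchmark": {
    "latency": {
      "value": 12.5,
      "lower_value": 11.0,
      "upper_value": 14.0
    }
  }
}
EOF

# Sanity-check that the file is valid JSON before handing it to bencher run:
python3 -m json.tool results.json > /dev/null && echo "results.json is valid JSON"
```

Any benchmark harness that can be coaxed into this shape can feed bencher run --file the same way.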


Finally, let's seed a lot of data using the bencher run --iter argument.

Run: bencher run --iter 16 "bencher mock"


🐰 Tip: Check out the bencher run CLI Subcommand docs for a full overview of all that bencher run can do!
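For local scripting, it can be handy to wrap the invocation so the project plumbing lives in one place. A minimal sketch; the track function name is my own, and it relies on the BENCHER_PROJECT variable exported earlier:

```shell
# Hypothetical helper: run a benchmark command through `bencher run`,
# failing fast with a message if BENCHER_PROJECT is not exported.
track() {
  bencher run --project "${BENCHER_PROJECT:?export BENCHER_PROJECT first}" "$@"
}

# Usage: track "bencher mock"
```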


Generate an Alert

Now that we have some historical data for our benchmarks, let's generate an Alert! Alerts are generated when a benchmark result is determined to be a performance regression. So let's simulate one!

Run: bencher run "bencher mock --pow 8"


There should be a new section at the end of the output called View alerts:

View alerts:
- bencher::mock_0: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/b2329d5a-4471-48ab-bfbd-959d46ba1aa6
- bencher::mock_1: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/181b1cf5-d984-402a-b0f1-68f6f119fa66
- bencher::mock_2: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/b9b6c904-c657-4908-97db-dbeca40f8782
- bencher::mock_3: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/5567ff32-2829-4b6a-969a-af33ce3828db
- bencher::mock_4: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/49f2768f-ccda-4933-8e1d-08948f57a74d

You can now view the Alerts for each benchmark in the browser. Click or copy and paste the links from View alerts.


🐰 Tip: Check out the Threshold & Alerts docs for a full overview of how performance regressions are detected!



🐰 Congrats! You caught your first performance regression! 🎉


Keep Going: How to Track Benchmarks in CI ➡



Published: Sat, August 12, 2023 at 9:07:00 PM UTC | Last Updated: Sun, September 29, 2024 at 12:25:00 PM UTC