Quick Start
What is Bencher?
Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.
- Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The `bencher` CLI simply wraps your existing benchmark harness and stores its results.
- Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
- Catch: Catch performance regressions in CI. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they make it to production.
For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!
Install `bencher` CLI
Linux, Mac, & Unix
For Linux, Mac, and other Unix-like systems run the following in your terminal:
Windows
For Windows systems run the following in a PowerShell terminal:
🐰 If you get an error that says `running scripts is disabled on this system`:
- Open PowerShell with `Run as Administrator`
- Run: `Set-ExecutionPolicy -ExecutionPolicy RemoteSigned`
- Enter: `Y`
- Rerun this script
For additional installation options, see how to install the `bencher` CLI.
Now, let’s test that we have the `bencher` CLI installed.
Run: `bencher --version`
You should see:
Create a Bencher Cloud Account
Bencher is open source and self-hostable. If you are interested in self-hosting, check out the Bencher Docker tutorial. For this tutorial though, we are going to use Bencher Cloud.
Sign up for Bencher Cloud
Once you have created an account, you will need to confirm your email address. Check your email for a confirmation link. After that, you should be logged in to Bencher Cloud.
Create an API Token
In order to use the Bencher API, you will need to create an API token.
Navigate to the Bencher Console.
Hover over your name in the top right corner.
A dropdown menu should appear. Select `Tokens`.
Once on the API Tokens page, click the `➕ Add` button.
Add an API Token
Once you have created your new API token, you will need to copy it to your clipboard. In the terminal you plan to work in, export the API token as an environment variable.
On Linux, Mac, and other Unix-like systems run: `export BENCHER_API_TOKEN=YOUR_TOKEN`
On Windows run: `$env:BENCHER_API_TOKEN = "YOUR_TOKEN"`
If you then run `echo $BENCHER_API_TOKEN` or `Write-Output $env:BENCHER_API_TOKEN`, respectively, you should see:
🐰 Note: If you move to a different terminal, you will need to export the API token again.
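The export-and-check round trip above can be sketched end to end in a Unix shell (`YOUR_TOKEN` is a placeholder, not a real API token):

```shell
# Export the API token for the current shell session only.
# YOUR_TOKEN is a placeholder; paste your real token instead.
export BENCHER_API_TOKEN=YOUR_TOKEN

# Confirm the variable is set; this prints the token value.
echo $BENCHER_API_TOKEN
```

To avoid re-exporting in every new terminal, you could add the `export` line to your shell profile (e.g. `~/.bashrc`), keeping in mind that the token would then be stored in plain text.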
Create a Project
Now that we have a user account and API token, we can create a Project. First, we need to know which Organization our new Project will belong to.
Run: `bencher org list`
You should see something like:
Your output should be slightly different from the above:
- The `uuid` is pseudorandom
- The `name` and `slug` will be based on your username
- The `created` and `modified` timestamps will be from when you just signed up
We can now create a new Project inside of your Organization.
Substitute your Organization `slug` for the `organization` argument (i.e. `YOUR_ORG_SLUG`) in the command below.
Run: `bencher project create YOUR_ORG_SLUG --name "Save Walter White" --url http://www.savewalterwhite.com`
You should see something like:
Again, your output should be slightly different from the above. It’s just important that this command works.
Take note of the Project `slug` field (i.e. `save-walter-white-1234abcd`).
Run a Report
We are finally ready to collect some benchmark metrics! For simplicity’s sake, we will be using mock data in this tutorial.
Run: `bencher mock`
You should see something like:
Your output should be slightly different from the above, as the data are pseudorandom. It’s just important that this command works.
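`bencher mock` emits its results as Bencher Metric Format (BMF) JSON. The snippet below is a hand-written sketch of that shape rather than real output: benchmark names map to measures (such as latency), and each measure carries a value with optional lower and upper bounds. All names and numbers here are illustrative.

```shell
# Print a hand-written sketch of Bencher Metric Format (BMF) JSON.
# Benchmark names map to measures; each measure has a value and
# optional lower_value/upper_value bounds. The numbers are made up.
cat <<'EOF'
{
  "bencher::mock_0": {
    "latency": {
      "value": 3445.6,
      "lower_value": 3101.1,
      "upper_value": 3790.2
    }
  }
}
EOF
```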
Now let’s run a report using mock benchmark metric data.
Substitute your Project `slug` for the `--project` argument (i.e. `YOUR_PROJECT_SLUG`) in the command below.
Run: `bencher run --project YOUR_PROJECT_SLUG "bencher mock"`
You should see something like:
You can now view the results from each of the benchmarks in the browser.
Click or copy and paste the links from `View results`.
There should only be a single data point for each benchmark, so let’s add some more data!
First, let’s set our Project slug as an environment variable, so we don’t have to provide the `--project` argument on every single run.
Run: `export BENCHER_PROJECT=save-walter-white-1234abcd`
If you then run: `echo $BENCHER_PROJECT`
You should see:
Let’s rerun the same command again without `--project` to generate more data.
Run: `bencher run "bencher mock"`
Now, let’s generate more data, but this time we will pipe our results into `bencher run`.
Run: `bencher mock | bencher run`
Sometimes you may want to save your results to a file and have `bencher run` pick them up.
Run: `bencher run --file results.json "bencher mock > results.json"`
Likewise, you may have a separate process run your benchmarks and save your results to a file. Then `bencher run` will just pick them up.
Run: `bencher mock > results.json && bencher run --file results.json`
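The file hand-off in the two commands above can be sketched with plain shell, with `echo` and `cat` standing in for the `bencher` CLI and an illustrative BMF-style JSON stub as the results:

```shell
# Stand-in for a benchmark process writing its results to a file
# (the JSON is an illustrative BMF-style stub, not real output).
echo '{"my_benchmark": {"latency": {"value": 100.0}}}' > results.json

# Stand-in for `bencher run --file results.json` picking the file up.
cat results.json
```

The point is simply that the producer and consumer share only a file path, so the benchmark run and the result upload can happen in separate processes or even separate CI steps.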
Finally, let’s seed a lot of data using the `bencher run --iter` argument.
Run: `bencher run --iter 16 "bencher mock"`
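The `--iter 16` argument tells `bencher run` to execute the benchmark command multiple times. The effect is similar in spirit to looping yourself, sketched here in plain shell with `echo` standing in for the benchmark command:

```shell
# Plain-shell sketch of running a stand-in "benchmark" 16 times,
# similar in spirit to `bencher run --iter 16 "bencher mock"`.
for i in $(seq 1 16); do
  echo "iteration $i"
done
```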
🐰 Tip: Check out the `bencher run` CLI Subcommand docs for a full overview of all that `bencher run` can do!
Generate an Alert
Now that we have some historical data for our benchmarks, let’s generate an Alert! Alerts are generated when a benchmark result is determined to be a performance regression. So let’s simulate a performance regression!
Run: `bencher run "bencher mock --pow 8"`
There should be a new section at the end of the output called `View alerts`:
You can now view the Alerts for each benchmark in the browser.
Click or copy and paste the links from `View alerts`.
🐰 Tip: Check out the Threshold & Alerts docs for a full overview of how performance regressions are detected!
🐰 Congrats! You caught your first performance regression! 🎉