Bencher Self-Hosted with Docker Quick Start


What is Bencher?

Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.

  • Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The bencher CLI simply wraps your existing benchmark harness and stores its results.
  • Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, benchmark, and measure.
  • Catch: Catch performance regressions in CI. Bencher uses state of the art, customizable analytics to detect performance regressions before they make it to production.

For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!
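For example, if you already run your benchmarks with a command like cargo bench, bencher run simply wraps that existing command (a sketch; my-project is a hypothetical Project slug, and a valid API token is required):

Run: bencher run --project my-project "cargo bench"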


Bencher Self-Hosted

Bencher is open source and self-hostable. If you are interested in using Bencher Cloud, check out the Bencher Cloud Quick Start tutorial. This tutorial will get you set up with Bencher Self-Hosted using Docker.

🐰 Once you feel comfortable using Bencher Self-Hosted, consider checking out the following resources:


Install Docker

To run the UI and API servers in this tutorial, you will need Docker installed. Check to see if you have Docker installed.

Run: docker --version

You should see something like:

$ docker --version
Docker version 20.10.17, build 100c701

It is okay if your version number is different. It’s just important that this command works. If not, follow the instructions for installing Docker.
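Note that having the docker CLI installed does not guarantee the Docker daemon is running. If later commands fail to connect, you can check the daemon directly:

Run: docker info

If docker info reports an error, start Docker Desktop (or the Docker service on Linux) before continuing.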


Install bencher CLI

Linux, Mac, & Unix

For Linux, Mac, and other Unix-like systems run the following in your terminal, with BENCHER_VERSION set to a recent version like 0.4.30:

export BENCHER_VERSION=0.4.30; curl --proto '=https' --tlsv1.2 -sSfL https://bencher.dev/download/install-cli.sh | sh

Windows

For Windows systems run the following in a PowerShell terminal, with BENCHER_VERSION set to a recent version like 0.4.30:

$env:BENCHER_VERSION="0.4.30"; irm https://bencher.dev/download/install-cli.ps1 | iex

🐰 If you get an error that says running scripts is disabled on this system:

  • Open PowerShell with Run as Administrator
  • Run: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
  • Enter: Y
  • Rerun this script
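
If you would rather not change the system-wide execution policy, a session-scoped alternative should also work (a sketch using standard PowerShell options; it only affects the current terminal):

Run: Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process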

For additional installation options, see how to install the bencher CLI.

Now, let's test that the bencher CLI is installed.

Run: bencher --version

You should see:

$ bencher --version
bencher 0.4.30

Run Bencher UI & API Servers

With Docker installed, we can now run the UI and API servers.

Run: bencher up

You should see something like:

$ bencher up
Pulling `ghcr.io/bencherdev/bencher-api:latest` image...
Creating `bencher_api` container...
Starting `bencher_api` container...
Pulling `ghcr.io/bencherdev/bencher-console:latest` image...
Creating `bencher_console` container...
Starting `bencher_console` container...
🐰 Bencher Self-Hosted is up and running!
Web Console: http://localhost:3000
API Server: http://localhost:61016
Press Ctrl+C to stop Bencher Self-Hosted.
🐰 Bencher Self-Hosted logs...
Jan 08 16:49:07.727 INFO 🐰 Bencher API Server v0.4.30
...

Again, it is okay if your output is different. It’s just important that this command works.

🐰 If you get an error from ghcr.io saying "authentication required", try running: docker logout ghcr.io

🐰 On Windows, if you get an error saying image operating system "linux" cannot be used on this platform: operating system is not supported, try running: & 'C:\Program Files\Docker\Docker\DockerCli.exe' -SwitchLinuxEngine
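
When you are done with this tutorial, you can stop Bencher Self-Hosted with Ctrl+C, as noted in the output above. To stop and remove the containers from another terminal, a sketch assuming your CLI version includes the bencher down subcommand:

Run: bencher down

If that subcommand is not available, stopping the containers directly with docker stop bencher_api bencher_console works as well.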

Bencher API Server Logs

Because we haven’t set up email/SMTP on the API server yet, the confirmation codes you receive later in this tutorial will appear in the server logs, as shown above. That is, your authentication credentials will be printed in the output of bencher up.
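
If you lose that terminal window, the same logs should be available from Docker directly, using the API container name shown in the bencher up output (bencher_api):

Run: docker logs --follow bencher_api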


Set Bencher Host

The bencher CLI default host is Bencher Cloud (i.e. https://api.bencher.dev), so we need to point it at your self-hosted instance instead. The easiest way to do this is with the BENCHER_HOST environment variable.

Open a new terminal window.

On Linux, Mac, and other Unix-like systems run: export BENCHER_HOST=http://localhost:61016

On Windows run: $env:BENCHER_HOST = "http://localhost:61016"

If you then run echo $BENCHER_HOST or Write-Output $env:BENCHER_HOST respectively, you should see:

$ echo $BENCHER_HOST
http://localhost:61016
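
If you want to double check that your local API server is reachable at that host, any HTTP response will do. For example, a quick sanity check (the exact response body does not matter):

Run: curl http://localhost:61016

If curl reports "connection refused", make sure bencher up is still running in your other terminal.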

Create a Bencher Self-Hosted Account

Signup for Bencher Self-Hosted


Create an account on your local Bencher Self-Hosted instance by navigating to: http://localhost:3000/auth/signup

Once you have created an account, navigate back to the terminal window where you ran bencher up. You should see something like:

To: Saul Goodman <saul@bettercallsaul.com>
Subject: Confirm Bencher Signup
Body:
Ahoy Saul Goodman,
Please, click the button below or use the provided token to signup for Bencher.
Confirm Email: http://localhost:3000/auth/confirm?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJhdXRoIiwiZXhwIjoxNzA0ODIwODIxLCJpYXQiOjE3MDQ4MTkwMjEsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6MzAwMC8iLCJzdWIiOiJzYXVsQGJldHRlcmNhbGxzYXVsLmNvbSIsIm9yZyI6bnVsbH0.CKW4-MyOqY0AnRbs9h8tBtyAB6ck51PytytTsZSBOiA
Confirmation Token: eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJhdXRoIiwiZXhwIjoxNzA0ODIwODIxLCJpYXQiOjE3MDQ4MTkwMjEsImlzcyI6Imh0dHA6Ly9sb2NhbGhvc3Q6MzAwMC8iLCJzdWIiOiJzYXVsQGJldHRlcmNhbGxzYXVsLmNvbSIsIm9yZyI6bnVsbH0.CKW4-MyOqY0AnRbs9h8tBtyAB6ck51PytytTsZSBOiA
See you soon,
The Bencher Team
Bencher - Continuous Benchmarking
Manage email settings (http://localhost:3000/help)

Navigate to the Confirm Email link in your browser or copy the Confirmation Token into the Confirm Token field at: http://localhost:3000/auth/confirm

After that, you should be logged into your Bencher Self-Hosted account!


Create an API Token

To use the Bencher API, you will need to create an API token. Navigate to the Bencher Console, hover over your name in the top right corner, and a dropdown menu should appear. Select Tokens. Once on the API Tokens page, click the ➕ Add button.

Add an API Token


Once you have created your new API token, you will need to copy it to your clipboard. In the terminal you plan to work in, export the API token as an environment variable.

On Linux, Mac, and other Unix-like systems run: export BENCHER_API_TOKEN=YOUR_TOKEN

On Windows run: $env:BENCHER_API_TOKEN = "YOUR_TOKEN"

If you then run echo $BENCHER_API_TOKEN or Write-Output $env:BENCHER_API_TOKEN respectively, you should see:

$ echo $BENCHER_API_TOKEN
YOUR_TOKEN

🐰 Note: If you move to a different terminal, you will need to export the API token again.
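
If you would rather not re-export these variables in every new terminal, one option is to persist them in your shell profile. A sketch for bash, assuming ~/.bashrc is your profile file (substitute YOUR_TOKEN; remember that an API token is a credential, so only do this on a machine you trust):

Run: echo 'export BENCHER_HOST=http://localhost:61016' >> ~/.bashrc
Run: echo 'export BENCHER_API_TOKEN=YOUR_TOKEN' >> ~/.bashrc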


Create a Project

Now that we have a user account and API token, we can create a Project. First, we need to know which Organization our new Project will belong to.

Run: bencher org list

You should see something like:

$ bencher org list
[
  {
    "name": "Saul Goodman",
    "slug": "saul-goodman",
    "uuid": "4581feb0-6cac-40a9-bd8a-d7865183b01e",
    "created": "2022-07-06T11:24:36Z",
    "modified": "2022-07-06T11:24:36Z"
  }
]

Your output should be slightly different than the above:

  • The uuid is pseudorandom
  • The name and slug will be based on your username
  • The created and modified timestamps will be from when you just signed up

We can now create a new Project inside your Organization. Substitute your Organization slug for the organization argument (i.e. YOUR_ORG_SLUG) in the command below.

Run: bencher project create YOUR_ORG_SLUG --name "Save Walter White" --url http://www.savewalterwhite.com

You should see something like:

$ bencher project create saul-goodman --name "Save Walter White" --url http://www.savewalterwhite.com
{
  "organization": "4581feb0-6cac-40a9-bd8a-d7865183b01e",
  "name": "Save Walter White",
  "slug": "save-walter-white-1234abcd",
  "uuid": "c6c2a8e8-685e-4413-9a19-5b79053a71b1",
  "url": "http://www.savewalterwhite.com",
  "public": true,
  "created": "2022-07-06T11:36:24Z",
  "modified": "2022-07-06T11:36:24Z"
}

Again, your output should be slightly different than the above. It’s just important that this command works. Take note of the Project slug field (i.e. save-walter-white-1234abcd).
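
To double check that your new Project exists, you can list your Projects. A sketch, assuming your CLI version includes the bencher project list subcommand:

Run: bencher project list

You should see your new Project in the output, alongside its slug.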


Run a Report

We are finally ready to collect some benchmark metrics! For simplicity’s sake, we will be using mock data in this tutorial.

Run: bencher mock

You should see something like:

$ bencher mock
{
  "bencher::mock_0": {
    "latency": {
      "value": 3.7865423396154463,
      "lower_value": 3.4078881056539014,
      "upper_value": 4.165196573576991
    }
  },
  "bencher::mock_1": {
    "latency": {
      "value": 16.398332128878437,
      "lower_value": 14.758498915990593,
      "upper_value": 18.03816534176628
    }
  },
  "bencher::mock_2": {
    "latency": {
      "value": 20.88091359871672,
      "lower_value": 18.792822238845048,
      "upper_value": 22.969004958588393
    }
  },
  "bencher::mock_3": {
    "latency": {
      "value": 33.88103801203782,
      "lower_value": 30.492934210834036,
      "upper_value": 37.2691418132416
    }
  },
  "bencher::mock_4": {
    "latency": {
      "value": 40.90515638867921,
      "lower_value": 36.81464074981129,
      "upper_value": 44.99567202754713
    }
  }
}

Your output should be slightly different than the above, as the data are pseudorandom. It’s just important that this command works.


Now let's run a report using mock benchmark metric data. Substitute your Project slug for the --project argument (i.e. YOUR_PROJECT_SLUG) in the command below.

Run: bencher run --project YOUR_PROJECT_SLUG "bencher mock"

You should see something like:

$ bencher run --project save-walter-white-1234abcd "bencher mock"
{
  "bencher::mock_0": {
    "latency": {
      "value": 0.15496641529475275,
      "lower_value": 0.13946977376527747,
      "upper_value": 0.17046305682422802
    }
  },
  "bencher::mock_1": {
    "latency": {
      "value": 18.648298578180437,
      "lower_value": 16.783468720362393,
      "upper_value": 20.513128435998482
    }
  },
  "bencher::mock_2": {
    "latency": {
      "value": 28.20328182167366,
      "lower_value": 25.382953639506294,
      "upper_value": 31.023610003841025
    }
  },
  "bencher::mock_3": {
    "latency": {
      "value": 34.45732560787596,
      "lower_value": 31.01159304708836,
      "upper_value": 37.903058168663556
    }
  },
  "bencher::mock_4": {
    "latency": {
      "value": 44.9237520767597,
      "lower_value": 40.43137686908373,
      "upper_value": 49.41612728443567
    }
  }
}
View results:
- bencher::mock_0: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=88375e7c-f1e0-4cbb-bde1-bdb7773022ae
- bencher::mock_1: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=e81c7863-cc4b-4e22-b507-c1e238871137
- bencher::mock_2: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=31dede44-d23a-4baf-b639-63f2ac742e42
- bencher::mock_3: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=c7e32369-f3dd-473d-99a3-6289ae32b38e
- bencher::mock_4: https://bencher.dev/console/projects/save-walter-white-1234abcd/perf?measures=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=779bc477-4964-4bae-aa8c-4da3e388822c

You can now view the results for each of the benchmarks in the browser. Click or copy and paste the links from View results. There should only be a single data point for each benchmark, so let's add some more data!


First, let's set our Project slug as an environment variable, so we don't have to provide the --project flag on every single run.

On Linux, Mac, and other Unix-like systems run: export BENCHER_PROJECT=save-walter-white-1234abcd

On Windows run: $env:BENCHER_PROJECT = "save-walter-white-1234abcd"

If you then run echo $BENCHER_PROJECT or Write-Output $env:BENCHER_PROJECT respectively, you should see:

$ echo $BENCHER_PROJECT
save-walter-white-1234abcd

Let's rerun the same command without --project to generate more data.

Run: bencher run "bencher mock"


Now, let's generate more data, but this time we will pipe our results into bencher run.

Run: bencher mock | bencher run


Sometimes you may want to save your results to a file and have bencher run pick them up.

Run: bencher run --file results.json "bencher mock > results.json"


Likewise, you may have a separate process run your benchmarks and save your results to a file. Then bencher run will just pick them up.

Run: bencher mock > results.json && bencher run --file results.json


Finally, let's seed a lot of data using the bencher run --iter argument, which runs the benchmark command the given number of times.

Run: bencher run --iter 16 "bencher mock"


🐰 Tip: Check out the bencher run CLI Subcommand docs for a full overview of all that bencher run can do!


Generate an Alert

Now that we have some historical data for our benchmarks, let's generate an Alert! Alerts are generated when a benchmark result is determined to be a performance regression. So let's simulate a performance regression!

Run: bencher run "bencher mock --pow 8"


There should be a new section at the end of the output called View alerts:

View alerts:
- bencher::mock_0: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/b2329d5a-4471-48ab-bfbd-959d46ba1aa6
- bencher::mock_1: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/181b1cf5-d984-402a-b0f1-68f6f119fa66
- bencher::mock_2: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/b9b6c904-c657-4908-97db-dbeca40f8782
- bencher::mock_3: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/5567ff32-2829-4b6a-969a-af33ce3828db
- bencher::mock_4: https://bencher.dev/console/projects/save-walter-white-1234abcd/alerts/49f2768f-ccda-4933-8e1d-08948f57a74d

You can now view the Alerts for each benchmark in the browser. Click or copy and paste the links from View alerts.


🐰 Tip: Check out the Threshold & Alerts docs for a full overview of how performance regressions are detected!



🐰 Congrats! You caught your first performance regression with Bencher Self-Hosted! 🎉


Keep Going: How to Track Benchmarks in CI ➡



Published: Sat, August 12, 2023 at 4:07:00 PM UTC | Last Updated: Fri, November 29, 2024 at 6:30:00 PM UTC