Quick Start


What is Bencher?

Bencher is a suite of continuous benchmarking tools. Have you ever had a performance regression impact your users? Bencher could have prevented that from happening. Bencher allows you to detect and prevent performance regressions before they make it to production.

  • Run: Run your benchmarks locally or in CI using your favorite benchmarking tools. The bencher CLI simply wraps your existing benchmark harness and stores its results.
  • Track: Track the results of your benchmarks over time. Monitor, query, and graph the results using the Bencher web console based on the source branch, testbed, and metric kind.
  • Catch: Catch performance regressions in CI. Bencher uses state-of-the-art, customizable analytics to detect performance regressions before they make it to production.

For the same reasons that unit tests are run in CI to prevent feature regressions, benchmarks should be run in CI with Bencher to prevent performance regressions. Performance bugs are bugs!
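
For a concrete sense of the Run step: bencher run wraps whatever command already runs your benchmarks. As an illustrative sketch, a Rust project that benchmarks with cargo bench might be wrapped like this (the project slug is a placeholder, and you would pick the adapter that matches your harness per the bencher run docs):

bencher run --project YOUR_PROJECT_SLUG "cargo bench"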


Install bencher CLI

To install the bencher CLI, you will need cargo. Check to see if you already have cargo installed.

Run: cargo --version

You should see something like:

$ cargo --version
cargo 1.65.0 (4bc8f24d3 2022-10-20)

It is okay if your version number is different. It’s just important that this command works. If not, follow the instructions for installing cargo via rustup.

On Linux or macOS, run:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
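
After the installer finishes, you may need to restart your shell or source cargo's environment script so that cargo is on your PATH (the path below is rustup's default install location):

source "$HOME/.cargo/env"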

With cargo installed, we can install the bencher CLI.

Run:

cargo install --git https://github.com/bencherdev/bencher --tag v0.3.18 --locked bencher_cli

You should see something like:

$ cargo install --git https://github.com/bencherdev/bencher --tag v0.3.18 --locked bencher_cli
  Installing bencher_cli v0.3.18 (/workspace/bencher/services/cli)
    Updating crates.io index

    Finished release [optimized] target(s) in 0.27s
  Installing /workspace/.cargo/bin/bencher
   Installed package `bencher_cli v0.3.18 (/workspace/bencher/services/cli)` (executable `bencher`)

Again, it is okay if your output is different. It’s just important that this command works.


Finally, let’s test that we have the bencher CLI installed.

Run: bencher --version

You should see:

$ bencher --version
bencher 0.3.18

Create a Bencher Cloud Account

Bencher is open source and self-hostable. If you are interested in self-hosting, check out the Bencher Docker tutorial. For this tutorial though, we are going to use Bencher Cloud.

Sign up for Bencher Cloud


Once you have created an account, you will need to confirm your email address. Check your email for a confirmation link. After that, you should be logged in to Bencher Cloud.


Create an API Token

In order to use the Bencher API, you will need to create an API token. Navigate to the Bencher Console. Hover over your name in the top right corner. A dropdown menu should appear. Select Tokens. Once on the API Tokens page, click the ➕ Add button.

Add an API Token


Once you have created your new API token, you will need to copy it to your clipboard. In the terminal you plan to work in, export the API token as an environment variable.

Run: export BENCHER_API_TOKEN=YOUR_TOKEN

If you then run: echo $BENCHER_API_TOKEN

You should see:

$ echo $BENCHER_API_TOKEN
YOUR_TOKEN

🐰 Note: If you move to a different terminal, you will need to export the API token again.
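
If you want the token to survive new terminal sessions, you can append the export to your shell profile instead (shown here for bash; adjust the file for your shell):

echo 'export BENCHER_API_TOKEN=YOUR_TOKEN' >> ~/.bashrc
source ~/.bashrc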


Create a Project

Now that we have a user account and API token, we can create a Project. First, we need to know which organization our new project will belong to.

Run: bencher org ls

You should see something like:

$ bencher org ls
[
  {
    "name": "Saul Goodman",
    "slug": "saul-goodman",
    "uuid": "4581feb0-6cac-40a9-bd8a-d7865183b01e"
    "created": "2022-07-06T11:24:36Z",
    "modified": "2022-07-06T11:24:36Z"
  }
]

Your output should be slightly different than the above:

  • The uuid is pseudorandom
  • The name and slug will be based on your username
  • The created and modified timestamps will be from when you just signed up
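
If you happen to have jq installed, you can also pull the slug straight out of that JSON instead of copying it by hand (an optional convenience, assuming you only have the one organization):

bencher org ls | jq -r '.[0].slug'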

We can now create a new Project inside your Organization. Substitute your Organization slug for the --org argument (i.e. YOUR_ORG_SLUG) in the command below.

Run: bencher project create --org YOUR_ORG_SLUG --url http://www.savewalterwhite.com "Save Walter White"

You should see something like:

$ bencher project create --org saul-goodman --url http://www.savewalterwhite.com "Save Walter White"
{
  "organization": "4581feb0-6cac-40a9-bd8a-d7865183b01e",
  "name": "Save Walter White",
  "slug": "save-walter-white-12345",
  "uuid": "c6c2a8e8-685e-4413-9a19-5b79053a71b1"
  "url": "http://www.savewalterwhite.com",
  "public": true,
  "created": "2022-07-06T11:36:24Z",
  "modified": "2022-07-06T11:36:24Z"
}

Again, your output should be slightly different than the above. It’s just important that this command works. Take note of the Project slug field (i.e. save-walter-white-12345).


Run a Report

We are finally ready to collect some benchmark metrics! For simplicity’s sake, we will be using mock data in this tutorial.

Run: bencher mock

You should see something like:

$ bencher mock
{
  "bencher::mock_0": {
    "latency": {
      "value": 3.7865423396154463,
      "lower_value": 3.4078881056539014,
      "upper_value": 4.165196573576991
    }
  },
  "bencher::mock_1": {
    "latency": {
      "value": 16.398332128878437,
      "lower_value": 14.758498915990593,
      "upper_value": 18.03816534176628
    }
  },
  "bencher::mock_2": {
    "latency": {
      "value": 20.88091359871672,
      "lower_value": 18.792822238845048,
      "upper_value": 22.969004958588393
    }
  },
  "bencher::mock_3": {
    "latency": {
      "value": 33.88103801203782,
      "lower_value": 30.492934210834036,
      "upper_value": 37.2691418132416
    }
  },
  "bencher::mock_4": {
    "latency": {
      "value": 40.90515638867921,
      "lower_value": 36.81464074981129,
      "upper_value": 44.99567202754713
    }
  }
}

Your output should be slightly different than the above, as the data are pseudorandom. It’s just important that this command works.


Now let’s run a report using mock benchmark metric data. Substitute your Project slug for the --project argument (i.e. YOUR_PROJECT_SLUG) in the command below.

Run: bencher run --project YOUR_PROJECT_SLUG "bencher mock"

You should see something like:

$ bencher run --project save-walter-white-12345 "bencher mock"
{
  "bencher::mock_0": {
    "latency": {
      "value": 0.15496641529475275,
      "lower_value": 0.13946977376527747,
      "upper_value": 0.17046305682422802
    }
  },
  "bencher::mock_1": {
    "latency": {
      "value": 18.648298578180437,
      "lower_value": 16.783468720362393,
      "upper_value": 20.513128435998482
    }
  },
  "bencher::mock_2": {
    "latency": {
      "value": 28.20328182167366,
      "lower_value": 25.382953639506294,
      "upper_value": 31.023610003841025
    }
  },
  "bencher::mock_3": {
    "latency": {
      "value": 34.45732560787596,
      "lower_value": 31.01159304708836,
      "upper_value": 37.903058168663556
    }
  },
  "bencher::mock_4": {
    "latency": {
      "value": 44.9237520767597,
      "lower_value": 40.43137686908373,
      "upper_value": 49.41612728443567
    }
  }
}

{
  "branch": "master",
  "end_time": "2023-07-18T14:21:27.796871Z",
  "results": [
    "{\n  \"bencher::mock_0\": {\n    \"latency\": {\n      \"value\": 0.15496641529475275,\n      \"lower_value\": 0.13946977376527747,\n      \"upper_value\": 0.17046305682422802\n    }\n  },\n  \"bencher::mock_1\": {\n    \"latency\": {\n      \"value\": 18.648298578180437,\n      \"lower_value\": 16.783468720362393,\n      \"upper_value\": 20.513128435998482\n    }\n  },\n  \"bencher::mock_2\": {\n    \"latency\": {\n      \"value\": 28.20328182167366,\n      \"lower_value\": 25.382953639506294,\n      \"upper_value\": 31.023610003841025\n    }\n  },\n  \"bencher::mock_3\": {\n    \"latency\": {\n      \"value\": 34.45732560787596,\n      \"lower_value\": 31.01159304708836,\n      \"upper_value\": 37.903058168663556\n    }\n  },\n  \"bencher::mock_4\": {\n    \"latency\": {\n      \"value\": 44.9237520767597,\n      \"lower_value\": 40.43137686908373,\n      \"upper_value\": 49.41612728443567\n    }\n  }\n}\n"
  ],
  "settings": {},
  "start_time": "2023-07-18T14:21:27.773930Z",
  "testbed": "base"
}
{
  "uuid": "5554a92c-6f5c-481d-bd47-990f0a9bac6d",
  "user": { ... },
  "project": { ... },
  "branch": { ... },
  "testbed": { ...},
  "start_time": "2023-07-18T14:21:27Z",
  "end_time": "2023-07-18T14:21:27Z",
  "adapter": "magic",
  "results": [
    [
      {
        "metric_kind": { ... },
        "threshold": null,
        "benchmarks": [ ... ]
      }
    ]
  ],
  "alerts": [],
  "created": "2023-07-18T14:21:27Z"
}

View results:
- bencher::mock_0: https://bencher.dev/console/projects/save-walter-white-12345/perf?metric_kinds=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=88375e7c-f1e0-4cbb-bde1-bdb7773022ae
- bencher::mock_1: https://bencher.dev/console/projects/save-walter-white-12345/perf?metric_kinds=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=e81c7863-cc4b-4e22-b507-c1e238871137
- bencher::mock_2: https://bencher.dev/console/projects/save-walter-white-12345/perf?metric_kinds=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=31dede44-d23a-4baf-b639-63f2ac742e42
- bencher::mock_3: https://bencher.dev/console/projects/save-walter-white-12345/perf?metric_kinds=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=c7e32369-f3dd-473d-99a3-6289ae32b38e
- bencher::mock_4: https://bencher.dev/console/projects/save-walter-white-12345/perf?metric_kinds=4358146b-b647-4869-9d24-bd22bb0c49b5&branches=95521eff-09fa-4c02-abe1-dd824108869d&testbeds=5b4a6f3e-a27d-4cc3-a2ce-851dc6421e6e&benchmarks=779bc477-4964-4bae-aa8c-4da3e388822c

You can now view the results from each of the benchmarks in the browser. Click or copy and paste the links from View results. There should only be a single data point for each benchmark, so let’s add some more data!


First, let’s set our Project slug as an environment variable, so we don’t have to provide it with --project on every run.

Run: export BENCHER_PROJECT=save-walter-white-12345

If you then run: echo $BENCHER_PROJECT

You should see:

$ echo $BENCHER_PROJECT
save-walter-white-12345

Let’s run the same command again without --project to generate more data.

Run: bencher run "bencher mock"


Now, let’s generate more data, but this time we will pipe our results into bencher run.

Run: bencher mock | bencher run


Sometimes you may want to save your results to a file and have bencher run pick them up.

Run: bencher run "bencher mock > results.json" --file results.json


Likewise, you may have a separate process run your benchmarks and save your results to a file. Then bencher run will just pick them up.

Run: bencher mock > results.json && bencher run --file results.json
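
Since the results file is just the JSON your benchmark command prints, you could even write one by hand in the same shape as the bencher mock output. A minimal sketch (the benchmark name and values here are made up):

cat > results.json <<'EOF'
{
  "my_benchmark": {
    "latency": {
      "value": 10.0,
      "lower_value": 9.5,
      "upper_value": 10.5
    }
  }
}
EOF
bencher run --file results.json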


Finally, let’s seed a lot of data using the bencher run --iter argument.

Run: bencher run --iter 16 "bencher mock"
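
If you ever need something similar without --iter, a plain shell loop works too; each iteration below generates its own report:

for i in $(seq 1 16); do
  bencher run "bencher mock"
done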


🐰 Tip: Check out the bencher run CLI Subcommand docs for a full overview of all that bencher run can do!
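
For example, two flags you will likely reach for are --branch and --testbed, which set the branch and testbed that a report is filed under. An illustrative invocation (the values are placeholders; see the docs for how branches and testbeds are set up):

bencher run --branch main --testbed my-testbed "bencher mock"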


Generate an Alert

Now that we have some historical data for our benchmarks, let’s generate an Alert! Alerts are generated when a benchmark result is determined to be a performance regression. So let’s simulate a performance regression!

Run: bencher run "bencher mock --pow 8"


There should be a new section at the end of the output called View alerts:

View alerts:
- bencher::mock_0: https://bencher.dev/console/projects/save-walter-white-12345/alerts/b2329d5a-4471-48ab-bfbd-959d46ba1aa6
- bencher::mock_1: https://bencher.dev/console/projects/save-walter-white-12345/alerts/181b1cf5-d984-402a-b0f1-68f6f119fa66
- bencher::mock_2: https://bencher.dev/console/projects/save-walter-white-12345/alerts/b9b6c904-c657-4908-97db-dbeca40f8782
- bencher::mock_3: https://bencher.dev/console/projects/save-walter-white-12345/alerts/5567ff32-2829-4b6a-969a-af33ce3828db
- bencher::mock_4: https://bencher.dev/console/projects/save-walter-white-12345/alerts/49f2768f-ccda-4933-8e1d-08948f57a74d

You can now view the Alerts for each benchmark in the browser. Click or copy and paste the links from View alerts.


🐰 Tip: Check out the Threshold & Alerts docs for a full overview of how performance regressions are detected!



🐰 Congrats! You caught your first performance regression! 🎉


Keep Going: Benchmarking Overview ➡