bencher run CLI Subcommand

bencher run is the most popular CLI subcommand. It is used to run benchmarks and report the results. As such, it is one of the most complicated subcommands. This page will explain the options, flags, and arguments that can be passed to bencher run.

Benchmark Command

The first and only argument to bencher run is the optional benchmark command. This is the command that will be executed, invoking your benchmark harness. It can also be set using the BENCHER_CMD environment variable. The command is executed in a shell, which can be configured with the --shell and --flag options. Its output is parsed by a benchmark harness adapter, which can be set using the --adapter option. However, if the benchmark harness outputs to a file then the --file option must also be used to specify the output file path.

🐰 If your benchmark command is multiple words, then you must wrap it in quotes (e.g. bencher run "bencher mock").
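For example, a minimal invocation might look like this; bencher mock is the CLI's built-in mock benchmark, standing in for your real benchmark command:

```shell
# Run the built-in mock harness and parse its output as Bencher Metric Format JSON.
# The benchmark command is quoted because it is multiple words.
bencher run --adapter json "bencher mock"
```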

The benchmark command can be run multiple times using the --iter option, and those results can be folded into a single result using the --fold option. If any of the iterations fail, then the entire command is considered to have failed unless the --allow-failure flag is set.

If the benchmark command is not specified but the --file option is, then bencher run will read from the output file path instead. If neither the benchmark command nor the --file option is specified, then bencher run will read from stdin instead. This allows you to save the output of another command to a file or pipe it into bencher run, respectively.
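Respectively, those two modes might look like this (a sketch, using the CLI's built-in bencher mock as the harness):

```shell
# Save the harness output to a file, then have bencher run parse that file:
bencher mock > results.json
bencher run --adapter json --file results.json

# Or pipe the harness output straight into bencher run via stdin:
bencher mock | bencher run --adapter json
```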


--project <PROJECT>

Either the --project option or the BENCHER_PROJECT environment variable must be set to the slug or UUID of an already existing project. If both are defined, the --project option takes precedence over the BENCHER_PROJECT environment variable.

--token <TOKEN>

Either the --token option or the BENCHER_API_TOKEN environment variable must be set to a valid API token. If both are defined, the --token option takes precedence over the BENCHER_API_TOKEN environment variable.
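For example, the credentials can be supplied either via the environment or as options; the project slug and token below are placeholders:

```shell
# Via environment variables:
export BENCHER_PROJECT=my-project-slug  # placeholder: your project slug or UUID
export BENCHER_API_TOKEN=MY_API_TOKEN   # placeholder: your API token
bencher run "bencher mock"

# Or via options, which take precedence over the environment variables:
bencher run --project my-project-slug --token MY_API_TOKEN "bencher mock"
```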

--branch <BRANCH>

--if-branch <IF_BRANCH>

--else-if-branch <ELSE_IF_BRANCH>



See branch selection for a full overview.

--hash <HASH>

Optional: A 40-character SHA-1 commit hash. If two reports have the same branch and hash, they will be considered to be from the same commit. Therefore, they will have the same branch version number.
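For example, on a machine with git available, the current commit hash can be passed through like this (a sketch; bencher mock stands in for your benchmark command):

```shell
# Pin the report to the current commit; git rev-parse HEAD emits a 40-character SHA-1.
bencher run --hash "$(git rev-parse HEAD)" "bencher mock"
```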

--testbed <TESTBED>

Optional: Either the --testbed option or the BENCHER_TESTBED environment variable may be set to the slug or UUID of an already existing testbed. If both are defined, the --testbed option takes precedence over the BENCHER_TESTBED environment variable. If neither are defined then localhost is used as the default testbed.

--adapter <ADAPTER>

--average <AVERAGE>

--file <FILE>

See benchmark harness adapter for a full overview.

--iter <ITER>

Optional: Number of run iterations. The default is 1.

--fold <FOLD>

Optional: Fold multiple results into a single result.
Requires: --iter to be set.
Possible values:

  • min: Minimum value
  • max: Maximum value
  • mean: Mean of values
  • median: Median of values
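Putting --iter and --fold together, a sketch of a run that reports the median of three iterations (bencher mock stands in for your benchmark command):

```shell
# Run the benchmark command three times and fold each metric to its median.
bencher run --iter 3 --fold median "bencher mock"
```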


--backdate <BACKDATE>

Optional: Backdate the report (seconds since epoch). NOTE: This will not affect the ordering of past reports! This is useful when initially seeding historical data into a project in chronological order.


--allow-failure

Optional: Allow benchmark test failure.


--err

Optional: Error when an alert is generated. See thresholds and alerts for a full overview.


--html

Optional: Output results in HTML format.


Optional: Display the Benchmark Metrics and Boundary Limits.
Requires: --github-actions


--ci-only-thresholds

Optional: Only post results to CI if a Threshold exists for the Metric Kind, Branch, and Testbed. If no Thresholds exist, then nothing will be posted.
Requires: --github-actions


--ci-only-on-alert

Optional: Only start posting results to CI if an Alert is generated. Once an Alert is generated, all follow-up results will also be posted, even if they don't contain any Alerts.
Requires: --github-actions

--ci-public-links

Optional: All links should be to public URLs that do not require a login.
Requires: --github-actions


--ci-id <ID>

Optional: Custom ID for posting results to CI. By default, Bencher will automatically segment out results by the combination of: Project, Branch, Testbed, and Adapter. Setting a custom ID is useful when Bencher is being run multiple times in the same CI workflow for the same Project, Branch, Testbed, and Adapter combination.
Requires: --github-actions


--ci-number <NUMBER>

Optional: Issue number for posting results to CI. Bencher will try its best to detect the CI issue number needed to post results. However, this isn't always available in complex setups, like using workflow_run in GitHub Actions.
Requires: --github-actions


--github-actions <GITHUB_ACTIONS>

Optional: Set the GitHub API authentication token (e.g. --github-actions ${{ secrets.GITHUB_TOKEN }}). When this option is set and bencher run is used in GitHub Actions as part of a pull request, then the results will be added to the pull request as a comment. The most convenient way to do this is the GitHub Actions GITHUB_TOKEN environment variable.

🐰 If you are running inside of a Docker container within GitHub Actions, you will need to pass in the following environment variables and mount the path specified by GITHUB_EVENT_PATH:
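A sketch of what that might look like; the image name is a placeholder, and the exact set of GITHUB_* variables your setup needs may vary:

```shell
# Forward the GitHub Actions environment into the container and
# mount the event payload file at the same path it has on the host.
docker run \
  --env GITHUB_ACTIONS \
  --env GITHUB_EVENT_NAME \
  --env GITHUB_EVENT_PATH \
  --volume "${GITHUB_EVENT_PATH}:${GITHUB_EVENT_PATH}" \
  my-benchmark-image  # placeholder image that invokes bencher run
```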


--shell <SHELL>

Optional: Shell command path. Defaults to /bin/sh on Unix-like environments and cmd on Windows.

--flag <FLAG>

Optional: Shell command flag. Defaults to -c on Unix-like environments and /C on Windows.
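For example, to run the benchmark command under bash instead of the default shell (a sketch; bencher mock stands in for your benchmark command):

```shell
# The command below is executed as: /bin/bash -c "bencher mock"
bencher run --shell /bin/bash "bencher mock"
```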

--host <HOST>

Optional: Backend host URL. Defaults to Bencher Cloud: https://api.bencher.dev

--attempts <ATTEMPTS>

Optional: Max request retry attempts. Defaults to 10.

--retry-after <RETRY_AFTER>

Optional: Initial seconds to wait between attempts (exponential backoff). The default is 1.
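For example, assuming the wait doubles on each retry (as exponential backoff implies), this sketch would wait roughly 2, 4, 8, and 16 seconds between its five attempts:

```shell
# Allow up to 5 request attempts, starting with a 2 second wait between retries.
bencher run --attempts 5 --retry-after 2 "bencher mock"
```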


--dry-run

Optional: Perform a dry run. This will not store any data to the backend. Neither a Report nor a Branch (as detailed in branch selection) will be created.



--help

Optional: Print help.

🐰 Congrats! You have learned the basics of bencher run! 🎉

Keep Going: Branch Selection with bencher run ➑