bencher run CLI Subcommand
bencher run is the most popular CLI subcommand.
It is used to run benchmarks and report the results.
As such, it is one of the most complicated subcommands.
This page will explain the options, flags, and arguments that can be passed to bencher run.
Benchmark Command
The first argument to bencher run is the optional benchmark command.
This is the command that will be executed, invoking your benchmark harness.
It can also be set using the BENCHER_CMD environment variable.
By default, this command is executed in a shell, which can be configured with the --shell and --flag options.
Its output is parsed by a benchmark harness adapter, which can be set using the --adapter option.
However, if the benchmark harness outputs to a file, then the --file option must also be used to specify the output file path.
Alternatively, to track the size of the output file (ie binary size) instead of its contents, use the --file-size option to specify the output file path.
If you would prefer to not have the command executed in a shell, you can use the --exec flag or simply provide additional arguments to your command as additional arguments to bencher run.
Shell Form:
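For illustration, a minimal sketch using bencher mock as a stand-in benchmark command (substitute your own benchmark harness invocation):

```sh
# the whole benchmark command is passed as a single quoted string
bencher run "bencher mock"
```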
Exec Form:
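The same sketch in exec form, passing each part of the command as a separate argument to bencher run:

```sh
# each argument after `bencher run` becomes part of the executed command
bencher run bencher mock
```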
The benchmark command can be run multiple times using the --iter option,
and those results can be folded into a single result using the --fold option.
If any of the iterations fail, then the entire command is considered to have failed unless the --allow-failure flag is set.
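For example, a hedged sketch that runs the benchmark command five times, folds the results to the median, and tolerates failed iterations:

```sh
# run the benchmark command 5 times, keep the median, and allow individual failures
bencher run --iter 5 --fold median --allow-failure "bencher mock"
```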
If the benchmark command is not specified but the --file option is, then bencher run will just read from the output file path instead.
Similarly, if the benchmark command is not specified but the --file-size option is, then bencher run will just read the size of the file at the given file path instead.
If neither the benchmark command, the --file option, nor the --file-size option is specified, then bencher run will read from stdin instead.
This allows you to save the output of another command to a file or pipe it into bencher run.
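As a sketch, assuming a hypothetical my-benchmarks harness that prints results in a format a Bencher adapter understands:

```sh
# save the harness output to a file and have bencher run read it
my-benchmarks > results.txt
bencher run --file results.txt

# or pipe the output directly into bencher run via stdin
my-benchmarks | bencher run
```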
Options
--project <PROJECT>
Either the --project option or the BENCHER_PROJECT environment variable must be set to the slug or UUID of an already existing project.
If both are specified, the --project option takes precedence over the BENCHER_PROJECT environment variable.
--token <TOKEN>
Either the --token option or the BENCHER_API_TOKEN environment variable must be set to a valid API token.
If both are specified, the --token option takes precedence over the BENCHER_API_TOKEN environment variable.
Click here to create an API token
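For example, a sketch using hypothetical values for the project slug and API token:

```sh
# either pass them as options...
bencher run --project my-project --token MY_API_TOKEN "bencher mock"

# ...or export them as environment variables
export BENCHER_PROJECT=my-project
export BENCHER_API_TOKEN=MY_API_TOKEN
bencher run "bencher mock"
```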
--branch <BRANCH>
--start-point <BRANCH>
--start-point-hash <HASH>
--start-point-max-versions <COUNT>
--start-point-clone-thresholds
--start-point-reset
See branch selection for a full overview.
--hash <HASH>
Optional: A 40-character SHA-1 git commit hash.
If two reports have the same branch and hash, they will be considered to be from the same commit.
Therefore, they will have the same branch version number.
If not provided, the Bencher CLI tries to find the current git hash.
It starts by looking for a git repository in the current working directory.
If unsuccessful, it continues to its parent directory and retries all the way up to the root directory.
If a git repository is found, then the current branch’s HEAD git hash is used.
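For example, a sketch that pins the report to the current commit explicitly instead of relying on auto-detection:

```sh
# pass the full 40-character SHA-1 hash of the current commit
bencher run --hash "$(git rev-parse HEAD)" "bencher mock"
```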
--no-hash
Optional: Do not try to find a git commit hash.
This option conflicts with --hash and overrides its default behavior of searching for a git repository.
--testbed <TESTBED>
Optional: Either the --testbed option or the BENCHER_TESTBED environment variable may be set to the name, slug, or UUID for a Testbed.
If the value specified is a name or slug and the Testbed does not already exist, it will be created for you.
However, if the value specified is a UUID, then the Testbed must already exist.
If both are specified, the --testbed option takes precedence over the BENCHER_TESTBED environment variable.
If neither is specified, then localhost is used as the default Testbed.
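For example, a sketch using a hypothetical Testbed name:

```sh
# report results against a Testbed named ubuntu-latest (created if it does not exist)
bencher run --testbed ubuntu-latest "bencher mock"
```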
--threshold-measure <MEASURE>
--threshold-test <TEST>
--threshold-min-sample-size <SAMPLE_SIZE>
--threshold-max-sample-size <SAMPLE_SIZE>
--threshold-window <WINDOW>
--threshold-lower-boundary <BOUNDARY>
--threshold-upper-boundary <BOUNDARY>
--thresholds-reset
--err
See thresholds and alerts for a full overview.
--adapter <ADAPTER>
--average <AVERAGE>
--file <FILE>
--build-time
--file-size <FILE>
See benchmark harness adapter for a full overview.
--iter <COUNT>
Optional: Number of run iterations. The default is 1
.
--fold <AGGREGATE_FUNCTION>
Optional: Fold multiple results into a single result using an aggregate function.
Requires: --iter
to be set.
Possible values:
min
: Minimum valuemax
: Maximum valuemean
: Mean of valuesmedian
: Median of values
--backdate <SECONDS>
Optional: Backdate the report (seconds since epoch). NOTE: This will not affect the ordering of past reports! This is useful when initially seeding historical data into a project in chronological order.
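For example, a sketch that backdates a report to 2024-01-01 00:00:00 UTC:

```sh
# 1704067200 is the Unix epoch timestamp for 2024-01-01 00:00:00 UTC
bencher run --backdate 1704067200 "bencher mock"
```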
--allow-failure
Optional: Allow benchmark test failure.
--format <FORMAT>
Optional: Format for the final Report. The default is human.
Possible values:
human: Human-readable format
json: JSON format
html: HTML format
--quiet
Optional: Quiet mode, only output the final Report.
Use the --format option to change the output format.
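For example, a sketch that captures only the final Report as JSON:

```sh
# print nothing but the final Report, formatted as JSON, and save it to a file
bencher run --quiet --format json "bencher mock" > report.json
```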
--github-actions <GITHUB_TOKEN>
Optional: Set the GitHub API authentication token.
The most convenient way to do this is the GitHub Actions GITHUB_TOKEN environment variable (ie --github-actions ${{ secrets.GITHUB_TOKEN }}).
When this option is set and bencher run is used in GitHub Actions as a part of a pull request, then the results will be added to the pull request as a comment.
This requires the token to have the pull-requests scope with write permissions.
Otherwise, the results will be added to the commit as a GitHub Check.
This requires the token to have the checks scope with write permissions.
In either case, the results will also be added to the job summary.
🐰 If you are running inside of a Docker container within GitHub Actions, you will need to pass in the following environment variables and mount the path specified by GITHUB_EVENT_PATH:
GITHUB_ACTIONS
GITHUB_EVENT_NAME
GITHUB_EVENT_PATH
GITHUB_SHA
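A hedged sketch, assuming a hypothetical my-bencher-image with the Bencher CLI installed and GITHUB_TOKEN exported in the runner environment:

```sh
# forward the GitHub Actions context into the container and mount the event payload
docker run \
  --env GITHUB_ACTIONS \
  --env GITHUB_EVENT_NAME \
  --env GITHUB_EVENT_PATH \
  --env GITHUB_SHA \
  --volume "$GITHUB_EVENT_PATH:$GITHUB_EVENT_PATH" \
  my-bencher-image \
  bencher run --github-actions "$GITHUB_TOKEN" "bencher mock"
```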
--ci-only-thresholds
Optional: Only post results to CI if a Threshold exists for the Branch, Testbed, and Measure.
If no Thresholds exist, then nothing will be posted.
Requires: --github-actions
--ci-only-on-alert
Optional: Only start posting results to CI if an Alert is generated.
If an Alert is generated, then all follow-up results will also be posted, even if they don’t contain any Alerts.
Requires: --github-actions
--ci-id <ID>
Optional: Custom ID for posting results to CI.
By default, Bencher will automatically segment out results by the combination of: Project, Branch, Testbed, and Adapter.
Setting a custom ID is useful when Bencher is being run multiple times in the same CI workflow for the same Project, Branch, Testbed, and Adapter combination.
Requires: --github-actions
--ci-number <NUMBER>
Optional: Issue number for posting results to CI.
Bencher will try its best to detect the CI issue number needed to post results.
However, this isn’t always available in complex setups, like using workflow_run in GitHub Actions.
Requires: --github-actions
--shell <SHELL>
Optional: Shell command path.
Defaults to /bin/sh on Unix-like environments and cmd on Windows.
--flag <FLAG>
Optional: Shell command flag.
Defaults to -c on Unix-like environments and /C on Windows.
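For example, a sketch that swaps in a different shell:

```sh
# run the benchmark command under bash instead of the default /bin/sh
bencher run --shell /bin/bash "bencher mock"
```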
--exec
Optional: Run the command as an executable rather than a shell command.
This is the default if more than one argument is passed to bencher run.
--host <URL>
Optional: Backend host URL. Defaults to Bencher Cloud: https://api.bencher.dev
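For example, a sketch pointing the CLI at a hypothetical self-hosted Bencher instance:

```sh
# send results to a self-hosted API server instead of Bencher Cloud
bencher run --host https://bencher.example.com "bencher mock"
```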
--attempts <COUNT>
Optional: Max request retry attempts.
Defaults to 10 attempts.
--retry-after <SECONDS>
Optional: Initial seconds to wait between attempts (exponential backoff).
Defaults to 1 second.
--dry-run
Optional: Perform a dry run. This will not store any data to the backend. Neither a Report, Branch (as detailed in branch selection), nor Testbed will be created.
--help
Optional: Print help.
🐰 Congrats! You have learned the basics of bencher run! 🎉