How to use Bencher in GitHub Actions


Depending on your use case, you can set up Continuous Benchmarking in GitHub Actions for your:

  • Base Branch
  • Pull Requests
  • Pull Requests from Forks

Make sure you have created an API token and set it as a Repository secret named BENCHER_API_TOKEN before continuing on! Navigate to Your Repo -> Settings -> Secrets and variables -> Actions -> New repository secret. Name the secret BENCHER_API_TOKEN and set the secret value to your API token.
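
If you prefer the command line, you can also create this secret with the GitHub CLI. This is only a minimal sketch, assuming you have the gh CLI installed and authenticated against the target repository; the secret name must be exactly BENCHER_API_TOKEN, since that is what the workflows below reference.

# Hedged sketch: create the repository secret with the GitHub CLI.
# Replace YOUR_BENCHER_API_TOKEN with the API token you created in Bencher.
gh secret set BENCHER_API_TOKEN --body "YOUR_BENCHER_API_TOKEN"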

In GitHub Actions, secrets are not passed to the runner when a workflow is triggered from a forked repository. Therefore, you will need to use a branch from the same repository when adding any of the workflows below to your repository with a pull request. If you try to add Bencher with a pull request from a fork, then the BENCHER_API_TOKEN secret will not be available. ${{ secrets.BENCHER_API_TOKEN }} will be an empty string.

Base Branch

A cornerstone of Statistical Continuous Benchmarking is having a historical baseline for your base branch. This historical baseline can then be used to detect performance regressions in Pull Requests.

.github/workflows/base_benchmarks.yml
on:
  push:
    branches: main
jobs:
  benchmark_base_branch:
    name: Continuous Benchmarking with Bencher
    permissions:
      checks: write
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bencherdev/bencher@main
      - name: Track base branch benchmarks with Bencher
        run: |
          bencher run \
            --project save-walter-white-1234abcd \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch main \
            --testbed ubuntu-latest \
            --threshold-measure latency \
            --threshold-test t_test \
            --threshold-max-sample-size 64 \
            --threshold-upper-boundary 0.99 \
            --thresholds-reset \
            --err \
            --adapter json \
            --github-actions '${{ secrets.GITHUB_TOKEN }}' \
            bencher mock
  1. Create a GitHub Actions workflow file. (ex: .github/workflows/base_benchmarks.yml)
  2. Run on push events to the main branch. See the GitHub Actions on documentation and GitHub Actions push documentation for a full overview. (ex: on: push: branches: main)
  3. Create a GitHub Actions job. (ex: jobs: benchmark_base_branch)
  4. Set the permissions for the GITHUB_TOKEN to write for checks. (ex: permissions: checks: write)
  5. Set the type of machine the job will run on. See the GitHub Actions runs-on documentation for a full overview. (ex: runs-on: ubuntu-latest)
  6. Checkout your base branch source code. (ex: uses: actions/checkout@v4)
  7. Install the Bencher CLI using the GitHub Action. (ex: uses: bencherdev/bencher@main)
  8. Use the bencher run CLI subcommand to run your main branch benchmarks. See the bencher run CLI subcommand for a full overview. (ex: bencher run)
  9. Set the --project option to the Project slug. See the --project docs for more details. (ex: --project save-walter-white-1234abcd)
  10. Set the --token option to the BENCHER_API_TOKEN Repository secret. See the --token docs for more details. (ex: --token '${{ secrets.BENCHER_API_TOKEN }}')
  11. Set the --branch option to the base Branch name. See the --branch docs for a full overview. (ex: --branch main)
  12. Set the --testbed option to the Testbed name. This should likely match the machine selected in runs-on. See the --testbed docs for more details. (ex: --testbed ubuntu-latest)
  13. Set the Threshold for the main Branch, ubuntu-latest Testbed, and latency Measure:
    1. Set the --threshold-measure option to the built-in latency Measure that is generated by bencher mock. See the --threshold-measure docs for more details. (ex: --threshold-measure latency)
    2. Set the --threshold-test option to a Student’s t-test (t_test). See the --threshold-test docs for a full overview. (ex: --threshold-test t_test)
    3. Set the --threshold-max-sample-size option to the maximum sample size of 64. See the --threshold-max-sample-size docs for more details. (ex: --threshold-max-sample-size 64)
    4. Set the --threshold-upper-boundary option to the Upper Boundary of 0.99. See the --threshold-upper-boundary docs for more details. (ex: --threshold-upper-boundary 0.99)
    5. Set the --thresholds-reset flag so that only the specified Threshold is active. See the --thresholds-reset docs for a full overview. (ex: --thresholds-reset)
  14. Set the --err flag to fail the command if an Alert is generated. See the --err docs for a full overview. (ex: --err)
  15. Set the --adapter option to Bencher Metric Format JSON (json), which is what bencher mock generates (a sample of this format is shown after this list). See benchmark harness adapters for a full overview. (ex: --adapter json)
  16. Set the --github-actions option to the GitHub API authentication token to post results as a GitHub Checks comment using the GitHub Actions GITHUB_TOKEN environment variable. See the --github-actions docs for more details. (ex: --github-actions '${{ secrets.GITHUB_TOKEN }}')
  17. Specify the benchmark command arguments. See benchmark command for a full overview. (ex: bencher mock)
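
For reference, bencher mock emits its results in Bencher Metric Format (BMF) JSON, which is what the --adapter json option expects. A minimal sample with a single benchmark and the built-in latency Measure might look like the following; the benchmark name and value here are placeholders produced by bencher mock, and your own benchmark harness or adapter will produce its own names and values.

{
  "bencher::mock_0": {
    "latency": {
      "value": 3.7
    }
  }
}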

Pull Requests

In order to catch performance regressions in Pull Requests, you will need to run your benchmarks on PRs. If you only expect to have PRs from branches within the same repository, then you can simply create another workflow that runs on pull_request events from the same repository.

⚠️ This solution only works if all PRs are from the same repository! See Pull Requests from Forks below.

.github/workflows/pr_benchmarks.yml
on:
  pull_request:
    types: [opened, reopened, edited, synchronize]
jobs:
  benchmark_pr_branch:
    name: Continuous Benchmarking PRs with Bencher
    # DO NOT REMOVE: For handling Fork PRs see Pull Requests from Forks
    if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository
    permissions:
      pull-requests: write
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bencherdev/bencher@main
      - name: Track PR Benchmarks with Bencher
        run: |
          bencher run \
            --project save-walter-white-1234abcd \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch "$GITHUB_HEAD_REF" \
            --start-point "$GITHUB_BASE_REF" \
            --start-point-hash '${{ github.event.pull_request.base.sha }}' \
            --start-point-clone-thresholds \
            --start-point-reset \
            --testbed ubuntu-latest \
            --err \
            --adapter json \
            --github-actions '${{ secrets.GITHUB_TOKEN }}' \
            bencher mock
  1. Create a GitHub Actions workflow file. (ex: .github/workflows/pr_benchmarks.yml)

  2. Run on pull_request events:

    • opened - A pull request was created.
    • reopened - A previously closed pull request was reopened.
    • edited - The title or body of a pull request was edited, or the base branch of a pull request was changed.
    • synchronize - A pull request’s head branch was updated. For example, the head branch was updated from the base branch or new commits were pushed to the head branch.

    See the GitHub Actions on documentation and GitHub Actions pull_request documentation for a full overview. (ex: on: pull_request: types: [opened, reopened, edited, synchronize])

  3. Create a GitHub Actions job. (ex: jobs: benchmark_pr_branch)

  4. Run on pull_request events if and only if the pull request is from the same repository. ⚠️ DO NOT REMOVE THIS LINE! For handling Fork PRs see Pull Requests from Forks below. (ex: if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository)

  5. Set the permissions for the GITHUB_TOKEN to write for pull-requests. Depending on your GitHub settings, this may not be required. But for all organizations and personal repos created after 02 Feb 2023, this is the default behavior. See the GitHub documentation for a full overview. (ex: permissions: pull-requests: write)

  6. Set the type of machine the job will run on. See the GitHub Actions runs-on documentation for a full overview. (ex: runs-on: ubuntu-latest)

  7. Checkout the PR branch source code. (ex: uses: actions/checkout@v4)

  8. Install the Bencher CLI using the GitHub Action. (ex: uses: bencherdev/bencher@main)

  9. Use the bencher run CLI subcommand to run your pull request branch benchmarks. See the bencher run CLI subcommand for a full overview. (ex: bencher run)

  10. Set the --project option to the Project slug. See the --project docs for more details. (ex: --project save-walter-white-1234abcd)

  11. Set the --token option to the BENCHER_API_TOKEN Repository secret. See the --token docs for more details. (ex: --token '${{ secrets.BENCHER_API_TOKEN }}')

  12. Set the --branch option to the PR branch name using the GitHub Actions GITHUB_HEAD_REF default environment variable. See the --branch docs for a full overview. (ex: --branch "$GITHUB_HEAD_REF")

  13. Set the Start Point for the PR Branch:

    1. Set the --start-point option to the PR Branch start point using the GitHub Actions GITHUB_BASE_REF default environment variable. See the --start-point docs for a full overview. (ex: --start-point "$GITHUB_BASE_REF")
    2. Set the --start-point-hash option to the PR Branch start point git hash using the GitHub Actions pull_request event. See the --start-point-hash docs for a full overview. (ex: --start-point-hash '${{ github.event.pull_request.base.sha }}')
    3. Set the --start-point-clone-thresholds flag to clone the Thresholds from the start point. See the --start-point-clone-thresholds docs for a full overview. (ex: --start-point-clone-thresholds)
    4. Set the --start-point-reset flag to always reset the PR Branch to the start point. This will prevent benchmark data drift. See the --start-point-reset docs for a full overview. (ex: --start-point-reset)
  14. Set the --testbed option to the Testbed name. This should likely match the machine selected in runs-on. See the --testbed docs for more details. (ex: --testbed ubuntu-latest)

  15. Set the --err flag to fail the command if an Alert is generated. See the --err docs for a full overview. (ex: --err)

  16. Set the --adapter option to Bencher Metric Format JSON (json) that is generated by bencher mock. See benchmark harness adapters for a full overview. (ex: --adapter json)

  17. Set the --github-actions option to the GitHub API authentication token to post results as a comment on the Pull Request using the GitHub Actions GITHUB_TOKEN environment variable. See the --github-actions docs for more details. (ex: --github-actions '${{ secrets.GITHUB_TOKEN }}')

  18. Specify the benchmark command arguments. See benchmark command for a full overview. (ex: bencher mock)
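
In both of these workflows, bencher mock is only a stand-in benchmark. When adapting this to a real project, replace the trailing benchmark command and the --adapter option with the ones for your benchmark harness. As a hedged sketch, assuming a hypothetical Rust project benchmarked with cargo bench and Bencher's rust_bench adapter (check the benchmark harness adapters docs for the adapter that matches your harness), the PR invocation might instead look like this:

# Hedged sketch: same options as above, but running a real benchmark harness
# (cargo bench) with the matching adapter instead of bencher mock.
bencher run \
  --project save-walter-white-1234abcd \
  --token '${{ secrets.BENCHER_API_TOKEN }}' \
  --branch "$GITHUB_HEAD_REF" \
  --start-point "$GITHUB_BASE_REF" \
  --start-point-hash '${{ github.event.pull_request.base.sha }}' \
  --start-point-clone-thresholds \
  --start-point-reset \
  --testbed ubuntu-latest \
  --err \
  --adapter rust_bench \
  --github-actions '${{ secrets.GITHUB_TOKEN }}' \
  "cargo bench"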

To clean up the PR branch after its PR is closed, you can create a separate workflow to run on pull_request events with the closed type. This workflow will archive the PR branch using the bencher archive command.

.github/workflows/pr_benchmarks_closed.yml
on:
  pull_request:
    types: [closed]
jobs:
  archive_pr_branch:
    name: Archive closed PR branch with Bencher
    # DO NOT REMOVE: For handling Fork PRs see Pull Requests from Forks
    if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bencherdev/bencher@main
      - name: Archive closed PR branch with Bencher
        run: |
          bencher archive \
            --project save-walter-white-1234abcd \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch "$GITHUB_HEAD_REF"
  1. Create a GitHub Actions workflow file. (ex: .github/workflows/pr_benchmarks_closed.yml)

  2. Run on pull_request events:

    • closed - A pull request was closed.

    See the GitHub Actions on documentation and GitHub Actions pull_request documentation for a full overview. (ex: on: pull_request: types: [closed])

  3. Create a GitHub Actions job. (ex: jobs: archive_pr_branch)

  4. Run on pull_request events if and only if the pull request is from the same repository. ⚠️ DO NOT REMOVE THIS LINE! For handling Fork PRs see Pull Requests from Forks below. (ex: if: github.event_name == 'pull_request' && github.event.pull_request.head.repo.full_name == github.repository)

  5. Set the type of machine the job will run on. See the GitHub Actions runs-on documentation for a full overview. (ex: runs-on: ubuntu-latest)

  6. Checkout the PR branch source code. (ex: uses: actions/checkout@v4)

  7. Install the Bencher CLI using the GitHub Action. (ex: uses: bencherdev/bencher@main)

  8. Use the bencher archive CLI subcommand to archive the PR branch. (ex: bencher archive)

  9. Set the --project option to the Project slug. See the --project docs for more details. (ex: --project save-walter-white-1234abcd)

  10. Set the --token option to the BENCHER_API_TOKEN Repository secret. See the --token docs for more details. (ex: --token '${{ secrets.BENCHER_API_TOKEN }}')

  11. Set the --branch option to the PR branch name using the GitHub Actions GITHUB_HEAD_REF default environment variable. (ex: --branch "$GITHUB_HEAD_REF")


Pull Requests from Forks

If you plan to accept pull requests from forks, as is often the case in public open source projects, then you will need to handle things a little differently. For security reasons, secrets such as your BENCHER_API_TOKEN and the GITHUB_TOKEN are not available in GitHub Actions for fork PRs. That is, if an external contributor opens a PR from a fork, the examples above will not work. See this GitHub Security Lab write-up and this blog post on preventing pwn requests for a full overview.

This is the safe and suggested way to add Continuous Benchmarking to fork pull requests. It requires two separate workflows. The first workflow runs and caches the benchmark results in the pull_request context. No secrets such as your BENCHER_API_TOKEN and the GITHUB_TOKEN are available there. Then a second workflow downloads the cached benchmark results in the workflow_run context and uploads them to Bencher. This works because workflow_run runs in the context of the repository’s default branch, where secrets such as your BENCHER_API_TOKEN and the GITHUB_TOKEN are available. The pull request number, head branch, and base branch used in the initial pull_request workflow must also be explicitly passed into the workflow_run workflow since they are not available there. These workflows will only run if they exist on the default branch. See using data from the triggering workflow for a full overview.

.github/workflows/fork_pr_benchmarks_run.yml
name: Run Benchmarks
on:
  pull_request:
    types: [opened, reopened, edited, synchronize]
jobs:
  benchmark_fork_pr_branch:
    name: Run Fork PR Benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Mock Benchmarking
        run: |
          /bin/echo '{ "bencher::mock_0": { "latency": { "value": 1.0 } } }' > benchmark_results.json
      - name: Upload Benchmark Results
        uses: actions/upload-artifact@v4
        with:
          name: benchmark_results.json
          path: ./benchmark_results.json
      - name: Upload GitHub Pull Request Event
        uses: actions/upload-artifact@v4
        with:
          name: event.json
          path: ${{ github.event_path }}
  1. Create a first GitHub Actions workflow file. (ex: .github/workflows/fork_pr_benchmarks_run.yml)

  2. Name this workflow so it can be referenced by the second workflow. (ex: name: Run Benchmarks)

  3. Run on pull_request events:

    • opened - A pull request was created.
    • reopened - A previously closed pull request was reopened.
    • edited - The title or body of a pull request was edited, or the base branch of a pull request was changed.
    • synchronize - A pull request’s head branch was updated. For example, the head branch was updated from the base branch or new commits were pushed to the head branch.

    See the GitHub Actions on documentation and GitHub Actions pull_request documentation for a full overview. (ex: on: pull_request: types: [opened, reopened, edited, synchronize])

  4. Create a GitHub Actions job. (ex: jobs: benchmark_fork_pr_branch)

  5. Set the type of machine the job will run on. See the GitHub Actions runs-on documentation for a full overview. (ex: runs-on: ubuntu-latest)

  6. Checkout the fork PR branch source code. (ex: uses: actions/checkout@v4)

  7. Run your benchmarks and save the results to a file. (ex: /bin/echo '{ ... }' > benchmark_results.json)

  8. Upload the benchmark results file as an artifact. (ex: uses: actions/upload-artifact@v4)

  9. Upload the pull_request event object as an artifact. (ex: uses: actions/upload-artifact@v4)

.github/workflows/fork_pr_benchmarks_track.yml
name: Track Benchmarks with Bencher
on:
  workflow_run:
    workflows: [Run Benchmarks]
    types: [completed]
jobs:
  track_fork_pr_branch:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    env:
      BENCHMARK_RESULTS: benchmark_results.json
      PR_EVENT: event.json
    steps:
      - name: Download Benchmark Results
        uses: dawidd6/action-download-artifact@v6
        with:
          name: ${{ env.BENCHMARK_RESULTS }}
          run_id: ${{ github.event.workflow_run.id }}
      - name: Download PR Event
        uses: dawidd6/action-download-artifact@v6
        with:
          name: ${{ env.PR_EVENT }}
          run_id: ${{ github.event.workflow_run.id }}
      - name: Export PR Event Data
        uses: actions/github-script@v6
        with:
          script: |
            let fs = require('fs');
            let prEvent = JSON.parse(fs.readFileSync(process.env.PR_EVENT, {encoding: 'utf8'}));
            core.exportVariable("PR_HEAD", prEvent.pull_request.head.ref);
            core.exportVariable("PR_BASE", prEvent.pull_request.base.ref);
            core.exportVariable("PR_BASE_SHA", prEvent.pull_request.base.sha);
            core.exportVariable("PR_NUMBER", prEvent.number);
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
            --project save-walter-white-1234abcd \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch "$PR_HEAD" \
            --start-point "$PR_BASE" \
            --start-point-hash "$PR_BASE_SHA" \
            --start-point-clone-thresholds \
            --start-point-reset \
            --testbed ubuntu-latest \
            --err \
            --adapter json \
            --github-actions '${{ secrets.GITHUB_TOKEN }}' \
            --ci-number "$PR_NUMBER" \
            --file "$BENCHMARK_RESULTS"
  1. Create a second GitHub Actions workflow file. (ex: .github/workflows/fork_pr_benchmarks_track.yml)
  2. Name this second workflow. (ex: name: Track Benchmarks with Bencher)
  3. Chain the two workflows with the workflow_run event. (ex: on: workflow_run: ...)
  4. Create a GitHub Actions job. (ex: jobs: track_fork_pr_branch)
  5. Only run this job if the previous workflow’s conclusion was a success using the GitHub Actions workflow_run event. (ex: if: github.event.workflow_run.conclusion == 'success')
  6. Set the type of machine the job will run on. See the GitHub Actions runs-on documentation for a full overview. (ex: runs-on: ubuntu-latest)
  7. Set the benchmark results and pull_request event object file names as environment variables. (ex: env: ...)
  8. Download the cached benchmark results and pull_request event using the action-download-artifact GitHub Action. (ex: uses: dawidd6/action-download-artifact@v6)
  9. Export the necessary data from the pull_request event as intermediate environment variables. (ex: core.exportVariable(...))
  10. Install the Bencher CLI using the GitHub Action. (ex: uses: bencherdev/bencher@main)
  11. Use the bencher run CLI subcommand to track your fork PR branch benchmarks. See the bencher run CLI subcommand for a full overview. (ex: bencher run)
  12. Set the --project option to the Project slug. See the --project docs for more details. (ex: --project save-walter-white-1234abcd)
  13. Set the --token option to the BENCHER_API_TOKEN Repository secret. See the --token docs for more details. (ex: --token '${{ secrets.BENCHER_API_TOKEN }}')
  14. Set the --branch option to the fork PR branch name using an intermediate environment variable. See the --branch docs for a full overview. (ex: --branch "$PR_HEAD")
  15. Set the Start Point for the fork PR Branch:
    1. Set the --start-point option to the fork PR Branch start point using an intermediate environment variable. See the --start-point docs for a full overview. (ex: --start-point "$PR_BASE")
    2. Set the --start-point-hash option to the fork PR Branch start point git hash using an intermediate environment variable. See the --start-point-hash docs for a full overview. (ex: --start-point-hash "$PR_BASE_SHA")
    3. Set the --start-point-clone-thresholds flag to clone the Thresholds from the start point. See the --start-point-clone-thresholds docs for a full overview. (ex: --start-point-clone-thresholds)
    4. Set the --start-point-reset flag to always reset the fork PR Branch to the start point. This will prevent benchmark data drift. See the --start-point-reset docs for a full overview. (ex: --start-point-reset)
  16. Set the --testbed option to the Testbed name. This should likely match the machine selected in runs-on. See the --testbed docs for more details. (ex: --testbed ubuntu-latest)
  17. Set the --err flag to fail the command if an Alert is generated. See the --err docs for a full overview. (ex: --err)
  18. Set the --adapter option to Bencher Metric Format JSON (json) that is generated by bencher mock. See benchmark harness adapters for a full overview. (ex: --adapter json)
  19. Set the --github-actions option to the GitHub API authentication token to post results as a comment on the Pull Request using the GitHub Actions GITHUB_TOKEN environment variable. See the --github-actions docs for more details. (ex: --github-actions '${{ secrets.GITHUB_TOKEN }}')
  20. Set the --ci-number option to the pull request number using an intermediate environment variable. See the --ci-number docs for more details. (ex: --ci-number "$PR_NUMBER")
  21. Set the --file option to the benchmark results file path. See benchmark command for a full overview. (ex: --file "$BENCHMARK_RESULTS")

To clean up the fork PR branch after its PR is closed, you can create a separate workflow to run on pull_request_target events with the closed type. This workflow will archive the fork PR branch using the bencher archive command.

.github/workflows/fork_pr_benchmarks_closed.yml
on:
  pull_request_target:
    types: [closed]
jobs:
  archive_fork_pr_branch:
    name: Archive closed fork PR branch with Bencher
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bencherdev/bencher@main
      - name: Archive closed fork PR branch with Bencher
        run: |
          bencher archive \
            --project save-walter-white-1234abcd \
            --token '${{ secrets.BENCHER_API_TOKEN }}' \
            --branch "$GITHUB_HEAD_REF"
  1. Create a GitHub Actions workflow file. (ex: .github/workflows/fork_pr_benchmarks_closed.yml)

  2. Run on pull_request_target events:

    • closed - A pull request was closed.

    See the GitHub Actions on documentation and GitHub Actions pull_request_target documentation for a full overview. (ex: on: pull_request_target: types: [closed])

  3. Create a GitHub Actions job. (ex: jobs: archive_fork_pr_branch)

  4. Set the type of machine the job will run on. See the GitHub Actions runs-on documentation for a full overview. (ex: runs-on: ubuntu-latest)

  5. Checkout the PR branch source code. (ex: uses: actions/checkout@v4)

  6. Install the Bencher CLI using the GitHub Action. (ex: uses: bencherdev/bencher@main)

  7. Use the bencher archive CLI subcommand to archive the PR branch. (ex: bencher archive)

  8. Set the --project option to the Project slug. See the --project docs for more details. (ex: --project save-walter-white-1234abcd)

  9. Set the --token option to the BENCHER_API_TOKEN Repository secret. See the --token docs for more details. (ex: --token '${{ secrets.BENCHER_API_TOKEN }}')

  10. Set the --branch option to the PR branch name using the GitHub Actions GITHUB_HEAD_REF default environment variable. (ex: --branch "$GITHUB_HEAD_REF")



🐰 Congrats! You have learned how to use Bencher in GitHub Actions! 🎉


Keep Going: Benchmarking Overview ➡



Published: Sat, August 12, 2023 at 4:07:00 PM UTC | Last Updated: Mon, November 4, 2024 at 7:40:00 AM UTC