How to use Bencher in GitHub Actions


on:
  push:
    branches: main

jobs:
  benchmark_with_bencher:
    name: Continuous Benchmarking with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: save-walter-white
      BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
      BENCHER_TESTBED: ubuntu-latest
      BENCHER_ADAPTER: json
    steps:
      - uses: actions/checkout@v3
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
          --branch main \
          --err \
          "bencher mock"
  1. Create a GitHub Actions workflow file. (ex: .github/workflows/benchmark.yml)
  2. Run on push events to the main branch. See the GitHub Actions on documentation for a full overview. Also see Pull Requests below.
  3. Create a GitHub Actions job. (ex: benchmark_with_bencher)
  4. The Project must already exist. Set the --project flag or the BENCHER_PROJECT environment variable to the Project slug or UUID (ex: BENCHER_PROJECT: save-walter-white).
  5. The API token must already exist. Add BENCHER_API_TOKEN as a Repository secret. (ex: Repo -> Settings -> Secrets and variables -> Actions -> New repository secret) Set the --token flag or the BENCHER_API_TOKEN environment variable to the API token. (ex: BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }})
  6. Optional: Set the --testbed flag or the BENCHER_TESTBED environment variable to the Testbed slug or UUID. (ex: BENCHER_TESTBED: ubuntu-latest) The Testbed must already exist. If this is not set, then the localhost Testbed will be used.
  7. Optional: Set the --adapter flag or the BENCHER_ADAPTER environment variable to the desired adapter name. (ex: BENCHER_ADAPTER: json) If this is not set, then the magic Adapter will be used. See benchmark harness adapters for a full overview.
  8. Checkout your source code. (ex: uses: actions/checkout@v3)
  9. Install the Bencher CLI using the GitHub Action. (ex: uses: bencherdev/bencher@main)
  10. Track your benchmarks with the bencher run CLI subcommand (a local sketch of the same invocation follows this list):
    1. Optional: Set the --branch flag or the BENCHER_BRANCH environment variable to the Branch slug or UUID. (ex: --branch main) The Branch must already exist. If this is not set, then the main Branch will be used.
    2. Set the command to fail if an Alert is generated. (ex: --err) In order for an Alert to be generated, a Threshold must already exist.
    3. Run your benchmarks and generate a Report from the results. (ex: "bencher mock")
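
The same invocation can be exercised locally before committing the workflow, which makes it easier to confirm that the Project, API token, and Branch are set up correctly. A minimal sketch, assuming the bencher CLI is installed locally and YOUR_API_TOKEN is a placeholder for a real API token:

export BENCHER_PROJECT=save-walter-white
export BENCHER_API_TOKEN=YOUR_API_TOKEN
# BENCHER_TESTBED and BENCHER_ADAPTER are optional here; they fall back to the
# localhost Testbed and the magic Adapter respectively.

bencher run \
--branch main \
--err \
"bencher mock"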

Pull Requests

In order to catch performance regressions in Pull Requests, you will need to run your benchmarks on PRs. If you only expect to have PRs from branches within the same repository, then you can simply modify the example above to also run on pull_request events.

⚠️ This solution only works if all PRs are from the same repository! See Pull Requests from Forks below.

on:
  push:
    branches: main
  pull_request:

jobs:
  benchmark_with_bencher:
    name: Continuous Benchmarking with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: save-walter-white
      BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
      BENCHER_TESTBED: ubuntu-latest
      BENCHER_ADAPTER: json
    steps:
      - uses: actions/checkout@v3
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
          --if-branch "$GITHUB_REF_NAME" \
          --else-if-branch "$GITHUB_BASE_REF" \
          --else-if-branch main \
          --github-actions ${{ secrets.GITHUB_TOKEN }} \
          --err \
          "bencher mock"
  1. Run on push events to the main branch and on pull_request events. It is important to limit the runs on push to only the selected branches (ex: main) to prevent pushes to PR branches from running twice!
  2. Instead of always using the main branch, use the GitHub Actions default environment variables to:
    1. Use the current branch data if it already exists. (ex: --if-branch "$GITHUB_REF_NAME")
    2. Create a clone of the PR target branch data and thresholds if the target branch already exists. (ex: --else-if-branch "$GITHUB_BASE_REF")
    3. Otherwise, create a clone of the main branch data and thresholds. (ex: --else-if-branch main)
    4. There are several options for setting the project branch. See branch selection for a full overview. A sketch of how these fallbacks resolve follows this list.
  3. Set the GitHub API authentication token. (ex: --github-actions ${{ secrets.GITHUB_TOKEN }}) When this option is set as part of a pull request run, the results will be added to the pull request as a comment. This uses the GitHub Actions GITHUB_TOKEN environment variable.
  4. See the bencher run documentation for a full overview of all the ways to configure the pull request comment with the --ci-* flags.
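
To make the fallback chain concrete, here is a rough sketch of how these GitHub default environment variables resolve for the two event types; the branch names and PR number are illustrative:

# push to main:
#   GITHUB_REF_NAME="main"      -> --if-branch uses the existing main Branch data
#   GITHUB_BASE_REF=""          -> empty on push events, so this fallback is skipped
#
# pull_request from feature-x into main (PR #42):
#   GITHUB_REF_NAME="42/merge"  -> no such Branch exists yet, so --if-branch does not match
#   GITHUB_BASE_REF="main"      -> --else-if-branch clones the main Branch data and thresholds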

Pull Requests from Forks

If you plan to accept pull requests from forks, as is often the case in public open source projects, then you will need to handle things a little differently. For security reasons, secrets such as your BENCHER_API_TOKEN and the GITHUB_TOKEN are not available in GitHub Actions for fork PRs. That is, if an external contributor opens a PR from a fork, the above example will not work. There are three options for fork PRs:

Benchmark Fork PR from Target Branch

on:
  push:
    branches: main
  pull_request_target:

jobs:
  benchmark_main_with_bencher:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    name: Benchmark main with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: save-walter-white
      BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
      BENCHER_TESTBED: ubuntu-latest
      BENCHER_ADAPTER: json
    steps:
      - uses: actions/checkout@v3
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
          --branch main \
          --err \
          "bencher mock"

  benchmark_pr_with_bencher:
    if: github.event_name == 'pull_request_target'
    name: Benchmark PR with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: save-walter-white
      BENCHER_ADAPTER: json
      BENCHER_TESTBED: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          persist-credentials: false
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
          --if-branch "${{ github.event.pull_request.head.ref }}" \
          --else-if-branch "${{ github.event.pull_request.base.ref }}" \
          --else-if-branch main \
          --github-actions "${{ secrets.GITHUB_TOKEN }}" \
          --token "${{ secrets.BENCHER_API_TOKEN }}" \
          --err \
          "bencher mock"
  1. Run on push events to the main branch and on pull_request_target events.
  2. Create a job that only runs for push events to the main branch. Other than the if condition, this job is nearly identical to the original example above.
  3. Create a job that only runs for pull_request_target events.
    1. Checkout the pull request branch.
    2. Pass in all secrets directly. Use --token ${{ secrets.BENCHER_API_TOKEN }} instead of the BENCHER_API_TOKEN environment variable.
    3. Run and track your pull request benchmarks with bencher run.

This setup works because pull_request_target runs in the context of the pull request’s target branch, where secrets such as your BENCHER_API_TOKEN and the GITHUB_TOKEN are available. Therefore, this workflow will only run if it exists on the target branch. Avoid setting any secrets as environment variables, such as BENCHER_API_TOKEN. Instead, explicitly pass the API token to bencher run. (ex: --token ${{ secrets.BENCHER_API_TOKEN }}) See this GitHub Security Lab write-up and this blog post on preventing pwn requests for a full overview.
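
As a rough summary of why pull_request_target is used here, following the behavior described above:

# pull_request (from a fork):
#   the job has no access to repository secrets, so BENCHER_API_TOKEN and the
#   GITHUB_TOKEN used for PR comments are unavailable
# pull_request_target (from a fork):
#   the workflow definition and secrets come from the target branch, so secrets
#   are available; pass the API token only where it is needed:
#   --token "${{ secrets.BENCHER_API_TOKEN }}"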

Benchmark Fork PR from Target Branch with Required Reviewers

on:
  push:
    branches: main
  pull_request_target:

jobs:
  benchmark_main_with_bencher:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    name: Benchmark main with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: save-walter-white
      BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
      BENCHER_TESTBED: ubuntu-latest
      BENCHER_ADAPTER: json
    steps:
      - uses: actions/checkout@v3
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
          --branch main \
          --err \
          "bencher mock"

  benchmark_pr_requires_review:
    if: github.event_name == 'pull_request_target'
    environment:
      ${{ (github.event.pull_request.head.repo.full_name == github.repository && 'internal') || 'external' }}
    runs-on: ubuntu-latest
    steps:
      - run: true

  benchmark_pr_with_bencher:
    needs: benchmark_pr_requires_review
    name: Benchmark PR with Bencher
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: save-walter-white
      BENCHER_ADAPTER: json
      BENCHER_TESTBED: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          repository: ${{ github.event.pull_request.head.repo.full_name }}
          persist-credentials: false
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
          --if-branch "${{ github.event.pull_request.head.ref }}" \
          --else-if-branch "${{ github.event.pull_request.base.ref }}" \
          --else-if-branch main \
          --github-actions "${{ secrets.GITHUB_TOKEN }}" \
          --token "${{ secrets.BENCHER_API_TOKEN }}" \
          --err \
          "bencher mock"

This setup is exactly the same as Benchmark Fork PR from Target Branch with the additional requirement of approval from a Required Reviewer before each fork pull request run. Pull requests from the same repository do not require approval. In order to set this up, you need to create two GitHub Actions Environments (ex: Repo -> Settings -> Environments -> New environment). The internal environment should have no Deployment protection rules. However, the external environment should have Required reviewers set to those trusted to review fork PRs before benchmarking.
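
A rough sketch of how the environment expression above resolves, with illustrative repository and fork names:

# Same-repo PR:  head.repo.full_name == "my-org/my-repo" == github.repository
#                -> 'internal' environment: no protection rules, the job runs immediately
# Fork PR:       head.repo.full_name == "contributor/my-repo" != github.repository
#                -> 'external' environment: waits for approval from a Required reviewer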

Benchmark Fork PR and Upload from Default Branch

name: Run and Cache Benchmarks

on: pull_request

jobs:
  benchmark:
    name: Run Benchmarks
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Mock Benchmark
        run: echo '{"bencher::mock_0": { "latency": { "value": 1.0 }}}' &> benchmark_results.txt
      - uses: actions/upload-artifact@v3
        with:
          name: benchmark_results.txt
          path: ./benchmark_results.txt
      - uses: actions/upload-artifact@v3
        with:
          name: pr_event.json
          path: ${{ github.event_path }}

name: Track Benchmarks

on:
  workflow_run:
    workflows: [Run and Cache Benchmarks]
    types:
      - completed

jobs:
  track_with_bencher:
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    env:
      BENCHER_PROJECT: save-walter-white
      BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
      BENCHER_ADAPTER: json
      BENCHER_TESTBED: ubuntu-latest
      BENCHMARK_RESULTS: benchmark_results.txt
      PR_EVENT: pr_event.json
    steps:
      - name: Download Benchmark Results
        uses: actions/github-script@v6
        with:
          script: |
            async function downloadArtifact(artifactName) {
              let allArtifacts = await github.rest.actions.listWorkflowRunArtifacts({
                owner: context.repo.owner,
                repo: context.repo.repo,
                run_id: context.payload.workflow_run.id,
              });
              let matchArtifact = allArtifacts.data.artifacts.filter((artifact) => {
                return artifact.name == artifactName
              })[0];
              if (!matchArtifact) {
                core.setFailed(`Failed to find artifact: ${artifactName}`);
                return;
              }
              let download = await github.rest.actions.downloadArtifact({
                owner: context.repo.owner,
                repo: context.repo.repo,
                artifact_id: matchArtifact.id,
                archive_format: 'zip',
              });
              let fs = require('fs');
              fs.writeFileSync(`${process.env.GITHUB_WORKSPACE}/${artifactName}.zip`, Buffer.from(download.data));
            }
            await downloadArtifact(process.env.BENCHMARK_RESULTS);
            await downloadArtifact(process.env.PR_EVENT);
      - name: Unzip Benchmark Results
        run: |
          unzip $BENCHMARK_RESULTS.zip
          unzip $PR_EVENT.zip
      - name: Export PR Context
        uses: actions/github-script@v6
        with:
          script: |
            let fs = require('fs');
            let prEvent = JSON.parse(fs.readFileSync(process.env.PR_EVENT, {encoding: 'utf8'}));
            fs.appendFileSync(process.env.GITHUB_ENV, `PR_NUMBER=${prEvent.number}\n`);
            fs.appendFileSync(process.env.GITHUB_ENV, `PR_HEAD=${prEvent.pull_request.head.ref}\n`);
            fs.appendFileSync(process.env.GITHUB_ENV, `PR_BASE=${prEvent.pull_request.base.ref}\n`);
      - uses: bencherdev/bencher@main
      - name: Track Benchmarks with Bencher
        run: |
          bencher run \
          --if-branch "${{ env.PR_HEAD }}" \
          --else-if-branch "${{ env.PR_BASE }}" \
          --else-if-branch main \
          --github-actions "${{ secrets.GITHUB_TOKEN }}" \
          --ci-number "${{ env.PR_NUMBER }}" \
          --err \
          --file $BENCHMARK_RESULTS
  1. Create a Run and Cache Benchmarks workflow file.
  2. Run your benchmarks on pull_request events.
  3. Save the benchmark results to a file and upload that file as an artifact.
  4. Upload the pull_request event as an artifact.
  5. Create a second workflow file, Track Benchmarks.
  6. Chain Track Benchmarks to Run and Cache Benchmarks with the workflow_run event.
  7. Extract the necessary data from the cached pull_request event (an illustrative sketch of the exported values follows this list).
  8. Track the cached benchmark results with bencher run.
  9. Create a third workflow file and use the initial example above to run on push events to the main branch.
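
For reference, this is an illustrative sketch of what the Export PR Context step appends to the GITHUB_ENV file, assuming a hypothetical PR #42 from feature-x into main:

PR_NUMBER=42
PR_HEAD=feature-x
PR_BASE=main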

This setup works because workflow_run runs in the context of the repository’s default branch, where secrets such as your BENCHER_API_TOKEN and the GITHUB_TOKEN are available. Therefore, these workflows will only run if they exist on the default branch. See using data from the triggering workflow for a full overview. The pull request number, head branch, and base branch used in the initial workflow must be explicitly passed in since they are not available within workflow_run.
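
Putting it together, the chain for a fork PR might look like this, using the workflow, artifact, and variable names from the examples above:

# fork PR opened or updated
#   -> Run and Cache Benchmarks (pull_request: no secrets, benchmarks run here)
#        uploads benchmark_results.txt and pr_event.json as artifacts
#   -> Track Benchmarks (workflow_run: runs from the default branch, secrets available)
#        downloads both artifacts, exports PR_NUMBER / PR_HEAD / PR_BASE,
#        runs bencher run --file benchmark_results.txt and comments on the PR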



🐰 Congrats! You have learned how to use Bencher in GitHub Actions! 🎉


Keep Going: Benchmarking Overview ➡