Benchmark Harness Adapters


Adapters convert benchmark harness output into standardized JSON, the Bencher Metric Format (BMF). The adapters run on the API server when a new report is received. See the benchmarking overview for a more in-depth explanation. An adapter can be specified in the bencher run CLI subcommand with the --adapter option. If no adapter is specified, the magic adapter is used by default.
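
For example, to explicitly select the JSON adapter rather than relying on magic, you could run something like this (using the bencher mock subcommand, covered below, to generate results):

bencher run --adapter json "bencher mock"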

It is best to use the most specific adapter for your use case, since it provides both the most accurate and the most performant parsing. For example, if you are parsing Rust libtest bench output, you should use the rust_bench adapter, and not the magic or rust adapter. See our Bencher perf page for a good comparison.

🪄 Magic (default)

The Magic Adapter (magic) is a superset of all other adapters. For that reason, it is the default adapter for bencher run, but it is best used for exploration only. In CI, you should use the most specific adapter for your use case.

{…} JSON

The JSON Adapter (json) expects BMF JSON. It is perfect for integrating custom benchmark harnesses with Bencher.

Example of BMF:

{
    "benchmark_name": {
        "latency": {
            "value": 88.0,
            "lower_value": 87.42,
            "upper_value": 88.88
        }
    }
}

In this example, the key benchmark_name would be the name of a benchmark. Benchmark names can be any non-empty string up to 1024 characters. The benchmark_name object contains Measure names, slugs, or UUIDs as keys. In this example, latency is the slug for the Latency Measure. Each Project by default has a Latency (ie latency) and Throughput (ie throughput) Measure, which are measured in nanoseconds (ns) and operations / second (ops/s) respectively. The Measure object contains a Metric with up to three values: value, lower_value, and upper_value. The lower_value and upper_value are optional, and their calculation is benchmark harness specific.

In this example, the latency Measure object contains the following values:

  • A value of 88.0
  • A lower_value of 87.42
  • An upper_value of 88.88
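
A benchmark may also report more than one Measure. For example, a custom harness tracking both default Measures could emit BMF like this (the benchmark name and values are illustrative, and the optional lower_value and upper_value are omitted):

{
    "my_benchmark": {
        "latency": {
            "value": 88.0
        },
        "throughput": {
            "value": 1000.0
        }
    }
}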

If the BMF JSON is stored in a file, then you can use the bencher run CLI subcommand with the optional --file argument to specify that file path. This works both with a benchmark command (ex: bencher run "bencher mock > results.json" --file results.json) and without a benchmark command (ex: bencher mock > results.json && bencher run --file results.json).

description: Benchmark Name
type: object
additionalProperties:
  description: Measure Name, Slug, or UUID
  type: object
  properties:
    value:
      type: number
      format: float
      required: true
    lower_value:
      type: number
      format: float
      required: false
    upper_value:
      type: number
      format: float
      required: false

🐰 Note: The bencher mock CLI subcommand generates mock BMF Metrics.

#️⃣ C#

The C# Adapter (c_sharp) is a superset of c_sharp_dot_net.

#️⃣ C# DotNet

The C# DotNet Adapter (c_sharp_dot_net) expects BenchmarkDotNet output in JSON format (ie --exporters json). The latency Measure (ie nanoseconds (ns)) is gathered.

There are two options for the Metric:

  • mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
  • median: The lower_value and upper_value are one interquartile range below and above the median (ie value) respectively.

This can be specified in the bencher run CLI subcommand with the --average option.
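
For example, to use the median, you might run something like this (the dotnet invocation is illustrative and depends on your project setup):

bencher run --adapter c_sharp_dot_net --average median "dotnet run -c Release -- --exporters json"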

➕ C++

The C++ Adapter (cpp) is a superset of cpp_catch2 and cpp_google.

➕ C++ Catch2

The C++ Catch2 Adapter (cpp_catch2) expects Catch2 output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.

➕ C++ Google

The C++ Google Adapter (cpp_google) expects Google Benchmark output in JSON format (ie --benchmark_format=json). The latency Measure (ie nanoseconds (ns)) is gathered. Only the mean (ie value) is available. There are no lower_value and upper_value.
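
For example, with a hypothetical benchmark binary named my_benchmarks:

bencher run --adapter cpp_google "./my_benchmarks --benchmark_format=json"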

🕳 Go

The Go Adapter (go) is a superset of go_bench.

🕳 Go Bench

The Go Bench Adapter (go_bench) expects go test -bench output. The latency Measure (ie nanoseconds (ns)) is gathered. Only the mean (ie value) is available. There are no lower_value and upper_value.
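
For example, to run and track every benchmark in the current package:

bencher run --adapter go_bench "go test -bench=."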

☕️ Java

The Java Adapter (java) is a superset of java_jmh.

☕️ Java JMH

The Java JMH Adapter (java_jmh) expects Java Microbenchmark Harness (JMH) output in JSON format (ie -rf json). Both latency and throughput Measures (ie nanoseconds (ns) and operations / second (ops/sec)) may be gathered. The lower_value and upper_value are the lower and upper confidence intervals for the mean (ie value) respectively.
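
For example, one way to wire this up is to have JMH write its JSON results to a file and point bencher run at it (the jar name and results file path are illustrative; -rff sets the JMH result file):

bencher run --adapter java_jmh --file results.json "java -jar benchmarks.jar -rf json -rff results.json"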

🕸 JavaScript

The JavaScript Adapter (js) is a superset of js_benchmark and js_time.

🕸 JavaScript Benchmark

The JavaScript Benchmark Adapter (js_benchmark) expects Benchmark.js output. The throughput Measure (ie operations / second (ops/sec)) is gathered. The lower_value and upper_value are the relative margin of error below and above the median (ie value) respectively.

🕸 JavaScript Time

The JavaScript Time Adapter (js_time) expects console.time/console.timeEnd output. The latency Measure (ie nanoseconds (ns)) is gathered. Only the operation time (ie value) is available. There are no lower_value and upper_value.

🐍 Python

The Python Adapter (python) is a superset of python_asv and python_pytest.

🐍 Python ASV

The Python ASV Adapter (python_asv) expects airspeed velocity CLI asv run output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are the interquartile range below and above the median (ie value) respectively.
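
For example, assuming an already configured asv project:

bencher run --adapter python_asv "asv run"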

🐍 Python Pytest

The Python Pytest Adapter (python_pytest) expects pytest-benchmark output in JSON format (ie --benchmark-json results.json). This JSON output is saved to a file, so you must use the bencher run CLI --file argument to specify that file path (ie bencher run --file results.json "pipenv run pytest --benchmark-json results.json benchmarks.py"). The latency Measure (ie nanoseconds (ns)) is gathered.

There are two options for the Metric:

  • mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
  • median: The lower_value and upper_value are one interquartile range below and above the median (ie value) respectively.

This can be specified in the bencher run CLI subcommand with the optional --average argument.
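
For example, to use the median with the file-based workflow described above (mirroring the earlier example, without the pipenv wrapper):

bencher run --adapter python_pytest --average median --file results.json "pytest --benchmark-json results.json benchmarks.py"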

♦️ Ruby

The Ruby Adapter (ruby) is a superset of ruby_benchmark.

♦️ Ruby Benchmark

The Ruby Benchmark Adapter (ruby_benchmark) expects Benchmark module output for the #bm, #bmbm, and #benchmark methods. A label is required for each benchmark. The latency Measure (ie nanoseconds (ns)) is gathered. Only the reported value (ie value) is available. There are no lower_value and upper_value.

🦀 Rust

The Rust Adapter (rust) is a superset of rust_bench and rust_criterion.

🦀 Rust Bench

The Rust Bench Adapter (rust_bench) expects libtest bench output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are the deviation below and above the median (ie value) respectively.
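
For example, since libtest bench requires a nightly Rust toolchain, a typical invocation might be:

bencher run --adapter rust_bench "cargo +nightly bench"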

🦀 Rust Criterion

The Rust Criterion Adapter (rust_criterion) expects Criterion output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are the lower and upper bounds of either the slope (if available) or the mean (if not) (ie value) respectively.
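
For example, assuming cargo bench runs your Criterion benchmark suite:

bencher run --adapter rust_criterion "cargo bench"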

🦀 Rust Iai

The Rust Iai Adapter (rust_iai) expects Iai output. The instructions, l1_access, l2_access, ram_access, and estimated_cycles Measures are gathered. Only the reported value (ie value) is available for each Measure. There are no lower_value and upper_value. These Measures are not created by default for all Projects. However, when you use this adapter, they will be automatically created for your Project.

🦀 Rust Iai-Callgrind

The Rust Iai-Callgrind Adapter (rust_iai_callgrind) expects Iai-Callgrind output. The instructions, l1_access, l2_access, ram_access, total_accesses, and estimated_cycles Measures are gathered. Only the reported value (ie value) is available for each Measure. There are no lower_value and upper_value. These Measures are not created by default for all Projects. However, when you use this adapter, they will be automatically created for your Project.

❯_ Shell

The Shell Adapter (shell) is a superset of shell_hyperfine.

❯_ Shell Hyperfine

The Shell Hyperfine Adapter (shell_hyperfine) expects Hyperfine output in JSON format (ie --export-json results.json). This JSON output is saved to a file, so you must use the bencher run CLI --file argument to specify that file path (ie bencher run --file results.json "hyperfine --export-json results.json 'sleep 0.1'"). The latency Measure (ie nanoseconds (ns)) is gathered.

There are two options for the Metric:

  • mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
  • median: The lower_value and upper_value are the min and max values respectively.

This can be specified in the bencher run CLI subcommand with the --average option.
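
For example, to use the median with the file-based example above:

bencher run --adapter shell_hyperfine --average median --file results.json "hyperfine --export-json results.json 'sleep 0.1'"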



🐰 Congrats! You have learned all about benchmark harness adapters! 🎉


Keep Going: Thresholds & Alerts ➡


