Benchmark Harness Adapters


Adapters convert benchmark harness output into Bencher Metric Format (BMF) JSON. The adapters run on the API server when a new report is received. See the benchmarking overview for more details. An adapter can be specified for the bencher run CLI subcommand with the --adapter option. If no adapter is specified, the magic adapter is used by default.

🪄 Magic (default)

The Magic Adapter (magic) is a superset of all other adapters. For that reason, it is the default adapter for bencher run. However, the magic adapter should be used for exploration only.

For best results, you should specify a benchmark harness adapter explicitly.


{…} JSON

The JSON Adapter (json) expects Bencher Metric Format (BMF) JSON. It is perfect for integrating custom benchmark harnesses with Bencher. For more details see how to track custom benchmarks and the BMF JSON reference.
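As a sketch of what a custom integration might look like, the following Python snippet times a function and shapes the result as a BMF-style payload with a latency Measure. The schema shown here (benchmark name → Measure name → value/lower_value/upper_value) and the one-standard-deviation bounds are illustrative assumptions; check the BMF JSON reference for the authoritative format.

```python
import json
import statistics
import time


def benchmark(func, iterations=100):
    """Time `func` and return latency samples in nanoseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        func()
        samples.append(time.perf_counter_ns() - start)
    return samples


def to_bmf(name, samples):
    """Build a BMF-style entry: mean latency as `value`, with one
    standard deviation as the bounds (an illustrative choice)."""
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    return {
        name: {
            "latency": {
                "value": mean,
                "lower_value": mean - stdev,
                "upper_value": mean + stdev,
            }
        }
    }


report = to_bmf("my_benchmark", benchmark(lambda: sum(range(1000))))
print(json.dumps(report, indent=2))
```

A custom harness would print this JSON (or write it to a file passed via --file) and let the json adapter pick it up.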

⚖️ File Size

The bencher run CLI subcommand can be used to track the file size (ie binary size) of your deliverables with the --file-size option. The --file-size option expects a file path to the file whose size will be measured. Under the hood, bencher run outputs the results as Bencher Metric Format (BMF) JSON. It is therefore good practice to explicitly use the json adapter. For more details see how to track file size.

The file-size Measure (ie bytes (B)) is gathered. Only the file size value (ie value) is available. Neither lower_value nor upper_value are collected. The file-size Measure is not created by default for all Projects. However, when you use the --file-size option, this Measure will be automatically created for your Project.
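To illustrate roughly what this option does, here is a hypothetical Python sketch that measures a file's size in bytes and shapes it as a BMF-style file-size entry. The exact JSON keys bencher run emits are an assumption here; only the general shape (a single value, no bounds) comes from the description above.

```python
import json
import os
import tempfile


def file_size_bmf(path):
    """Approximate what `bencher run --file-size` reports: a single
    `file-size` Metric in bytes, with only `value` set (no bounds)."""
    return {
        os.path.basename(path): {
            "file-size": {"value": os.path.getsize(path)}
        }
    }


# Demo against a throwaway file standing in for a real deliverable.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    f.write(b"\x00" * 1024)
    path = f.name

print(json.dumps(file_size_bmf(path), indent=2))
os.remove(path)
```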


#️⃣ C# DotNet

The C# DotNet Adapter (c_sharp_dot_net) expects BenchmarkDotNet output in JSON format (ie --exporters json). The latency Measure (ie nanoseconds (ns)) is gathered.

There are two options for the Metric:

  • mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
  • median: The lower_value and upper_value are one interquartile range below and above the median (ie value) respectively.

This can be specified in the bencher run CLI subcommand with the --average option.
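The two averaging modes can be sketched with Python's standard statistics module. This is only an illustration of the definitions above (mean ± one standard deviation, median ± one interquartile range); the exact statistics BenchmarkDotNet reports and the adapter consumes may be computed differently.

```python
import statistics


def metric_bounds(samples, average="mean"):
    """Compute (lower_value, value, upper_value) per the two
    --average modes: mean +/- one standard deviation, or
    median +/- one interquartile range."""
    if average == "mean":
        center = statistics.fmean(samples)
        spread = statistics.stdev(samples)
    elif average == "median":
        center = statistics.median(samples)
        q1, _, q3 = statistics.quantiles(samples, n=4)
        spread = q3 - q1  # interquartile range
    else:
        raise ValueError(f"unknown average: {average}")
    return center - spread, center, center + spread


samples = [10.0, 12.0, 11.0, 13.0, 50.0]  # one outlier
print(metric_bounds(samples, "mean"))
print(metric_bounds(samples, "median"))
```

Note how the median mode is far less sensitive to the outlier sample, which is the usual reason to prefer it.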


➕ C++ Catch2

The C++ Catch2 Adapter (cpp_catch2) expects Catch2 output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.

➕ C++ Google

The C++ Google Adapter (cpp_google) expects Google Benchmark output in JSON format (ie --benchmark_format=json). The latency Measure (ie nanoseconds (ns)) is gathered. Only the mean (ie value) is available. Neither lower_value nor upper_value are collected.


🕳 Go Bench

The Go Bench Adapter (go_bench) expects go test -bench output. The latency Measure (ie nanoseconds (ns)) is gathered. Only the mean (ie value) is available. Neither lower_value nor upper_value are collected.
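As an illustration of the kind of line this adapter parses, here is a minimal Python sketch that extracts the name, iteration count, and ns/op value from a typical go test -bench result line. The real adapter handles more output variations than this single regex; it is a simplified assumption of the format.

```python
import re

# Simplified pattern for one `go test -bench` result line, e.g.:
#   BenchmarkFib-8   1000000   1234 ns/op
BENCH_LINE = re.compile(r"^(Benchmark\S+)\s+(\d+)\s+([\d.]+) ns/op")


def parse_go_bench(line):
    """Return (name, iterations, ns_per_op), or None if the line
    is not a benchmark result line."""
    m = BENCH_LINE.match(line)
    if m is None:
        return None
    name, iters, ns = m.groups()
    return name, int(iters), float(ns)


print(parse_go_bench("BenchmarkFib-8   1000000   1234 ns/op"))
```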


☕️ Java JMH

The Java JMH Adapter (java_jmh) expects Java Microbenchmark Harness (JMH) output in JSON format (ie -rf json). Both latency and throughput Measures (ie nanoseconds (ns) and operations / second (ops/sec)) may be gathered. The lower_value and upper_value are the lower and upper confidence intervals for the mean (ie value) respectively.


🕸 JavaScript Benchmark

The JavaScript Benchmark Adapter (js_benchmark) expects Benchmark.js output. The throughput Measure (ie operations / second (ops/sec)) is gathered. The lower_value and upper_value are the relative margin of error below and above the median (ie value) respectively.

🕸 JavaScript Time

The JavaScript Time Adapter (js_time) expects console.time/console.timeEnd output. The latency Measure (ie nanoseconds (ns)) is gathered. Only the operation time (ie value) is available. Neither lower_value nor upper_value are collected.


🐍 Python ASV

The Python ASV Adapter (python_asv) expects airspeed velocity CLI asv run output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are the interquartile range below and above the median (ie value) respectively.

🐍 Python Pytest

The Python Pytest Adapter (python_pytest) expects pytest-benchmark output in JSON format (ie --benchmark-json results.json). This JSON output is saved to a file, so you must use the bencher run CLI --file option to specify that file path (ie bencher run --file results.json "pipenv run pytest --benchmark-json results.json benchmarks.py"). The latency Measure (ie nanoseconds (ns)) is gathered.

There are two options for the Metric:

  • mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
  • median: The lower_value and upper_value are one interquartile range below and above the median (ie value) respectively.

This can be specified in the bencher run CLI subcommand with the --average option.


♦️ Ruby Benchmark

The Ruby Benchmark Adapter (ruby_benchmark) expects Benchmark module output for the #bm, #bmbm, and #benchmark methods. A label is required for each benchmark. The latency Measure (ie nanoseconds (ns)) is gathered. Only the reported value (ie value) is available. Neither lower_value nor upper_value are collected.


🦀 Rust Bench

The Rust Bench Adapter (rust_bench) expects libtest bench output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are the deviation below and above the median (ie value) respectively.
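To make the value/deviation relationship concrete, here is a minimal Python sketch that parses a typical libtest bench line into (lower_value, value, upper_value). The line format shown in the comment is a simplified assumption; the real adapter is more robust.

```python
import re

# Simplified pattern for one libtest bench result line, e.g.:
#   test bench_fib ... bench:       3,161 ns/iter (+/- 975)
BENCH_LINE = re.compile(
    r"^test (\S+) \.\.\. bench:\s+([\d,]+) ns/iter \(\+/- ([\d,]+)\)"
)


def parse_rust_bench(line):
    """Return (name, (lower_value, value, upper_value)) in ns,
    applying the reported deviation below and above the value."""
    m = BENCH_LINE.match(line)
    if m is None:
        return None
    name, value, dev = m.groups()
    value = float(value.replace(",", ""))
    dev = float(dev.replace(",", ""))
    return name, (value - dev, value, value + dev)


print(parse_rust_bench("test bench_fib ... bench:       3,161 ns/iter (+/- 975)"))
```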

🦀 Rust Criterion

The Rust Criterion Adapter (rust_criterion) expects Criterion output. The latency Measure (ie nanoseconds (ns)) is gathered. The lower_value and upper_value are the lower and upper bounds of either the slope (if available) or the mean (if not) (ie value) respectively.

🦀 Rust Iai

The Rust Iai Adapter (rust_iai) expects Iai output. The instructions, l1_access, l2_access, ram_access, and estimated_cycles Measures are gathered. Only the reported value (ie value) is available for these Measures. Neither lower_value nor upper_value are collected. The Measures for this adapter are not created by default for all Projects. However, when you use this adapter, these Measures will be automatically created for your Project.

🦀 Rust Iai-Callgrind

The Rust Iai-Callgrind Adapter (rust_iai_callgrind) expects Iai-Callgrind output. The instructions, l1_access, l2_access, ram_access, total_accesses, and estimated_cycles Measures are gathered. Only the reported value (ie value) is available for these Measures. Neither lower_value nor upper_value are collected. The Measures for this adapter are not created by default for all Projects. However, when you use this adapter, these Measures will be automatically created for your Project.


❯_️ Shell Hyperfine

The Shell Hyperfine Adapter (shell_hyperfine) expects Hyperfine output in JSON format (ie --export-json results.json). This JSON output is saved to a file, so you must use the bencher run CLI --file option to specify that file path (ie bencher run --file results.json "hyperfine --export-json results.json 'sleep 0.1'"). The latency Measure (ie nanoseconds (ns)) is gathered.

There are two options for the Metric:

  • mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
  • median: The lower_value and upper_value are the minimum and maximum values respectively.

This can be specified in the bencher run CLI subcommand with the --average option.
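As a sketch of the conversion this adapter performs, the following Python snippet maps Hyperfine's JSON export (which reports times in seconds under keys such as mean, stddev, median, min, and max) into a BMF-style latency entry in nanoseconds, mirroring the two --average modes described above. The field names and the exact BMF shape are assumptions to verify against Hyperfine's export and the BMF JSON reference.

```python
SECONDS_TO_NS = 1_000_000_000


def hyperfine_to_bmf(results_json, average="mean"):
    """Convert a Hyperfine-style JSON export (seconds) into a
    BMF-style latency entry (nanoseconds)."""
    bmf = {}
    for result in results_json["results"]:
        if average == "mean":
            value = result["mean"]
            lower = value - result["stddev"]
            upper = value + result["stddev"]
        else:  # median mode: bounds are the min and max runs
            value = result["median"]
            lower = result["min"]
            upper = result["max"]
        bmf[result["command"]] = {
            "latency": {
                "value": value * SECONDS_TO_NS,
                "lower_value": lower * SECONDS_TO_NS,
                "upper_value": upper * SECONDS_TO_NS,
            }
        }
    return bmf


sample = {
    "results": [
        {"command": "sleep 0.1", "mean": 0.105, "stddev": 0.002,
         "median": 0.104, "min": 0.101, "max": 0.110}
    ]
}
print(hyperfine_to_bmf(sample, "median"))
```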



🐰 Congrats! You have learned all about benchmark harness adapters! 🎉


Keep Going: Thresholds & Alerts ➡



Published: Sat, August 12, 2023 at 4:07:00 PM UTC | Last Updated: Thu, May 9, 2024 at 5:17:00 PM UTC