Benchmark Harness Adapters
Adapters convert benchmark harness output into Bencher Metric Format (BMF) JSON.
The adapters run on the API server when a new report is received.
See the benchmarking overview for more details.
An adapter can be specified for the `bencher run` CLI subcommand with the `--adapter` option. If no adapter is specified, the `magic` adapter is used by default.
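For example, a minimal sketch that selects an adapter explicitly (here using the built-in `bencher mock` command, which generates fake metrics, as the benchmark command):

```sh
# Explicitly select the json adapter instead of relying on magic
bencher run --adapter json "bencher mock"
```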
🪄 Magic (default)
The Magic Adapter (`magic`) is a superset of all other adapters. For that reason, it is the default adapter for `bencher run`. However, the `magic` adapter should be used for exploration only. For best results, you should specify a benchmark harness adapter:
- {…} JSON
- #️⃣ C# BenchmarkDotNet
- ➕ C++ Catch2
- ➕ C++ Google Benchmark
- 🕳 Go test -bench
- ☕️ Java Microbenchmark Harness (JMH)
- 🕸 JavaScript Benchmark.js
- 🕸 JavaScript console.time/console.timeEnd
- 🐍 Python airspeed velocity (asv)
- 🐍 Python pytest-benchmark
- ♦️ Ruby Benchmark
- 🦀 Rust libtest bench
- 🦀 Rust Criterion
- 🦀 Rust Iai
- 🦀 Rust Iai-Callgrind
- ❯_ Shell Hyperfine
{…} JSON
The JSON Adapter (`json`) expects Bencher Metric Format (BMF) JSON. It is perfect for integrating custom benchmark harnesses with Bencher. For more details, see how to track custom benchmarks and the BMF JSON reference.
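As a rough sketch, a minimal BMF JSON report maps each benchmark name to one or more Measures, each with a `value` and optional `lower_value` and `upper_value`. The benchmark name below is illustrative:

```json
{
    "my_benchmark": {
        "latency": {
            "value": 88.0,
            "lower_value": 87.42,
            "upper_value": 88.88
        }
    }
}
```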
⏱️ Build Time
The `bencher run` CLI subcommand can be used to track the build time (ie compile time) of your deliverables with the `--build-time` flag. Under the hood, `bencher run` outputs the results as Bencher Metric Format (BMF) JSON. It is therefore good practice to explicitly use the `json` adapter. For more details, see how to track build time.
The `build-time` Measure (ie seconds (s)) is gathered. Only the build time value (ie `value`) is available. Neither `lower_value` nor `upper_value` is collected.
The `build-time` Measure is not created by default for all Projects. However, when you use the `--build-time` flag, this Measure will be automatically created for your Project.
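For example, a minimal sketch that tracks the build time of a Rust release build; the `cargo build --release` command is just a placeholder for your own build step:

```sh
# Time the build and report it as BMF JSON
bencher run --build-time --adapter json "cargo build --release"
```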
⚖️ File Size
The `bencher run` CLI subcommand can be used to track the file size (ie binary size) of your deliverables with the `--file-size` option. The `--file-size` option expects a file path to the file whose size will be measured. Under the hood, `bencher run` outputs the results as Bencher Metric Format (BMF) JSON. It is therefore good practice to explicitly use the `json` adapter. For more details, see how to track file size.
The `file-size` Measure (ie bytes (B)) is gathered. Only the file size value (ie `value`) is available. Neither `lower_value` nor `upper_value` is collected.
The `file-size` Measure is not created by default for all Projects. However, when you use the `--file-size` option, this Measure will be automatically created for your Project. The `--file-size` option can be used multiple times to track multiple file sizes.
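For example, a minimal sketch that tracks the size of a compiled binary; the binary path and build command are assumptions for a Rust project:

```sh
# Build the deliverable, then record its file size
bencher run \
    --file-size ./target/release/my-bin \
    --adapter json \
    "cargo build --release"
```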
#️⃣ C# DotNet
The C# DotNet Adapter (`c_sharp_dot_net`) expects BenchmarkDotNet output in JSON format (ie `--exporters json`). The `latency` Measure (ie nanoseconds (ns)) is gathered. This JSON output is saved to a file, so you must use the `bencher run` CLI `--file` option to specify that file path.
There are two options for the Metric:

- `mean` (default): The `lower_value` and `upper_value` are one standard deviation below and above the mean (ie `value`) respectively.
- `median`: The `lower_value` and `upper_value` are one interquartile range below and above the median (ie `value`) respectively.

This can be specified in the `bencher run` CLI subcommand with the `--average` option.
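Putting it together, an invocation might look like the following sketch; the results file path is an assumption, since BenchmarkDotNet writes its reports under its artifacts directory:

```sh
# Run BenchmarkDotNet with the JSON exporter and parse its report
bencher run \
    --adapter c_sharp_dot_net \
    --file BenchmarkDotNet.Artifacts/results/MyBenchmarks-report.json \
    "dotnet run -c Release -- --exporters json"
```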
➕ C++ Catch2
The C++ Catch2 Adapter (`cpp_catch2`) expects Catch2 output. The `latency` Measure (ie nanoseconds (ns)) is gathered. The `lower_value` and `upper_value` are one standard deviation below and above the mean (ie `value`) respectively.
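A minimal sketch, assuming your Catch2 benchmarks are compiled into a binary named `./benchmarks`:

```sh
# Parse the Catch2 benchmark output from stdout
bencher run --adapter cpp_catch2 "./benchmarks"
```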
➕ C++ Google
The C++ Google Adapter (`cpp_google`) expects Google Benchmark output in JSON format (ie `--benchmark_format=json`). The `latency` Measure (ie nanoseconds (ns)) is gathered. Only the mean (ie `value`) is available. Neither `lower_value` nor `upper_value` is collected.
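A minimal sketch, assuming a benchmark binary named `./benchmarks`:

```sh
# Have Google Benchmark emit JSON and parse it from stdout
bencher run --adapter cpp_google "./benchmarks --benchmark_format=json"
```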
🕳 Go Bench
The Go Bench Adapter (`go_bench`) expects `go test -bench` output. The `latency` Measure (ie nanoseconds (ns)) is gathered. Only the mean (ie `value`) is available. Neither `lower_value` nor `upper_value` is collected.
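A minimal sketch using the standard Go toolchain:

```sh
# Run all Go benchmarks and parse the output from stdout
bencher run --adapter go_bench "go test -bench=."
```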
☕️ Java JMH
The Java JMH Adapter (`java_jmh`) expects Java Microbenchmark Harness (JMH) output in JSON format (ie `-rf json`). This JSON output is saved to a file, so you must use the `bencher run` CLI `--file` option to specify that file path. Both `latency` and `throughput` Measures (ie nanoseconds (ns) and operations / second (ops/sec)) may be gathered. The `lower_value` and `upper_value` are the lower and upper confidence intervals for the mean (ie `value`) respectively.
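A minimal sketch, assuming your JMH benchmarks are packaged as `benchmarks.jar`; `-rf json` selects the JSON result format and `-rff` sets the result file path:

```sh
# Run JMH, write JSON results to a file, then parse that file
bencher run \
    --adapter java_jmh \
    --file results.json \
    "java -jar benchmarks.jar -rf json -rff results.json"
```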
🕸 JavaScript Benchmark
The JavaScript Benchmark Adapter (`js_benchmark`) expects Benchmark.js output. The `throughput` Measure (ie operations / second (ops/sec)) is gathered. The `lower_value` and `upper_value` are the relative margin of error below and above the median (ie `value`) respectively.
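A minimal sketch, assuming a script `benchmarks.js` that runs your Benchmark.js suites and logs the results:

```sh
# Parse the Benchmark.js output from stdout
bencher run --adapter js_benchmark "node benchmarks.js"
```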
🕸 JavaScript Time
The JavaScript Time Adapter (`js_time`) expects `console.time`/`console.timeEnd` output. The `latency` Measure (ie nanoseconds (ns)) is gathered. Only the operation time (ie `value`) is available. Neither `lower_value` nor `upper_value` is collected.
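A minimal sketch, assuming a script that wraps the code under test in `console.time`/`console.timeEnd` calls:

```sh
# Parse the console timing output from stdout
bencher run --adapter js_time "node benchmarks.js"
```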
🐍 Python ASV
The Python ASV Adapter (`python_asv`) expects airspeed velocity CLI `asv run` output. The `latency` Measure (ie nanoseconds (ns)) is gathered. The `lower_value` and `upper_value` are the interquartile range below and above the median (ie `value`) respectively.
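A minimal sketch using the asv CLI directly:

```sh
# Run the airspeed velocity benchmarks and parse the output
bencher run --adapter python_asv "asv run"
```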
🐍 Python Pytest
The Python Pytest Adapter (`python_pytest`) expects pytest-benchmark output in JSON format (ie `--benchmark-json results.json`). This JSON output is saved to a file, so you must use the `bencher run` CLI `--file` option to specify that file path. The `latency` Measure (ie nanoseconds (ns)) is gathered.
There are two options for the Metric:

- `mean` (default): The `lower_value` and `upper_value` are one standard deviation below and above the mean (ie `value`) respectively.
- `median`: The `lower_value` and `upper_value` are one interquartile range below and above the median (ie `value`) respectively.

This can be specified in the `bencher run` CLI subcommand with the `--average` option.
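Putting it together, a minimal sketch where `benchmarks.py` holds the pytest-benchmark tests:

```sh
# Run pytest-benchmark, write JSON results, then parse that file
bencher run \
    --adapter python_pytest \
    --file results.json \
    "pytest --benchmark-json results.json benchmarks.py"
```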
♦️ Ruby Benchmark
The Ruby Benchmark Adapter (`ruby_benchmark`) expects Benchmark module output for the `#bm`, `#bmbm`, and `#benchmark` methods. A label is required for each benchmark. The `latency` Measure (ie nanoseconds (ns)) is gathered. Only the reported value (ie `value`) is available. Neither `lower_value` nor `upper_value` is collected.
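A minimal sketch, assuming a script `benchmarks.rb` that uses the Benchmark module with a label for each report:

```sh
# Parse the labeled Benchmark output from stdout
bencher run --adapter ruby_benchmark "ruby benchmarks.rb"
```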
🦀 Rust Bench
The Rust Bench Adapter (`rust_bench`) expects libtest bench output. The `latency` Measure (ie nanoseconds (ns)) is gathered. The `lower_value` and `upper_value` are the deviation below and above the median (ie `value`) respectively.
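A minimal sketch; libtest bench requires a nightly Rust toolchain:

```sh
# Run the libtest benchmarks on nightly and parse the output
bencher run --adapter rust_bench "cargo +nightly bench"
```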
🦀 Rust Criterion
The Rust Criterion Adapter (`rust_criterion`) expects Criterion output. The `latency` Measure (ie nanoseconds (ns)) is gathered. The `lower_value` and `upper_value` are the lower and upper bounds of either the slope (if available) or the mean (if not) (ie `value`) respectively.
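A minimal sketch, assuming your Criterion benchmarks run under `cargo bench`:

```sh
# Run the Criterion benchmarks and parse the output
bencher run --adapter rust_criterion "cargo bench"
```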
🦀 Rust Iai
The Rust Iai Adapter (`rust_iai`) expects Iai output. The `instructions`, `l1-accesses`, `l2-accesses`, `ram-accesses`, and `estimated-cycles` Measures are gathered. Only the reported value (ie `value`) is available for these Measures. Neither `lower_value` nor `upper_value` is collected.
The Measures for this adapter are not created by default for all Projects. However, when you use this adapter, these Measures will be automatically created for your Project.
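A minimal sketch, assuming your Iai benchmarks run under `cargo bench` and Valgrind is installed:

```sh
# Run the Iai benchmarks and parse the output
bencher run --adapter rust_iai "cargo bench"
```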
🦀 Rust Iai-Callgrind
The Rust Iai-Callgrind Adapter (`rust_iai_callgrind`) expects Iai-Callgrind output. The `instructions`, `l1-hits`, `l2-hits`, `ram-hits`, `total-read-write`, and `estimated-cycles` Measures are gathered. Only the reported value (ie `value`) is available for these Measures. Neither `lower_value` nor `upper_value` is collected.
The Measures for this adapter are not created by default for all Projects. However, when you use this adapter, these Measures will be automatically created for your Project.
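A minimal sketch, assuming your Iai-Callgrind benchmarks run under `cargo bench` and Valgrind is installed:

```sh
# Run the Iai-Callgrind benchmarks and parse the output
bencher run --adapter rust_iai_callgrind "cargo bench"
```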
❯_ Shell Hyperfine
The Shell Hyperfine Adapter (`shell_hyperfine`) expects Hyperfine output in JSON format (ie `--export-json results.json`). This JSON output is saved to a file, so you must use the `bencher run` CLI `--file` option to specify that file path. The `latency` Measure (ie nanoseconds (ns)) is gathered.
There are two options for the Metric:

- `mean` (default): The `lower_value` and `upper_value` are one standard deviation below and above the mean (ie `value`) respectively.
- `median`: The `lower_value` and `upper_value` are the `min` and `max` values respectively.

This can be specified in the `bencher run` CLI subcommand with the `--average` option.
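Putting it together, a minimal sketch that benchmarks a placeholder command with Hyperfine:

```sh
# Run Hyperfine, write JSON results, then parse that file
bencher run \
    --adapter shell_hyperfine \
    --file results.json \
    "hyperfine --export-json results.json 'sleep 0.1'"
```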
🐰 Congrats! You have learned all about benchmark harness adapters! 🎉