Benchmark Harness Adapters
Adapters convert benchmark harness output into standardized JSON, the Bencher Metric Format (BMF).
The adapters run on the API server when a new report is received.
See the benchmarking overview for a more in-depth explanation.
They can be specified in the bencher run CLI subcommand with the optional --adapter flag.
If no adapter is specified, the magic adapter is used by default.
It is best to use the most specific adapter for your use case, as this will provide both the most accurate and the most performant parsing.
For example, if you are parsing Rust libtest bench output, you should use the rust_bench adapter, not the magic or rust adapter.
See our Bencher perf page for a good comparison.
🪄 Magic (default)
The Magic Adapter (magic) is a superset of all other adapters.
For that reason, it is the default adapter for bencher run, but it is best used for exploration only.
In CI, you should use the most specific adapter for your use case.
{…} JSON
The JSON Adapter (json) expects BMF JSON.
It is perfect for integrating custom benchmark harnesses with Bencher.
Example of BMF JSON:
{
    "benchmark_name": {
        "latency": {
            "value": 88.0,
            "lower_value": 87.42,
            "upper_value": 88.88
        }
    }
}
In this example, the key benchmark_name would be the name of a benchmark.
Benchmark names can be any non-empty string up to 1024 characters.
The benchmark_name object contains Metric Kind slugs or UUIDs as keys.
In this example, latency is the slug for the Latency Metric Kind.
Each Project by default has a Latency (ie latency) and a Throughput (ie throughput) Metric Kind, which are measured in nanoseconds (ns) and operations / second (ops/sec) respectively.
The Metric Kind object contains a Metric with up to three measures: value, lower_value, and upper_value.
The lower_value and upper_value measures are optional, and their calculation is benchmark harness specific.
In this example, the latency Metric Kind object contains the following measures:
- A value of 88.0
- A lower_value of 87.42
- An upper_value of 88.88
If the BMF JSON is stored in a file, then you can use the bencher run CLI subcommand with the optional --file argument to specify that file path.
This works both with a benchmark command (ex: bencher run "bencher mock > results.json" --file results.json) and without a benchmark command (ex: bencher mock > results.json && bencher run --file results.json).
🐰 Note: The bencher mock CLI subcommand generates mock BMF Metrics.
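To integrate a custom benchmark harness, all you need to do is emit BMF JSON. Below is a minimal sketch of what such a harness might look like in Python; the fib function, the benchmark name fib_10, and the one-standard-deviation spread for the optional lower_value and upper_value measures are all hypothetical choices, not requirements of the format:

```python
# custom_harness.py: a hypothetical custom benchmark harness
# that prints Bencher Metric Format (BMF) JSON to stdout.
import json
import statistics
import time


def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)


def benchmark(func, *args, iterations: int = 100) -> dict:
    # Time each iteration in nanoseconds, matching the
    # default Latency Metric Kind unit (ns).
    samples = []
    for _ in range(iterations):
        start = time.perf_counter_ns()
        func(*args)
        samples.append(time.perf_counter_ns() - start)
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    # One standard deviation below and above the mean is just one
    # reasonable choice; the bounds are benchmark harness specific.
    return {
        "latency": {
            "value": mean,
            "lower_value": mean - stdev,
            "upper_value": mean + stdev,
        }
    }


if __name__ == "__main__":
    print(json.dumps({"fib_10": benchmark(fib, 10)}))
```

Since this harness prints BMF JSON to stdout, it could be run as bencher run --adapter json "python3 custom_harness.py", or its output could be redirected to a file and passed with the --file argument as described above.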
#️⃣ C#
The C# Adapter (c_sharp) is a superset of c_sharp_dot_net.
#️⃣ C# DotNet
The C# DotNet Adapter (c_sharp_dot_net) expects BenchmarkDotNet output in JSON format (ie --exporters json).
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
There are two options for the Metric:
- mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
- median: The lower_value and upper_value are one interquartile range below and above the median (ie value) respectively.
This can be specified in the bencher run CLI subcommand with the optional --average flag.
➕ C++
The C++ Adapter (cpp) is a superset of cpp_catch2 and cpp_google.
➕ C++ Catch2
The C++ Catch2 Adapter (cpp_catch2) expects Catch2 output.
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
➕ C++ Google
The C++ Google Adapter (cpp_google) expects Google Benchmark output in JSON format (ie --benchmark_format=json).
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
Only the mean (ie value) is available. There are no lower_value or upper_value measures.
🕳 Go
The Go Adapter (go) is a superset of go_bench.
🕳 Go Bench
The Go Bench Adapter (go_bench) expects go test -bench output.
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
Only the mean (ie value) is available. There are no lower_value or upper_value measures.
☕️ Java
The Java Adapter (java) is a superset of java_jmh.
☕️ Java JMH
The Java JMH Adapter (java_jmh) expects Java Microbenchmark Harness (JMH) output in JSON format (ie -rf json).
Both the latency and throughput Metric Kinds (ie nanoseconds (ns) and operations / second (ops/sec)) may be gathered.
The lower_value and upper_value are the lower and upper bounds of the confidence interval for the mean (ie value) respectively.
🕸 JavaScript
The JavaScript Adapter (js) is a superset of js_benchmark and js_time.
🕸 JavaScript Benchmark
The JavaScript Benchmark Adapter (js_benchmark) expects Benchmark.js output.
The throughput Metric Kind (ie operations / second (ops/sec)) is gathered.
The lower_value and upper_value are the relative margin of error below and above the median (ie value) respectively.
🕸 JavaScript Time
The JavaScript Time Adapter (js_time) expects console.time/console.timeEnd output.
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
Only the operation time (ie value) is available. There are no lower_value or upper_value measures.
🐍 Python
The Python Adapter (python) is a superset of python_asv and python_pytest.
🐍 Python ASV
The Python ASV Adapter (python_asv) expects airspeed velocity asv run CLI output.
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
The lower_value and upper_value are one interquartile range below and above the median (ie value) respectively.
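As a point of reference, a minimal airspeed velocity suite might look like the following sketch; the class and method names are hypothetical, and asv discovers timing benchmarks by their time_ prefix:

```python
# benchmarks/benchmarks.py: a hypothetical airspeed velocity suite.
# asv treats methods prefixed with `time_` as timing benchmarks.
class TimeSorting:
    def setup(self):
        # Runs before each benchmark method is timed.
        self.data = list(range(10_000))[::-1]

    def time_sorted(self):
        sorted(self.data)
```

The asv run output for a suite like this is what the python_asv adapter parses, for example via bencher run --adapter python_asv "asv run".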
🐍 Python Pytest
The Python Pytest Adapter (python_pytest) expects pytest-benchmark output in JSON format (ie --benchmark-json results.json).
This JSON output is saved to a file, so you must use the bencher run CLI --file argument to specify that file path (ie bencher run --file results.json "pipenv run pytest --benchmark-json results.json benchmarks.py").
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
There are two options for the Metric:
- mean (default): The lower_value and upper_value are one standard deviation below and above the mean (ie value) respectively.
- median: The lower_value and upper_value are one interquartile range below and above the median (ie value) respectively.
This can be specified in the bencher run CLI subcommand with the optional --average flag.
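A pytest-benchmark suite that could feed this adapter might look like the following sketch; the fib function and test name are hypothetical, while the benchmark fixture is provided by pytest-benchmark:

```python
# benchmarks.py: a hypothetical pytest-benchmark suite.
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)


def test_fib_10(benchmark):
    # The `benchmark` fixture calls `fib(10)` repeatedly
    # and records the timing statistics.
    result = benchmark(fib, 10)
    assert result == 55
```

Running pytest --benchmark-json results.json benchmarks.py produces the JSON file that bencher run --file results.json then parses.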
♦️ Ruby
The Ruby Adapter (ruby) is a superset of ruby_benchmark.
♦️ Ruby Benchmark
The Ruby Benchmark Adapter (ruby_benchmark) expects Benchmark module output for the #bm, #bmbm, and #benchmark methods.
A label is required for each benchmark.
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
Only the reported value (ie value) is available. There are no lower_value or upper_value measures.
🦀 Rust
The Rust Adapter (rust) is a superset of rust_bench and rust_criterion.
🦀 Rust Bench
The Rust Bench Adapter (rust_bench) expects libtest bench output.
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
The lower_value and upper_value are the deviation below and above the median (ie value) respectively.
🦀 Rust Criterion
The Rust Criterion Adapter (rust_criterion) expects Criterion output.
The latency Metric Kind (ie nanoseconds (ns)) is gathered.
The lower_value and upper_value are the lower and upper bounds of either the slope (if available) or the mean (if not) (ie value) respectively.
🦀 Rust Iai
The Rust Iai Adapter (rust_iai) expects Iai output.
The instructions, l1_access, l2_access, ram_access, and estimated_cycles Metric Kinds are gathered.
Only these measures (ie value) are available. There are no lower_value or upper_value measures.
The Metric Kinds for this adapter are not created by default for all Projects.
However, when you use this adapter, these Metric Kinds will be automatically created for your Project.
🐰 Congrats! You have learned all about benchmark harness adapters! 🎉