How to Track Custom Benchmarks with Bencher
Bencher supports the most popular benchmarking harnesses out-of-the-box, and we are always open to suggestions for new adapters. However, there can be situations where an off-the-shelf benchmarking harness doesn't fit your needs, necessitating the creation of a custom benchmarking harness. Lucky for you, Bencher also supports using a custom benchmarking harness. The easiest way to integrate your custom benchmark harness with Bencher is to output Bencher Metric Format (BMF) JSON.
This is an example of BMF JSON:
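A minimal BMF JSON document, sketched from the `benchmark_name` key, `latency` Measure, and Metric values described below:

```json
{
    "benchmark_name": {
        "latency": {
            "value": 88.0,
            "lower_value": 87.42,
            "upper_value": 88.88
        }
    }
}
```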
In this example, the key `benchmark_name` would be the name of a Benchmark. Benchmark names can be any non-empty string up to 1024 characters. The `benchmark_name` object can contain multiple Measure names, slugs, or UUIDs as keys. If the value specified is a name or slug and the Measure does not already exist, it will be created for you. However, if the value specified is a UUID then the Measure must already exist. In this example, `latency` is the slug for the built-in Latency Measure. Each Project by default has a Latency (i.e. `latency`) and a Throughput (i.e. `throughput`) Measure, which are measured in nanoseconds (ns) and operations / second (ops/s), respectively.
The Measure object contains a Metric with up to three values: `value`, `lower_value`, and `upper_value`. The `lower_value` and `upper_value` values are optional.
In this example, the `latency` Measure object contains the following values:

- A `value` of `88.0`
- A `lower_value` of `87.42`
- An `upper_value` of `88.88`
You can use the `bencher mock` CLI subcommand to generate mock BMF data. We will use it as a placeholder for your own custom benchmark runner. Using `bencher run` and the `json` adapter, we can track our benchmarks with the following command:
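A sketch of that invocation, assuming your Project and API token are already configured (for example via the `--project` and `--token` options or their environment variables):

```sh
bencher run --adapter json bencher mock
```

Here `bencher mock` stands in for your own benchmark runner; `bencher run` executes it, captures the BMF JSON it prints to standard output, and parses it with the `json` adapter.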
If your results were instead stored in a file named `results.json`, then you could use the `--file` option to specify the file path. This works both with a benchmark command and without one.
With a benchmark command:
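One possible shape, where the quoted benchmark command writes its results to `results.json` for `bencher run` to read afterward (again using `bencher mock` as a stand-in for your own runner):

```sh
bencher run --file results.json --adapter json "bencher mock > results.json"
```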
Without a benchmark command:
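Here the benchmarks run first on their own, and `bencher run` only reads the results file; a sketch:

```sh
bencher mock > results.json
bencher run --file results.json --adapter json
```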
Multiple Measures
In Bencher Metric Format (BMF) JSON, the Benchmark object can contain multiple Measure names, slugs, or UUIDs as keys. If the value specified is a name or slug and the Measure does not already exist, it will be created for you. However, if the value specified is a UUID then the Measure must already exist. Each Measure object must contain a Metric with up to three values: `value`, `lower_value`, and `upper_value`. The `lower_value` and `upper_value` values are optional.
This is an example of BMF JSON with multiple Measures:
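A sketch of such a document, using the `latency` and `throughput` Measure values described below:

```json
{
    "benchmark_name": {
        "latency": {
            "value": 88.0,
            "lower_value": 87.42,
            "upper_value": 88.88
        },
        "throughput": {
            "value": 5.55,
            "lower_value": 3.14,
            "upper_value": 6.30
        }
    }
}
```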
In this example, the `latency` Measure object contains the following values:

- A `value` of `88.0`
- A `lower_value` of `87.42`
- An `upper_value` of `88.88`
And the `throughput` Measure object contains the following values:

- A `value` of `5.55`
- A `lower_value` of `3.14`
- An `upper_value` of `6.30`
You can use the `bencher mock` CLI subcommand with the `--measure` option to generate mock BMF data with multiple Measures. We will use it as a placeholder for your own custom benchmark runner. Using `bencher run` and the `json` adapter, we can track our benchmarks with multiple Measures with the following command:
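A sketch of that invocation; the `latency` and `throughput` slugs below are the two Measures from this example, and `--measure` is passed once per Measure to the mock runner:

```sh
bencher run --adapter json "bencher mock --measure latency --measure throughput"
```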
🐰 Congrats! You have learned how to track custom benchmarks! 🎉