Prior Art
Benchmark Tracking Tools
- General
 - Bencher - Continuous Benchmarking: Catch performance regressions in CI
 - benchmark-action/github-action-benchmark - GitHub Action for continuous benchmarking to keep performance
 - conbench/conbench - Language-independent Continuous Benchmarking (CB) Framework
 - Hyperfoil/Horreum - Benchmark results repository service
 - It4innovations/snailwatch - Continuous performance monitoring service
 - nyrkio/nyrkio - Nyrkiö is an open source platform for detecting performance changes
 - seriesci - Track any value in CI: bundle size, build time, lines of code, number of dependencies, benchmarks, and much more
 - smarr/ReBench - Execute and document benchmarks reproducibly
 
- Benchmark Harness Specific
 - airspeed-velocity/asv - Airspeed Velocity: A simple Python benchmarking tool with web-based reporting
 - bamlab/flashlight - Flashlight is a Lighthouse-like tool for mobile apps. No installation required.
 - benchhub/benchhub - A service for running database benchmarks and saving the result
 - bheisler/criterion.rs - Statistics-driven benchmarking library for Rust
 - blacha/hyperfine-action - Run hyperfine on every commit to monitor performance regressions
 - boa-dev/criterion-compare-action - Compare the performance of Rust project branches
 - bobheadxi/gobenchdata - Run Go benchmarks, publish results to an interactive web app, and check for performance regressions in your pull requests
 - callstack/reassure - Performance testing companion for React and React Native
 - codingberg/benchgraph - Visualization of Golang benchmark output using Google charts
 - CodSpeed - CodSpeed provides integrated CI tools for software engineering teams to anticipate the impacts of the next delivery on system performance.
 - Cybench - Continuous performance regression testing for Java CI/CD pipelines
 - dandavison/chronologer - Visualize performance benchmarks over git commit history
 - dapplion/benchmark - JS/TS benchmarking solution to track performance regressions in CI
 - distributed-system-analysis/pbench - A benchmarking and performance analysis framework
 - Go perf - Go benchmark analysis tools
 - icebob/bench-bot - Benchmark runner robot. Continuous benchmarking for benchmarkify, the benchmark framework for NodeJS
 - jsperf/jsperf.com - jsPerf aims to provide an easy way to create and share test cases, comparing the performance of different JavaScript snippets by running benchmarks.
 - jumaffre/cimetrics - Track your metrics in GitHub PR to avoid unwanted regressions
 - knqyf263/cob - Continuous Benchmark for Go Project
 - martincostello/benchmarkdotnet-results-publisher - BenchmarkDotNet Results Publisher
 - moditect/jfrunit - A JUnit extension for asserting JDK Flight Recorder events
 - NimbleDroid - Functional Performance Testing for Android & iOS
 - novadiscovery/benchgraph - A lightweight tool for visualizing your benchmark history
 - OctoPerf - Simplify your load testing experience
 - ocurrent/current-bench - Experimental continuous benchmarking infrastructure using OCurrent pipelines
 - OpenBenchmarking.org - Storage of Phoronix Test Suite benchmark result data (including optional system logs, etc)
 - OpenEBench - OpenEBench is the ELIXIR gateway to benchmarking communities, software monitoring, and quality metrics for life sciences tools and workflows.
 - Orijtech bencher - Continuous benchmarking for the Go programming language
 - Perfbench - Perfbench is an interactive online C++ code profiling tool.
 - Performance Analysis - Diagnose & prevent performance regressions
 - python/codespeed - A fork of Codespeed that includes the instances run at https://speed.python.org/ and https://speed.pypy.org
 - tobami/codespeed - A web application to monitor and analyze the performance of your code
 - trytouca/trytouca - Continuous Regression Testing for Engineering Teams
 - Unity-Technologies/PerformanceBenchmarkReporter - Establish benchmark samples and measurements using the Performance Testing package, then use these benchmark values to compare subsequent performance test results in an html output utilizing graphical visualizations
 
- Web Specific
 - Contentsquare - Speed Analysis is not only about Synthetic Monitoring but also offers powerful Real User Monitoring capabilities, a great fit for brands leading the way on customer experience across the world.
 - Iron/Out - Faster websites equals better business results: improve user experience, increase conversion rate and page experience ranking, lower the bounce rate
 - MeasureWorks - Track user behavior with End-2-End observability to continuously optimize your online performance
 - SpeedCurve - See how people experience the speed of your website, then identify and fix performance issues
 - ubenchan/frontend - Beautiful browser benchmarks
 - WebPageTest - Run a free website speed test from around the globe using real browsers at consumer connection speeds with detailed optimization recommendations
 
- Project Specific
 - apache/arrow-datafusion - A `/benchmark` GitHub command to compare benchmarks between the base and PR commits
 - aspnet/Benchmarks - Benchmarks for ASP.NET Core (dashboard)
 - BrowserBench - Speedometer is a browser benchmark that measures the responsiveness of web applications
 - corecheck/corecheck - Test coverage and more for Bitcoin Core including continuous benchmarking
 - deno.land benchmarks - As part of Deno’s continuous integration and testing pipeline we measure the performance of certain key metrics of the runtime. You can view these benchmarks here.
 - diesel-rs/metrics - The numbers collected by diesel continuous scheduled benchmark actions to track changes over time
 - Elasticsearch Benchmarks - The results of the Elasticsearch nightly benchmarks based on the main branch as of that point in time
 - Feldera Benchmarks - Benchmarks for Feldera
 - golang/benchmarks - Benchmarks for the Go perf dashboard
 - Google Chrome V8 - CSuite: Local benchmarking for V8 performance analysis
 - lampepfl/bench-data - Continuous benchmarking data for Dotty Benchmarks
 - Lucene Nightly Benchmarks - Each night, an automated Python tool checks out the Lucene/Solr trunk source code and runs multiple benchmarks
 - martincostello/benchmarks - Repository for storing benchmark results (dashboard)
 - Myoldmopar/EnergyPlusBuildResults - Build, test, and performance results dashboard for EnergyPlus
 - OpenTelemetry Benchmarks - The OpenTelemetry Collector runs load tests on every commit to the `opentelemetry-collector-contrib` repository
 - parse-community/benchmark - Parse Server Continuous Benchmark
 - python/pyperformance - Python Performance Benchmark Suite
 - pytorch/benchmark - TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
 - PyPy Speed Center - A fork of Codespeed for PyPy
 - Python Speed Center - A fork of Codespeed for Python
 - rust-lang/rustc-perf - Website for graphing performance of rustc
 - Shopify/yjit-bench - Set of benchmarks for the YJIT CRuby JIT compiler and other Ruby implementations seen at speed.yjit.org
 - Skia Perf - Skia Perf is a web application for analyzing and viewing performance metrics produced by Skia’s testing infrastructure
 - Supabench - App by Supabase team to run benchmarks automatically in CI
 - Vitess Are We Fast Yet - Automated benchmarking system that tests Vitess’s performance on a nightly basis
 - rustls/rustls-bench-app - Continuous benchmarking for the Rustls project
 - Zen Browser Benchmarks - Performance Benchmarks for the Zen Browser
 - ziglang/gotta-go-fast - Performance Tracking for Zig
 
 
🐰 A special thank you to Continuous Benchmark! Some of their test files are used for our Benchmark Harness Adapters.
Benchmark Tracking Posts
- Microsoft
- Facebook
- Apple
- Amazon
- Netflix
- Google
- Dropbox
- Elastic
- MongoDB
 - Reducing Variability in Performance Tests on EC2: Setup and Key Results
 - Using Change Point Detection to Find Performance Regressions
 - Creating a virtuous cycle in performance testing
 - The Use of Change Point Detection to Identify Software Performance Regressions in a Continuous Integration System (video)
 - Automated system performance testing at MongoDB
 - Creating a Virtuous Cycle in Performance Testing at MongoDB (video)
 - Automated Triage of Performance Change Points Using Time Series Analysis and Machine Learning: Data Challenge Paper
 - Characterizing and Triaging Change Points
 - Beware of the Interactions of Variability Layers When Reasoning about Evolution of MongoDB
 - Change Point Detection for MongoDB Time Series Performance Regression
 - Performance Testing at MongoDB
 
- Academic
 - Producing Wrong Data Without Doing Anything Obviously Wrong!
 - Locating Performance Regression Root Causes in the Field Operations of Web-Based Systems: An Experience Report
 - Stabilizer: Statistically Sound Performance Evaluation
 - A Nonparametric Approach for Multiple Change Point Analysis of Multivariate Data
 - Automated Detection of Performance Regressions Using Regression Models on Clustered Performance Counters
 - Robust benchmarking in noisy environments
 - Virtual Machine Warmup Blows Hot and Cold
 - BenchHub: store database benchmark result in database
 - Continuous Benchmarking: Using System Benchmarking in Build Pipelines
 - Towards Continuous Benchmarking: An Automated Performance Evaluation Framework for High Performance Software
 - Duet Benchmarking: Improving Measurement Accuracy in the Cloud
 - Search-based detection of code changes introducing performance regression
 - Hunter: Using Change Point Detection to Hunt for Performance Regressions
 - ElastiBench: Scalable Continuous Benchmarking on Cloud FaaS Platforms
 - Increasing Efficiency and Result Reliability of Continuous Benchmarking for FaaS Applications
 
- Others
 - Accurate and efficient software microbenchmarks
 - Are Benchmarks From Cloud CI Services Reliable?
 - Are your memory-bound benchmarking timings normally distributed?
 - Automated performance regression testing with Reassure
 - automatically prevent performance regressions
 - Automating Speed: A Proven Approach to Preventing Performance Regressions in Kafka Streams
 - Autonomously Finding Performance Regressions In The Linux Kernel
 - Benchmarking C++ Code at CppCon 2015
 - Benefits of a benchmarking step in your CI/CD pipeline
 - Building an Open Source, Continuous Benchmark System
 - CI for performance: Reliable benchmarking in noisy environments
 - Compare and optimize your code with Datadog Profile Comparison
 - Continuous Benchmarking for OCaml Projects
 - Continuous benchmarking for rustls
 - Continuous benchmarking with Go and GitHub Actions
 - Continuous Benchmarks on a Budget
 - Continuous Performance Regression Testing for CI/CD
 - Created GitHub Action for continuous benchmarking
 - Demanding the impossible rigorous database benchmarking
 - Exploring the Rust compiler benchmark suite
 - Get a performance score for your app
 - Hardware performance counter support (via `rdpmc`)
 - Is GitHub Actions suitable for running benchmarks?
 - Lighthouse for mobile apps
 - Measuring and Improving React Native Performance
 - Microbenchmarking calls for idealized conditions
 - Minimum Times Tend to Mislead When Benchmarking
 - Paired benchmarking. How to measure performance
 - Performance engineering requires stable benchmarks
 - Performance in Continuous Integration
 - Performance Regression Testing
 - Performance Benchmarking as Part of your CI/CD Pipeline
 - Performance testing in CI: Let’s break the build!
 - Performance Testing in the CI/CD Pipeline
 - Performance-Regression Pitfalls Every Project Should Avoid
 - Regression Testing of Performance - SmartBear TestComplete
 - Rust Performance Testing on Travis CI
 - Storing Continuous Benchmarking Data in Prometheus
 - The mean misleads: why the minimum is the true measure of a function’s run time
 - Towards Continuous Performance Regression Testing
 
 
Benchmark Comparisons
 - BurntSushi/rebar - A biased barometer for gauging the relative speed of some regex engines on a curated set of tasks.
 - CH-benCHmark - Operational and real-time Business Intelligence (BI) mixed workload SQL benchmarks
 - ClickBench - A Benchmark For Analytical DBMS
 - denosaurs/bench - Comparing deno & node HTTP frameworks
 - diesel-rs/diesel_bench - A benchmark suite for relational database connection crates in Rust
 - HewlettPackard/netperf - Netperf is a benchmark that can be used to measure the performance of many different types of networking. It provides tests for both unidirectional throughput, and end-to-end latency
 - krausest/js-framework-benchmark - A comparison of the performance of a few popular javascript frameworks
 - parttimenerd/temci - An advanced benchmarking tool
 - PerfKit Benchmarker (PKB) - A set of benchmarks to measure and compare cloud offerings. The benchmarks use default settings to reflect what most users will see.
 - Programming Language Benchmarks - Yet another implementation of computer language benchmarks game
 - rosetta-rs/argparse-rosetta-rs - Comparison of Rust argparse APIs
 - rosetta-rs/hashing-rosetta-rs - Comparison of Rust hashing vs direct string comparison
 - rosetta-rs/md-rosetta-rs - Comparison of Rust Markdown APIs
 - rosetta-rs/parse-rosetta-rs - Comparison of Rust parser APIs
 - rosetta-rs/string-rosetta-rs - Comparison of Rust string types
 - rosetta-rs/template-benchmarks-rs - Comparison of Rust template engines
 - Standard Performance Evaluation Corporation (SPEC) - A non-profit consortium that establishes and maintains standardized benchmarks and performance evaluation tools for new generations of computing systems
 - TechEmpower/FrameworkBenchmarks - In the following tests, we have measured the performance of several web application platforms, full-stack frameworks, and micro-frameworks (collectively, “frameworks”).
 - timescale/tsbs - Time Series Benchmark Suite, a tool for comparing and evaluating databases for time series data
 - The Computer Language Benchmarks Game - Which programming language is fastest?
 - TPC - Benchmarking and load testing for the world's most popular databases supporting Oracle Database, Microsoft SQL Server, IBM Db2, PostgreSQL, MySQL and MariaDB
 - Top500 - Ranks and details the 500 most powerful non-distributed computer systems in the world
 
Benchmark Harnesses
- Android
 - Macrobenchmark - Writing a Macrobenchmark
 - Microbenchmark - Microbenchmark Overview
 
- C
 - akopytov/sysbench - Scriptable database and system performance benchmark
 - gormanm/mmtests - Benchmarking framework primarily aimed at Linux kernel testing
 - Linux kernel perf bench - This command includes a number of multi-threaded microbenchmarks to exercise different subsystems in the Linux kernel and system calls.
 - LinuxPerfStudy/LEBench - An analysis of performance evolution of Linux’s core operations
 - Phoronix Test Suite - Open-Source, Automated Benchmarking
 - RRZE-HPC/likwid - Performance monitoring and benchmarking suite
 
- C++
 - catchorg/Catch2 - A modern, C++-native, test framework for unit-tests, TDD, and BDD - using C++14, C++17, and later
 - DigitalInBlue/Celero - C++ Benchmark Authoring Library/Framework
 - facebook/folly/Benchmark.h - Provides a simple framework for writing and executing benchmarks
 - google/benchmark - A microbenchmark support library for C++
 - iboB/picobench - A micro microbenchmarking library for C++11 in a single header file
 - ivafanas/sltbench - C++ benchmark tool. Practical, stable and fast performance testing framework.
 - libnonius/nonius - A C++ micro-benchmarking framework
 
- C#
 - dotnet/BenchmarkDotNet - Powerful .NET library for benchmarking
 - Unity-Technologies/PerformanceBenchmarkReporter - Establish benchmark samples and measurements using the Performance Testing package, then use these benchmark values to compare subsequent performance test results in an html output utilizing graphical visualizations
 - xunit/xunit - xUnit.net is a free, open source, community-focused unit testing tool for .NET
 
- Elixir
 - alco/benchfella - Microbenchmarking tool for Elixir
 - bencheeorg/benchee - Easy and extensible benchmarking in Elixir providing you with lots of statistics!
 
- Functions as a Service (FaaS)
 - vhive-serverless/STeLLAR - STeLLAR: Open-source framework for serverless clouds benchmarking
 
- Go
 - Benchmarks - Go test benchmarks
 
- Haskell
 - haskell/criterion - A powerful but simple library for measuring the performance of Haskell code
 
- Java
 - moditect/jfrunit - A JUnit extension for asserting JDK Flight Recorder events
 - openjdk/jmh - JMH is a Java harness for building, running, and analyzing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM
 
- JavaScript
 - bestiejs/benchmark.js - A benchmarking library. As used on jsPerf.com
 - callstack/reassure - Performance testing companion for React and React Native
 - console.time/console.timeEnd - A method to start/stop a timer you can use to track how long an operation takes
 - deno bench - Deno has a built-in benchmark runner that you can use for checking performance of JavaScript or TypeScript code.
 - evanwashere/mitata - cross-runtime benchmarking lib and cli
 - Node.js Performance Measurement API - This module provides an implementation of a subset of the W3C Web Performance APIs as well as additional APIs for Node.js-specific performance measurements.
 - RafaelGSS/bench-node - The `bench-node` module gives the ability to measure performance of JavaScript code.
 - ShogunPanda/cronometro - Simple benchmarking suite powered by HDR histograms.
 - tinylibs/tinybench - A simple, tiny and lightweight benchmarking library!
 - v8/web-tooling-benchmark - JavaScript benchmark for common web developer workloads
 - vitest bench - `vitest bench` uses the tinybench library under the hood
 - yamiteru/isitfast - A modular benchmarking library with V8 warmup and cpu/ram denoising for the most accurate and consistent results.
 
- Julia
 - JuliaCI/BenchmarkTools.jl - A benchmarking framework for the Julia language
 
- Python
 - airspeed-velocity/asv - Airspeed Velocity: A simple Python benchmarking tool with web-based reporting
 - ionelmc/pytest-benchmark - py.test fixture for benchmarking code
 - timeit - This module provides a simple way to time small bits of Python code
 
- Ruby
 - ruby/benchmark - Methods for benchmarking Ruby code, giving detailed reports on the time taken for each task
 
- Rust
 - bazhenov/tango - Rust pairwise microbenchmarking harness
 - bheisler/criterion.rs - Statistics-driven benchmarking library for Rust (see the sketch after this list)
 - bheisler/iai - Experimental one-shot benchmarking/profiling harness for Rust
 - bluss/bencher - bencher is just a port of the libtest (unstable) benchmark runner to Rust stable releases. `cargo bench` on stable. "Not a better bencher!" = No feature development. Go build a better stable benchmarking library.
 - BurntSushi/cargo-benchcmp - A small utility to compare Rust micro-benchmarks
 - iai-callgrind/iai-callgrind - High-precision and consistent benchmarking framework/harness for Rust
 - jbreitbart/criterion-perf-events - A plugin for Criterion.rs to measure Linux perf events
 - libtest bench - The libtest harness supports running benchmarks for functions annotated with the `#[bench]` attribute. Benchmarks are currently unstable and only available on the nightly channel.
 - ThijsRay/coppers - Coppers is a custom test harness for Rust that measures the energy usage of your test suite.
 - nvzqz/divan - Comfy benchmarking for Rust projects
 - sarah-ek/diol - diol is a benchmarking library for Rust.
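
As a point of reference for the list above, here is a minimal criterion.rs sketch showing the general shape of a Rust benchmark; the `fibonacci` function is only a hypothetical workload for illustration:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical workload to benchmark.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn bench_fibonacci(c: &mut Criterion) {
    // black_box keeps the compiler from constant-folding the input away.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
```

Run it with `cargo bench` after registering the file under a `[[bench]]` entry with `harness = false` in `Cargo.toml`; libtest's nightly-only `#[bench]` attribute mentioned below covers similar ground with a smaller, unstable API.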
 
- Shell
 - Gabriella439/bench - Command-line benchmark tool
 - sharkdp/hyperfine - A command-line benchmarking tool
 
- SQL
 - ClickHouse/ClickBench - ClickBench: a Benchmark For Analytical Databases
 - TPC-Council/HammerDB - HammerDB Database Load Testing and Benchmarking Tool
 
- Swift
 - apple/swift - The Swift Benchmark Suite
 - google/swift-benchmark - A swift library to benchmark code snippets