Diego Costa; Cor-Paul Bezemer; Philipp Leitner; Artur Andrzejak
What's Wrong With My Benchmark Results? Studying Bad Practices in JMH Benchmarks (Journal Article)
IEEE Transactions on Software Engineering (TSE), 2019.
@article{diego_tse,
title = {What's Wrong With My Benchmark Results? Studying Bad Practices in JMH Benchmarks},
author = {Diego Costa and Cor-Paul Bezemer and Philipp Leitner and Artur Andrzejak},
year = {2019},
date = {2019-06-17},
urldate = {2019-06-17},
journal = {IEEE Transactions on Software Engineering (TSE)},
publisher = {IEEE},
abstract = {Microbenchmarking frameworks, such as Java’s Microbenchmark Harness (JMH), allow developers to write fine-grained performance test suites at the method or statement level. However, due to the complexities of the Java Virtual Machine, developers often struggle with writing expressive JMH benchmarks which accurately represent the performance of such methods or statements. In this paper, we empirically study bad practices of JMH benchmarks. We present a tool that leverages static analysis to identify 5 bad JMH practices. Our empirical study of 123 open source Java-based systems shows that each of these 5 bad practices are prevalent in open source software. Further, we conduct several experiments to quantify the impact of each bad practice in multiple case studies, and find that bad practices often significantly impact the benchmark results. To validate our experimental results, we constructed seven patches that fix the identified bad practices for six of the studied open source projects, of which six were merged into the main branch of the project. In this paper, we show that developers struggle with accurate Java microbenchmarking, and provide several recommendations to developers of microbenchmarking frameworks on how to improve future versions of their framework.},
keywords = {Bad practices, JMH, Microbenchmarking, Performance testing, Static analysis},
pubstate = {published},
tppubtype = {article}
}