1. Philipp Leitner; Cor-Paul Bezemer
An Exploratory Study of the State of Practice of Performance Testing in Java-based Open Source Projects (Inproceedings)
The International Conference on Performance Engineering (ICPE), pp. 373–384, ACM/SPEC, 2017.
Tags: Empirical software engineering, Mining software repositories, Open source, Performance engineering, Performance testing
@inproceedings{leitner16oss,
title = {An Exploratory Study of the State of Practice of Performance Testing in Java-based Open Source Projects},
author = {Philipp Leitner and Cor-Paul Bezemer},
year = {2017},
date = {2017-04-22},
urldate = {2017-04-22},
booktitle = {The International Conference on Performance Engineering (ICPE)},
pages = {373--384},
publisher = {ACM/SPEC},
abstract = {The usage of open source (OS) software is nowadays widespread across many industries and domains. While the functional quality of OS projects is considered to be up to par with that of closed-source software, much is unknown about the quality in terms of non-functional attributes, such as
performance. One challenge for OS developers is that, unlike for functional testing, there is a lack of accepted best practices for performance testing.
To reveal the state of practice of performance testing in OS projects, we conduct an exploratory study on 111 Java-based OS projects from GitHub. We study the performance tests of these projects from five perspectives: (1) the developers, (2) size, (3) organization and (4) types of performance tests
and (5) the tooling used for performance testing.
First, in a quantitative study we show that writing performance tests is not a popular task in OS projects: performance tests form only a small portion of the test suite, are rarely updated, and are usually maintained by a small group of core project developers. Second, we show through a qualitative study that even though many projects are aware that they need performance tests, developers appear to struggle implementing them. We argue that future performance testing frameworks should provide better support for low-friction testing, for instance via non-parameterized methods
or performance test generation, as well as focus on a tight integration with standard continuous integration tooling.},
keywords = {Empirical software engineering, Mining software repositories, Open source, Performance engineering, Performance testing},
pubstate = {published},
tppubtype = {inproceedings}
}
The usage of open source (OS) software is nowadays widespread across many industries and domains. While the functional quality of OS projects is considered to be up to par with that of closed-source software, much is unknown about the quality in terms of non-functional attributes, such as performance. One challenge for OS developers is that, unlike for functional testing, there is a lack of accepted best practices for performance testing.

To reveal the state of practice of performance testing in OS projects, we conduct an exploratory study on 111 Java-based OS projects from GitHub. We study the performance tests of these projects from five perspectives: (1) the developers, (2) size, (3) organization, (4) types of performance tests, and (5) the tooling used for performance testing.

First, in a quantitative study we show that writing performance tests is not a popular task in OS projects: performance tests form only a small portion of the test suite, are rarely updated, and are usually maintained by a small group of core project developers. Second, we show through a qualitative study that even though many projects are aware that they need performance tests, developers appear to struggle implementing them. We argue that future performance testing frameworks should provide better support for low-friction testing, for instance via non-parameterized methods or performance test generation, as well as focus on a tight integration with standard continuous integration tooling.
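As context for the kind of annotation-driven, low-friction performance test the abstract discusses, below is a minimal sketch of a Java microbenchmark using JMH, a widely used harness in the Java ecosystem. The class, method, and workload here are illustrative assumptions, not an example taken from the paper:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

/**
 * A minimal, hypothetical JMH microbenchmark. JMH takes care of JVM
 * warmup, forking, and statistical aggregation, which removes much of
 * the measurement boilerplate that makes hand-rolled performance
 * tests error-prone.
 */
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringConcatBenchmark {

    private String[] parts;

    // Runs once per trial; keeps workload setup out of the measured code.
    @Setup
    public void setUp() {
        parts = new String[] {"performance", "testing", "in", "open", "source"};
    }

    // Measures average time per invocation; returning the result lets
    // JMH consume it, preventing dead-code elimination by the JIT.
    @Benchmark
    public String concatWithBuilder() {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}
```

The annotation-driven style is the point of the sketch: a developer marks a plain method with @Benchmark rather than writing a measurement harness, which is one form of the low-friction test specification and continuous-integration-friendly tooling the abstract argues future frameworks should support.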