Replication package of the work “Towards effective assessment of steady state performance in Java software: Are we there yet?”.
## Requirements
- Python 3.6
- R 4.0
Use the following commands to install the Python dependencies:

```
pip install --upgrade pip
pip install -r requirements.txt
```
The complete list of benchmarks considered in our empirical study is available in `data/subjects.csv`.
Each row reports, for a particular benchmark: the benchmark signature (i.e., the name of the JMH benchmark method), the Java system, the parameterization used in our experiments, and the JMH configuration defined by the software developers.
In particular, each JMH configuration consists of: warmup iteration time (`w`), measurement iteration time (`r`), warmup iterations (`wi`), measurement iterations (`i`), and forks (`f`).
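For example, the rows can be inspected with a few lines of Python (a minimal sketch; the exact column names in `subjects.csv` are not assumed here, each row is printed as-is):

```python
import csv

# Print each benchmark entry (signature, system, parameterization, JMH configuration).
with open("data/subjects.csv") as f:
    for row in csv.DictReader(f):
        print(row)
```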
We keep the performance measurements used in our empirical study in a separate repository.
In order to replicate our results, the dataset (i.e., `jmh.tar.gz`) must be downloaded from the repo and unpacked into the `data` folder as follows:

```
tar -zxvf path/jmh.tar.gz -C data
```
Each json file in the `data/jmh` folder reports the JMH results of an individual benchmark.
The json filename denotes the considered benchmark using the following format:

```
system#benchmark#parameterization.json
```

where `system` denotes the Java system name, `benchmark` denotes the benchmark method signature, and `parameterization` specifies the benchmark parameter values.
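Given this naming scheme, a result file can be mapped back to its benchmark with a small helper (a minimal sketch; the example filename below is purely hypothetical):

```python
from pathlib import Path

def parse_result_name(path):
    # Split the `system#benchmark#parameterization.json` filename into its parts.
    system, benchmark, parameterization = Path(path).stem.split("#")
    return system, benchmark, parameterization

# Hypothetical example filename, for illustration only.
print(parse_result_name("data/jmh/system#com.example.MyBench.run#params.json"))
```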
Steady state analysis can be performed through the following command:

```
python steadystate.py
```
Results of the steady state analysis can be found in the `data/classification` folder.
Each json file in the folder reports whether (and when) a given benchmark reaches a steady state of performance.
Specifically, the file reports the classification of each benchmark (steady state, no steady state, or inconsistent) and of each fork (steady state or no steady state), and the JMH iteration in which the steady state is reached (-1 indicates no steady state).
The set of changepoints found by the PELT algorithm is reported in the `data/changepoints` folder.
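The classification files can be loaded and inspected as follows (a minimal sketch; the json schema is not assumed here, each file is printed as-is so you can see the actual keys):

```python
import json
from pathlib import Path

# Print the steady state classification of every analyzed benchmark.
for path in sorted(Path("data/classification").glob("*.json")):
    with open(path) as f:
        classification = json.load(f)
    print(path.stem, classification)
```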
The following command maps the JMH configurations defined by software developers to our performance data.
In particular, it identifies, for each fork, the last warmup iteration and the last measurement iteration.
Results are stored in the `data/devconfig` folder, and are later used to derive the estimated warmup time (*wt*) and the set of performance measurements considered by software developers (*M<sub>conf</sub>*).

```
python create_devconfigs.py
```
In order to run the dynamic reconfiguration techniques, the replication package of Laaber et al. must first be downloaded and configured (following the instructions in its `README.md` file). Then, the environment variable `$REPLICATIONPACKAGE_PATH` must be set to the path of the replication package folder.
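For example (the path below is a placeholder for wherever you placed the replication package):

```
export REPLICATIONPACKAGE_PATH=/path/to/replication-package
```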
The following command performs JMH reconfiguration:

```
bash create_dynconfigs.sh
```
Similarly to the developer configurations, the script identifies, for each fork, the last warmup iteration and the last measurement iteration.
Results are stored in the `data/dynconfig` folder, and are later used to derive the estimated warmup time (*wt*) and the set of performance measurements identified by dynamic reconfiguration techniques (*M<sub>conf</sub>*).
The following command computes, for each fork and each developer/dynamic configuration, the information needed to compute (i) the Warmup Estimation Error, (ii) the time waste, and (iii) the Absolute Relative Performance Change.
Results are stored in the `data/cfg_assessment.csv` file.

```
python configuration_analysis.py
```
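The resulting CSV can be inspected with pandas (a minimal sketch; the column names are not assumed here, they are printed so you can see what the script stores):

```python
import pandas as pd

# Load the per-fork configuration assessment and show its structure.
df = pd.read_csv("data/cfg_assessment.csv")
print(df.columns.tolist())
print(df.head())
```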
The following command generates Figures 4a and 4b, i.e., the fork and benchmark classifications:

```
python analysis/rq1.py
```
The following command generates Figure 5, i.e., the percentages of no steady state forks within each benchmark:

```
python analysis/rq1_within_benchmark.py
```
(In order to run both `rq1.py` and `rq1_within_benchmark.py`, you should first run `steadystate.py`.)
The following command generates tables and plots for the steady state impact analysis (e.g., Figures 12, 13, 14 and 15):

```
python analysis/rq2.py
```
(Note that you should first run `steadystate_impact_analysis.py` and `fork_iteration_impact_analysis.py`.)
The following command generates plots for the developer configuration assessment and the dynamic reconfiguration assessment:

```
python analysis/rq3_rq4.py
```
(Note that you should first run `steadystate.py`, `create_devconfigs.py`, `create_dynconfigs.sh` and `configuration_analysis.py`.)
The following command generates Figures 24, 25 and 26 (i.e., summaries of the WEE/*wt*/RPD comparisons), and Tables 4, 5 and 6 (i.e., detailed results of the WEE/*wt*/RPD comparisons):

```
python analysis/rq5.py
```
(Note that you should first run `steadystate.py`, `create_devconfigs.py`, `create_dynconfigs.sh` and `configuration_analysis.py`.)