An opinionated Apache Benchmark (`ab`) runner and result plotter.
Running `ab` once can give unreliable results. Maybe your server is doing other work, or the machine running the test is busy. To circumvent that issue, this script can take multiple measurements with a wait time in between.
Requirements:
- A unix terminal (tested on macOS)
- Apache Benchmark (ab)
- Gnuplot
- NodeJS (>= 14.6)
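A quick way to verify the prerequisites are available is to ask each tool for its version (a sketch; the exact output depends on your install):

```sh
# Check that the required tools are on the PATH
ab -V               # Apache Benchmark version banner
gnuplot --version   # e.g. "gnuplot 5.4 patchlevel ..."
node --version      # should report v14.6 or newer
```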
To install:
- Clone this repository.
- Run `npm install`.

To see all available options, run:

    ./abrunner.js measure --help
    ./abrunner.js compare --help
A typical run can be started like this:

    ./abrunner.js measure -u https://localhost.test/ -o results/foo

This uses the default settings: it will run 10 measurements of 500 requests at a concurrency of 10, waiting 5 minutes between each measurement. The results will be stored in `./results/foo`.
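Under the hood, each measurement boils down to an `ab` run with those defaults. As a rough sketch (the exact flags abrunner passes and the data file name may differ), a single iteration is roughly equivalent to:

```sh
# Roughly one iteration with the default settings:
# 500 requests (-n) at a concurrency of 10 (-c), writing gnuplot-readable
# timing data (-g) similar to abrunner's iteration*.dat files.
ab -n 500 -c 10 -g iteration1.dat https://localhost.test/
```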
For more advanced options, read the advanced measure docs.
Running this command will create a number of outputs:

- `iteration*.dat` files contain the raw `ab` measurements
- `iteration*.out` files contain the `ab` output (that is normally printed in the terminal)
- `combined.dat` contains all combined measurements
- `combined.stats` contains some statistics collected from the combined measurements
- `measure.png` contains a plot with which you can visually inspect the response times of the individual runs and everything combined
- `measure.p` is the Gnuplot script used to create the above plot
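You can inspect or re-render these by hand, for example (assuming `measure.p` references the data files relative to the output directory):

```sh
cd results/foo
cat combined.stats   # summary statistics across all iterations
gnuplot measure.p    # regenerates measure.png from the .dat files
```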
Compare any number of measurements you took before. The result will be a combined boxplot.

For each measurement you want to incorporate into the comparison, provide the `combined.dat` data file (or another `ab` gnuplot output file) and an appropriate label.

    ./abrunner.js compare -i results/foo/combined.dat results/bar/combined.dat -l "Foo" "Bar" -o results/comparison
Running this command will create a number of outputs:

- `run*.dat` files are a copy of the input files
- `run*.stats` files contain some statistics collected from the corresponding input file
- `compare.png` contains a plot comparing all input files
- `compare.p` is the Gnuplot script used to create the above plot
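The same flags extend to any number of inputs. For example, with three hypothetical earlier measurements:

```sh
./abrunner.js compare \
  -i results/baseline/combined.dat results/cache/combined.dat results/http2/combined.dat \
  -l "Baseline" "With cache" "HTTP/2" \
  -o results/comparison-three
```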
The comparison plot will look something like this:
When running a new measurement, the `iteration*.out` files capture the `ab` output. Sometimes this contains an error message that helps you pinpoint the problem.
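A quick way to scan all iterations at once, assuming the usual `ab` output fields (`Failed requests`, `Non-2xx responses`):

```sh
# Show the failed / non-2xx request counts for every iteration
grep -E "Failed requests|Non-2xx" results/foo/iteration*.out
```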
If you run `ab` against a bare domain, make sure the URL ends with a `/`. If the URL includes a path, this should not be a problem.
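For example (the exact error wording varies by `ab` version, so treat this as a sketch):

```sh
ab -n 1 -c 1 https://localhost.test     # rejected: ab treats a URL without a path as invalid
ab -n 1 -c 1 https://localhost.test/    # accepted: the trailing slash supplies the path
```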
The `output.log` file always contains a list of the command input and all commands that were run. If the input arguments are somehow not parsed correctly, you should be able to spot that here.
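For example, assuming the log ends up next to the other output files:

```sh
less results/foo/output.log   # parsed input arguments plus every command that was executed
```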