Compare benchmarks #114
Conversation
I am surprised that some of the benchmarks show a decrease in performance - am I reading this correctly? The reason I am surprised is that previous benchmarks did not show this.

[Benchmark plots: "Without Specifying Ranges" and "Specifying Ranges"]
Now that I have looked a bit more closely, I can see we weren't testing e.g.
Very good tool, which prompted my question, but that is no reason not to merge.
The idea here is the following: in order to determine the performance impact of any change, the benchmark results need to be compared to some reference result. `bench.sh` just runs the benchmarks and puts the results in a CSV file named after the current branch and commit. `compare.py` compares results with a reference file and outputs any significant differences.

The workflow would be to run `bench.sh` on `interface-to-performance` once, then use the resulting CSV file as a reference to compare the benchmark results on feature branches against.

Example output of `compare.py`:
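A minimal sketch of what such a comparison script could look like; this is a hypothetical reconstruction, not the actual `compare.py` from this PR. It assumes each results CSV has two columns (benchmark name, mean time in seconds) and uses an assumed 5% significance cutoff:

```python
import csv
import sys

# Assumed cutoff: report changes larger than 5% in either direction.
THRESHOLD = 0.05

def load(path):
    """Read a results CSV into {benchmark_name: mean_time_seconds}."""
    with open(path, newline="") as f:
        return {name: float(time) for name, time in csv.reader(f)}

def compare(reference_csv, current_csv):
    ref = load(reference_csv)
    cur = load(current_csv)
    # Only benchmarks present in both files can be compared.
    for name in sorted(ref.keys() & cur.keys()):
        change = (cur[name] - ref[name]) / ref[name]
        if abs(change) > THRESHOLD:
            direction = "slower" if change > 0 else "faster"
            print(f"{name}: {abs(change):.1%} {direction}")

if __name__ == "__main__":
    compare(sys.argv[1], sys.argv[2])
```

Such a script could be invoked as, e.g., `python compare.py interface-to-performance-abc123.csv my-branch-def456.csv`, where the file names are hypothetical and follow the branch-and-commit naming described above.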