
Add custom CSV and Markdown reporters for benchmark results #109

Merged
psalz merged 6 commits into master from benchmark-csv-reporter on Apr 21, 2022

Conversation

@psalz psalz (Member) commented Mar 24, 2022

This adds a custom CSV reporter for producing machine-readable benchmark results. You can see example output in the updated results for gpuc1 (I noticed that some of the numbers diverge quite wildly from the previous results, and I'm not sure how reliable those are - cc @PeterTh).

To use the reporter, simply call the benchmarks with --reporter celerity-benchmark-csv.
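
For illustration, a custom Catch2 reporter is a class derived from one of Catch2's reporter base classes that overrides the event hooks it cares about (here, benchmarkEnded) and is registered under a name that can then be passed to --reporter. The following is only a rough sketch of that mechanism, assuming the Catch2 v2 single-header API; the class name, reporter name, and CSV columns are illustrative and do not reflect the actual implementation in this PR.

```cpp
// Minimal sketch of a custom Catch2 benchmark reporter, assuming the Catch2 v2
// single-header API (v3 splits these interfaces into separate headers and changes
// some names). The class name, reporter name, and CSV columns are illustrative
// only and do not reflect the actual implementation in this PR.
#define CATCH_CONFIG_ENABLE_BENCHMARKING  // benchmark hooks are gated behind this in v2
#define CATCH_CONFIG_EXTERNAL_INTERFACES  // exposes the reporter base classes and registrar
#include <catch2/catch.hpp>

#include <string>

class csv_benchmark_reporter : public Catch::StreamingReporterBase<csv_benchmark_reporter> {
  public:
    using StreamingReporterBase::StreamingReporterBase;

    static std::string getDescription() { return "Writes benchmark results as CSV"; }

    // Required pure virtuals; individual assertions are not reported here.
    void assertionStarting(const Catch::AssertionInfo&) override {}
    bool assertionEnded(const Catch::AssertionStats&) override { return true; }

    void testRunStarting(const Catch::TestRunInfo& info) override {
        StreamingReporterBase::testRunStarting(info);
        stream << "benchmark,mean,std dev\n";  // header row; durations are floating-point nanoseconds
    }

    void benchmarkEnded(const Catch::BenchmarkStats<>& stats) override {
        // A real reporter would also quote/escape the name in case it contains commas.
        stream << stats.info.name << "," << stats.mean.point.count() << ","
               << stats.standardDeviation.point.count() << "\n";
    }
};

// Makes the reporter selectable on the command line, e.g. --reporter benchmark-csv-sketch
CATCH_REGISTER_REPORTER("benchmark-csv-sketch", csv_benchmark_reporter)
```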

@psalz psalz requested review from PeterTh and fknorr March 25, 2022 09:35
@psalz psalz self-assigned this Mar 25, 2022
@fknorr fknorr (Contributor) left a comment

Since the output is for machine use anyway, IMO we should just export the individual samples and leave all the statistics to the postprocessing script (e.g. scipy + matplotlib). That would be strictly more general.

Two review threads on test/benchmark_csv_reporter.cc (outdated, resolved)
@psalz psalz force-pushed the benchmark-csv-reporter branch from 15c323d to 4d66f96 on March 29, 2022 14:03
@psalz psalz changed the title from "Add custom CSV reporter for benchmark results" to "Add custom CSV and Markdown reporters for benchmark results" on Mar 29, 2022
@psalz psalz (Member, Author) commented Mar 29, 2022

I've changed the CSV reporter to also include the raw samples. I've also introduced a new Markdown reporter that can be used to produce human-readable results alongside the CSV. Simply run with ./test/benchmarks --reporter celerity-benchmark-csv::results.csv --reporter celerity-benchmark-md::results.md to generate both simultaneously.
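
(For illustration: Catch2 hands the reporter the individual measurements alongside the computed estimates, so emitting raw samples is a matter of iterating over them. Extending the illustrative sketch above, the relevant override might look like the following; again this assumes Catch2 v2's BenchmarkStats<> and an illustrative column layout, not the exact format this PR produces.)

```cpp
// Sketch of emitting the raw samples, extending the illustrative reporter above
// (assuming Catch2 v2's BenchmarkStats<>; the column layout is not the exact
// format produced by this PR).
void benchmarkEnded(const Catch::BenchmarkStats<>& stats) override {
    stream << stats.info.name << "," << stats.mean.point.count();
    // stats.samples holds the individual measured durations of each benchmark run
    for(const auto& sample : stats.samples) {
        stream << "," << sample.count();
    }
    stream << "\n";
}
```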

@fknorr fknorr (Contributor) commented Mar 30, 2022

Thanks! Most of this looks great; however, for the Markdown output we technically need to escape all the special characters so test/benchmark names aren't interpreted as formatting. For example, the CDAG benchmarks have an underscore, as in wave_sim topology.

This is less about nitpicking and more about the question of whether we actually need proper Markdown output, or whether a .txt file with an ASCII table (which looks essentially the same source-wise) would be enough. Viewing the Markdown file "raw" is much easier on my eyes, since the numbers are properly aligned and the spacing is not as wasteful as on GitHub's renderer.

Edit: I just realized GitHub has a (searchable) CSV renderer, which is kind of neat, but it unfortunately doesn't right-align numbers.

@PeterTh PeterTh (Contributor) commented Apr 1, 2022

It looks like adding, removing, and checking dependencies got much slower compared to the currently checked-in results for the low-dep-count tests. It's still very fast, but I don't really have an explanation for that.

@psalz psalz (Member, Author) commented Apr 5, 2022

> This is less about nitpicking and more about the question of whether we actually need proper Markdown output, or whether a .txt file with an ASCII table (which looks essentially the same source-wise) would be enough. Viewing the Markdown file "raw" is much easier on my eyes, since the numbers are properly aligned and the spacing is not as wasteful as on GitHub's renderer.

As discussed during yesterday's meeting, there is value in GitHub's Markdown diff, so we'll keep it. I've added rudimentary escaping.
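
(For illustration, such escaping can be a small helper that backslash-escapes Markdown metacharacters before names are written into table cells. The helper name and the exact character set below are assumptions, not taken from this PR.)

```cpp
#include <string>

// Hypothetical helper (name and escaped character set are assumptions): backslash-
// escape characters that Markdown would otherwise interpret as formatting, so
// benchmark names like "wave_sim topology" render literally inside table cells.
std::string escape_markdown(const std::string& name) {
    std::string escaped;
    escaped.reserve(name.size());
    for(const char c : name) {
        // '|' would end a table cell, '_' and '*' toggle emphasis, '`' starts a code span
        if(c == '\\' || c == '|' || c == '_' || c == '*' || c == '`') { escaped.push_back('\\'); }
        escaped.push_back(c);
    }
    return escaped;
}
```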

@fknorr fknorr (Contributor) left a comment

The markdown report in ci/perf/gpuc1_bench.md has not been updated since escaping was added. Otherwise LGTM!

psalz added 6 commits on April 21, 2022 10:09:

- Include raw samples in output
- Don't print part number for distinguishing generator values; instead, detect uses of GENERATE and print a warning.
- This generates Markdown benchmark reports intended for human consumption. Run benchmarks with
      --reporter celerity-benchmark-csv::results.csv \
      --reporter celerity-benchmark-md::results.md
  to generate both CSV and Markdown reports simultaneously.
@psalz psalz force-pushed the benchmark-csv-reporter branch from 55a176a to 7c5e82a on April 21, 2022 08:10
@psalz psalz (Member, Author) commented Apr 21, 2022

> The markdown report in ci/perf/gpuc1_bench.md has not been updated since escaping was added. Otherwise LGTM!

I've manually added the escaping to the relevant benchmark names; rebased onto master.

@psalz psalz merged commit ba3af8b into master Apr 21, 2022
@psalz psalz deleted the benchmark-csv-reporter branch April 21, 2022 12:29