
Microbenchmarks - work in progress #62

Closed
wants to merge 1 commit into from

Conversation

PeterTh (Contributor) commented Dec 2, 2021

Work in progress open for discussion.

I have currently vendored the args library directly into the benchmarks folder, which is probably not what we want. However, I expect these benchmarks to potentially take a lot of arguments, and I don't want to parse them ad hoc. I also noticed that we already do exactly that in several of the examples. Re-using a library across both the examples and the benchmarks might be a bit tricky, though, since both are designed to be buildable either independently or as part of the overall tree. I'm open to suggestions on how best to do this.

psalz (Member) commented Dec 3, 2021

Catch2 ships with Clara, a small command line parsing library. Although it seems the standalone project has been discontinued, it is still alive and well as part of the Catch2 distribution. Could we use that instead of args.hxx?

@PeterTh force-pushed the benchmarks branch 2 times, most recently from 3d57601 to a23b776 on December 9, 2021
PeterTh (Contributor, Author) commented Dec 9, 2021

After some offline discussion and trying out the different options, I have now decided to include args as a submodule, similar to spdlog and catch2. I think this is unproblematic, since it's a very small repo, comes with its own CMake file, and is header-only. I disabled the options that build its tests and example when it is included in the Celerity context.
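For reference, the submodule inclusion described above might look roughly like the following CMake sketch. The option names `ARGS_BUILD_EXAMPLE` and `ARGS_BUILD_UNITTESTS` come from the upstream Taywee/args project, but the submodule path and the `benchmarks` target name here are illustrative assumptions, not taken from this PR:

```cmake
# Pull in the vendored args submodule, but skip its example binary and
# unit tests when building as part of the Celerity tree.
set(ARGS_BUILD_EXAMPLE OFF CACHE BOOL "" FORCE)
set(ARGS_BUILD_UNITTESTS OFF CACHE BOOL "" FORCE)
add_subdirectory(vendor/args)  # assumed submodule location

# Link the header-only args target into the (hypothetical) benchmarks target.
target_link_libraries(benchmarks PRIVATE args)
```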

More importantly, the tasks microbenchmark now does slightly more interesting work (building a tree and an upside-down tree), but I am seeing some strange behaviour in the generated dependencies which I still need to look into.

PeterTh (Contributor, Author) commented Mar 14, 2022

Investigate re-using graph layout generators between isolated unit test microbenchmarks and this PR.

fknorr (Contributor) commented Mar 27, 2022

In this distributed setting, it would be interesting to benchmark common communication patterns (gather / scatter / all-to-all), as well as scalar reductions from host buffers, against the optimized MPI collectives.

PeterTh (Contributor, Author) commented Jun 13, 2023

We do have some microbenchmarks by now; will revisit this in the future.

@PeterTh PeterTh closed this Jun 13, 2023
@PeterTh PeterTh deleted the benchmarks branch January 31, 2024 10:13