
Feature: Benchmarking without a reference? #5

Open · bluenote10 opened this issue Oct 6, 2023 · 4 comments

@bluenote10
Reading the documentation made me wonder why the `__benchmarks__` entries are tuples with two functions `a` and `b`. Apparently this assumes that a benchmark always has to be performed against some kind of reference?

I'm wondering if the framework can easily be adapted to use cases where one doesn't have such a reference function. Say, I just want to benchmark `sort_three` without the somewhat arbitrary `sort_seven` reference.

Is this already supported somehow?
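
For context, a minimal sketch of the current two-function format as I understand it from the docs; the `sort_three`/`sort_seven` names come from this discussion, while the trailing label string and the ordering of the two functions inside the tuple are from memory of the README and may differ:

```python
# benchmarks/bench_sort.py -- hypothetical file name
import random


def sort_seven():
    """Reference case: sort seven items."""
    for _ in range(10_000):
        sorted(random.sample(range(100), 7))


def sort_three():
    """Case of interest: sort three items."""
    for _ in range(10_000):
        sorted(random.sample(range(100), 3))


# rich-bench picks up this list; each entry pairs the two functions
# that are benchmarked and compared against each other.
__benchmarks__ = [
    (sort_seven, sort_three, "Sorting 3 items instead of 7"),
]
```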

@tonybaloney (Owner)
No, that's not currently supported.

Just so I understand, would you run all the benchmarks with no comparison function? You're only interested in min, max and mean?

I could change the code to look at the length of the tuple and not run a comparator test if there isn't a second benchmark defined. It would get tricky with the table output if some entries had a comparator and some didn't.
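
Purely as a hypothetical illustration of that idea (none of this is implemented), mixed entries could look something like:

```python
# Hypothetical mix of compared and stand-alone benchmarks.
# Two-function entries keep the current comparator behaviour;
# single-function entries would only report min/max/mean,
# which is where the table output gets tricky.
__benchmarks__ = [
    (sort_seven, sort_three, "Sorting 3 items instead of 7"),  # compared
    (sort_three, "Sorting 3 items"),                           # stand-alone
]
```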

@bluenote10 (Author)
> Just so I understand, would you run all the benchmarks with no comparison function? You're only interested in min, max and mean?

Yes, that would already suffice (perhaps plus a trimmed mean / median).

What I had in mind is a workflow like the one offered by frameworks such as criterion.rs in Rust. By default they just measure the test cases as defined. Internally they also offer comparison against the last execution or against explicitly saved snapshots. This can be helpful, for example, to benchmark before and after an optimization or refactoring, or to track the longer-term performance of a project and see how things evolve over time as dependencies change (by committing these snapshot files to git).

This internal diffing isn't super important though, because a lot can already be achieved with manual before/after or snapshot runs. Thanks to the beautiful table output, one could for instance simply take the output and append it to some benchmarks_log.txt committed to git, which would also nicely document the performance evolution of a project.
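
A rough sketch of that manual snapshot workflow, assuming the CLI entry point is `richbench` and the benchmark files live in `benchmarks/` (adjust both to your setup):

```python
# append_benchmarks.py -- capture a benchmark run and append it to a log
import datetime
import subprocess

# Run the benchmarks and capture the rendered table.
result = subprocess.run(
    ["richbench", "benchmarks/"],
    capture_output=True,
    text=True,
    check=True,
)

# Append the output, with a timestamp, to a log committed to git.
with open("benchmarks_log.txt", "a", encoding="utf-8") as log:
    log.write(f"\n# {datetime.datetime.now().isoformat()}\n")
    log.write(result.stdout)
```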

@tonybaloney (Owner)
ASV (airspeed velocity) might suit your requirements; it has a database, a web UI and a ton of other tools. Some big data-science libraries use it (IIRC, pandas included).

https://asv.readthedocs.io/en/stable/using.html

@bluenote10 (Author)
Yes indeed, ASV is pretty nice! In certain use cases, though, something more lightweight like rich-bench is a nice alternative due to its simplicity.
