Add benchmarks #12
I think a benchmarks folder is fine. There's not really a standard practice, except for the few packages that can use PkgBenchmark.jl. It might be easiest to just add some notebooks to DiffEqBenchmarks.jl. (Don't put notebooks in this repo, though, since the git history then stores every modification, and notebook diffs tend to be the whole file, which grows repos really fast.)
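For reference, a minimal sketch of the PkgBenchmark.jl layout mentioned above. The file path follows PkgBenchmark's default convention; the suite contents and problem size are illustrative assumptions, not anything this repo ships:

```julia
# benchmark/benchmarks.jl -- the file PkgBenchmark.jl picks up by default.
using BenchmarkTools
using ExponentialUtilities

const SUITE = BenchmarkGroup()

# Hypothetical problem size, purely for illustration.
A = rand(256, 256)
v = rand(256)

SUITE["expv"] = @benchmarkable ExponentialUtilities.expv(1.0, $A, $v)
```

With that file in place, `using PkgBenchmark; benchmarkpkg("ExponentialUtilities")` runs the suite, and `judge` can compare the results across two commits.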
This package might be the fastest:

```julia
julia> using BenchmarkTools, Expokit, ExponentialUtilities

julia> A = rand(1024, 1024); v = rand(1024);

julia> @benchmark Expokit.expmv(1.0, $A, $v)
BenchmarkTools.Trial:
  memory estimate:  1.71 MiB
  allocs estimate:  261
  --------------
  minimum time:     39.055 ms (0.00% GC)
  median time:      45.978 ms (0.00% GC)
  mean time:        46.702 ms (1.29% GC)
  maximum time:     87.204 ms (49.07% GC)
  --------------
  samples:          108
  evals/sample:     1

julia> @benchmark ExponentialUtilities.expv(1.0, $A, $v)
BenchmarkTools.Trial:
  memory estimate:  469.39 KiB
  allocs estimate:  1027
  --------------
  minimum time:     9.401 ms (0.00% GC)
  median time:      11.130 ms (0.00% GC)
  mean time:        11.297 ms (1.29% GC)
  maximum time:     51.464 ms (80.93% GC)
  --------------
  samples:          443
  evals/sample:     1
```
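One more case a benchmark suite could cover (a hedged sketch, assuming the `arnoldi`/`expv!` split from the ExponentialUtilities README works as documented): the Krylov subspace can be built once and reused across time points, which has a different cost profile from the one-shot `expv` call timed above.

```julia
# Sketch: split the Arnoldi factorization from the matrix-exponential action.
# The problem size and subspace dimension m are illustrative assumptions.
using ExponentialUtilities

n = 1024
A = rand(n, n); v = rand(n)
w = similar(v)

Ks = arnoldi(A, v; m = 30)  # build the Krylov subspace once
expv!(w, 1.0, Ks)           # w ≈ exp(1.0 * A) * v, using the cached subspace
expv!(w, 2.0, Ks)           # the same subspace serves other time points
```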
From Slack:
The relevant PR is here: SciML/OrdinaryDiffEq.jl#372.
@ChrisRackauckas what's the standard practice for benchmarking a Julia repo? I'm thinking about adding a benchmark script (notebook?) to `test/` and skipping it in regular unit tests. The results will then be recorded in a separate markdown file.
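On the `test/` idea: a minimal sketch of how such a script could be skipped during regular unit tests, gated behind an environment variable. The variable name, file names, and layout are all assumptions for illustration:

```julia
# test/runtests.jl -- hedged sketch; "BENCHMARK" is a hypothetical opt-in flag.
using Test

@testset "ExponentialUtilities" begin
    include("unit_tests.jl")  # hypothetical regular test file
end

# Run the (slow) benchmark script only when explicitly requested, e.g.:
#   BENCHMARK=true julia --project -e 'using Pkg; Pkg.test()'
if get(ENV, "BENCHMARK", "false") == "true"
    include("benchmarks.jl")  # hypothetical script that writes the markdown report
end
```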