Faster and more general reductions for sparse matrices #10536
Conversation
Force-pushed from 5489fb8 to ff3723d.
Hah, that's a 1520x speed up for […]
👍
Doesn't this assume the function being mapped is pure? While that's the case for the higher-level wrappers around mapreduce, it's not necessarily true for general mapreduce.
That's true, or at least it assumes that […]. We could have a […]. How common is it that you want to pass an impure function to mapreduce?
I don't expect impure functions to be common with mapreduce.
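As an illustration of the purity point above, here is a minimal sketch, in the spirit of this change but not the PR's actual code, of a sparse reduction that applies `f` to each stored entry and evaluates `f(zero(T))` only once for all of the implicit zeros. The helper name `sparse_mapreduce_sketch` is made up for this example.

```julia
# Minimal sketch of the sparse mapreduce fast path (illustrative only).
using SparseArrays

function sparse_mapreduce_sketch(f, op, v0, A::SparseMatrixCSC{T}) where {T}
    r = v0
    for x in nonzeros(A)          # f is applied to every stored entry
        r = op(r, f(x))
    end
    nzeros = length(A) - nnz(A)   # count of implicit (non-stored) zeros
    if nzeros > 0
        fz = f(zero(T))           # evaluated ONCE and reused for all zeros,
        for _ in 1:nzeros         # which is only correct if f is pure
            r = op(r, fz)
        end
    end
    return r
end

A = sprand(100, 100, 0.05)
sparse_mapreduce_sketch(abs2, +, 0.0, A) ≈ sum(abs2, Matrix(A))  # true
```

An impure `f` would observe far fewer calls than `length(A)` under this scheme, which is exactly the assumption being discussed.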
Force-pushed from 74f528c to 9e777b0.
I noted the possible reuse of return values of f.
Looks clear enough to me.
👍
We should also document mapreduce for sparse.
Does it need separate documentation? The interface is the same as the generic mapreduce.
Adding to NEWS sounds like a good idea, and the performance improvements deserve a mention there as well.
This hooks sparse matrix reductions into the `mapreduce`/`mapreducedim` framework so that reductions based on those two functions (`sum`, `prod`, `maximum`, `minimum`, `sumabs`, `sumabs2`, `maxabs`, `minabs`, and others) are fast for sparse matrices. I also implemented a method of `centralize_sumabs2!`, which is the function that computes the reduction for `var` and `std`. This required a tiny tweak to `statistics.jl`.

Before:

After:

All times are after warmup.
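The original Before/After timings are not reproduced above. Purely as a hypothetical illustration of the kinds of calls this change makes fast (on current Julia, where `sumabs2`/`maxabs` and friends have since been replaced by `sum(abs2, ...)`-style calls and `var`/`std` live in the `Statistics` stdlib), a timing session might look like:

```julia
# Hypothetical timing session; numbers vary by machine and are not the PR's results.
using SparseArrays, Statistics

A = sprand(10_000, 10_000, 1e-3)   # ~10^5 stored entries out of 10^8 elements

@time sum(A)            # whole-array reductions route through mapreduce
@time sum(abs2, A)      # formerly sumabs2(A)
@time maximum(abs, A)   # formerly maxabs(A)
@time sum(A, dims=1)    # dimensional reductions route through mapreducedim
@time var(A)            # var/std reduce via centralize_sumabs2! per the description
@time std(A, dims=2)
```

In each case only the stored entries need to be visited, rather than all 10^8 elements of the equivalent dense matrix.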