faster sparse(::Diagonal) #23536
Conversation
Is the code path already exercised by existing tests?
Lines 1254 to 1255 in b729e58
Lines 1295 to 1308 in b729e58
Force-pushed from e3b4bd0 to e0e5a59
base/sparse/sparsematrix.jl (Outdated)

```julia
@@ -763,6 +763,11 @@ function sparse(B::Bidiagonal)
    return sparse([1:m;1:m-1],[1:m;2:m],[B.dv;B.ev], Int(m), Int(m)) # upper bidiagonal
end

function sparse(D::Diagonal{T}) where T
    m = length(D.diag)
    return SparseMatrixCSC(m, m, [1:m; m+1], [1:m;], convert(Vector{T}, D.diag))
```
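The construction above works because a `Diagonal` maps directly onto CSC storage: every column holds exactly one stored entry, at the row equal to the column index. A minimal sketch of the same idea (not the PR code itself; it uses the `LinearAlgebra`/`SparseArrays` module paths of later Julia versions rather than the 0.6-era `base/sparse` layout in the diff):

```julia
using LinearAlgebra, SparseArrays

D = Diagonal([10.0, 20.0, 30.0])
m = length(D.diag)

# CSC fields: column j's entries live at indices colptr[j]:colptr[j+1]-1
# of rowval/nzval. One entry per column, at row j, with value D.diag[j].
S = SparseMatrixCSC(m, m, collect(1:(m+1)), collect(1:m), copy(D.diag))

S == Matrix(D)  # same values, sparse storage
```

This avoids the generic fallback of scanning every entry of the dense representation, which is what makes the direct construction faster.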
Perhaps `collect(1:(m+1))` in place of `[1:m; m+1]`, and likewise `collect(1:m)` in place of `[1:m;]`?
```julia
julia> m = 1000
1000

julia> @benchmark [1:$m; $m+1]
BenchmarkTools.Trial:
  memory estimate:  9.23 KiB
  allocs estimate:  46
  --------------
  minimum time:     5.816 μs (0.00% GC)
  median time:      6.441 μs (0.00% GC)
  mean time:        8.004 μs (9.65% GC)
  maximum time:     444.588 μs (93.81% GC)
  --------------
  samples:          10000
  evals/sample:     6

julia> @benchmark collect(1:$m)
BenchmarkTools.Trial:
  memory estimate:  7.97 KiB
  allocs estimate:  2
  --------------
  minimum time:     668.695 ns (0.00% GC)
  median time:      1.164 μs (0.00% GC)
  mean time:        2.173 μs (31.38% GC)
  maximum time:     24.659 μs (72.19% GC)
  --------------
  samples:          10000
  evals/sample:     131
```
Interesting, will update.
Might the `convert(Vector{T}, D.diag)` produce unexpected aliasing if `D.diag isa Vector{T}`?
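The concern can be checked directly: `convert(T, x)` is a no-op that returns `x` itself whenever `x` already has type `T`, so the sparse matrix and the `Diagonal` would end up sharing storage. A small illustration:

```julia
# convert returns the argument unchanged when it already has the target type,
# so no copy is made here:
d = [1.0, 2.0, 3.0]
v = convert(Vector{Float64}, d)
v === d  # same array object

# Mutating the "converted" vector therefore mutates the original too:
v[1] = 99.0
d[1]  # now 99.0
```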
Yes, but that was intentional from my side 😄
Cheers :). Do other `sparse` methods alias their inputs?
No, because they can't really do that; there are no other arrays that have all their nzvals in one ordered vector like `Diagonal` does. Perhaps best to make a copy here?
Perhaps copying here as well would be for the best then, yes :). Thoughts?
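The copying variant the reviewers converge on can be sketched as follows (under a hypothetical standalone name, to avoid redefining `Base`'s `sparse`, and with the module paths of later Julia versions):

```julia
using LinearAlgebra, SparseArrays

function sparse_from_diagonal(D::Diagonal{T}) where T
    m = length(D.diag)
    # copy breaks the aliasing with D.diag; convert still handles the case
    # where the diagonal is stored in some non-Vector AbstractVector.
    return SparseMatrixCSC(m, m, collect(1:(m+1)), collect(1:m),
                           convert(Vector{T}, copy(D.diag)))
end

d = [1.0, 2.0]
S = sparse_from_diagonal(Diagonal(d))
S.nzval === d  # false: the sparse matrix now owns independent storage
```

Copying costs one extra `O(m)` allocation but keeps `sparse` consistent with its other methods, none of which alias their inputs.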
Force-pushed from e0e5a59 to b1e44d9
Updated, and updated the benchmarks in the top post too.
Thanks @fredrikekre! :)
Missing method: `sparse(::Diagonal)`