sprand and SparseMatrixCSC constructor simplifications #30617
Conversation
I don't believe that the nanosoldier perf benchmarks have anything relevant here.
@ViralBShah I am ok with the title change, but note that this is not exclusively about `sprand`.
I did mean to update the title to reflect that. Will do shortly.
With this new code, can we get rid of the specialized StridedMatrix constructor, or does the latter still have a substantial performance advantage?
Unfortunately it is still faster:
master:
PR:
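A sketch of the kind of comparison at issue here, pitting the specialized strided-matrix path against the generic `AbstractMatrix` fallback on a single build (size and density are illustrative, not from the PR):

```julia
using BenchmarkTools, SparseArrays

M  = Matrix(sprand(1000, 1000, 0.05))     # dense storage, ~5% nonzeros
Mt = transpose(M)                         # Transpose is not a StridedMatrix,
                                          # so it takes the generic fallback
@btime SparseMatrixCSC{Float64,Int}($M)   # specialized StridedMatrix constructor
@btime SparseMatrixCSC{Float64,Int}($Mt)  # generic AbstractMatrix constructor
```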
Noting the extra allocation, I tried the following:

```julia
function SparseMatrixCSC{Tv,Ti}(M::AbstractMatrix) where {Tv,Ti}
    @assert !has_offset_axes(M)
    # Count the nonzeros up front so I and V can be allocated exactly once.
    nz = count(!iszero, M)
    I = Vector{Ti}(undef, nz)
    V = Vector{Tv}(undef, nz)
    i, j = 0, 0
    for v in M  # column-major iteration yields sorted linear indices
        i += 1
        if !iszero(v)
            I[j += 1] = i
            V[j] = v
        end
    end
    return sparse_sortedlinearindices!(I, V, size(M)...)
end
```

And we get much closer! Will try to benchmark better tomorrow.
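As a quick sanity check (my own sketch, not from the thread): whichever method ends up handling the dense case, the public constructor should agree with `sparse` and report the right number of stored entries.

```julia
using SparseArrays

A = [0.0 1.0 0.0;
     2.0 0.0 0.0;
     0.0 0.0 3.0]
S = SparseMatrixCSC{Float64,Int}(A)   # dense-to-sparse conversion under discussion
@assert S == sparse(A) && nnz(S) == 3
```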
This is probably fast enough to get rid of the specialized Matrix method. Performance-critical code will probably never use these methods anyway; you always make large sparse matrices directly, not from dense matrices. These conversions are just here for convenience for small interactive problems, I think.
The situation is not clear-cut. Maybe it would be wiser to leave the PR as it is and keep the specialized `StridedMatrix` constructor: the above approach is faster for very dense matrices but substantially slower for sparser ones.

Very dense case:

master:
above proposal:
PR:

Sparser case:

master:
above proposal:
PR:
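A sketch of how the density dependence might be explored locally; the comparison in the thread runs the same conversion on different builds (master, this PR, the count-first variant), so a sweep like this would be run once per build. Sizes and densities are illustrative:

```julia
using BenchmarkTools, SparseArrays

for p in (0.5, 0.1, 0.01)              # nonzero density, dense to sparse
    M = Matrix(sprand(2000, 2000, p))
    print("p = $p: ")
    @btime SparseMatrixCSC{Float64,Int}($M)
end
```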
The failed check is a timeout:
BTW how would people feel about exporting it?

EDIT: added full qualifications and fixed a mistake
I've restarted the failing build, but it shouldn't really matter since we have enough greens.
Just to recap: with the PR as-is, we get "PR" for the first set and "master" for the second set (because we would keep the existing constructor from `StridedMatrix`).

Besides the simplification and performance improvement, the PR fixes a bug in #30627 (type instability and wrong return type for non-0x0 matrices). Let me know how to proceed!

EDIT: rebased.
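A sketch of the kind of regression check that could cover the return-type issue; the actual reproducer is in #30627, and the inputs below are illustrative:

```julia
using Test, SparseArrays

# The constructor should return exactly the requested element and index types,
# including for empty and wrapped (non-strided) inputs, and inference should see it.
for A in (zeros(0, 0), zeros(2, 3), transpose(ones(3, 2)))
    S = @inferred SparseMatrixCSC{Float64,Int32}(A)
    @test S isa SparseMatrixCSC{Float64,Int32}
end
```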
* Simpler and faster sparse
* remove sparse_IJ_sorted!
4621559 to 8c9653a
bump?
@stevengj Are you ok with this PR?
This PR implements some simplifications to the sparse code made very easy by #30494. In there, sparse random matrices were effectively constructed from a vector of linear indices (which for `sprand` were taken from `randsubseq`). So I factored out the sparse matrix construction part from `sprand`, so that it can now be used both in `sprand` and in the fallback sparse constructor from `AbstractMatrix` (which does not catch e.g. `Matrix`, because there is a more specialized constructor from `StridedMatrix`). I tested that there is no performance regression by checking against `Transpose`. In fact there seems to be a (small) performance gain, as (a) there is no need to check for duplicates and (b) using linear indices avoids the allocation of an extra vector of indices. Is there a way to check for performance regressions more systematically?

(Net outcome from this PR is -60 lines.)
MASTER:
PR:
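The factored-out construction helper is referenced above as `sparse_sortedlinearindices!` but not shown in this thread. Below is a minimal, hypothetical sketch of the idea (the name `csc_from_sorted_linear` and its exact signature are mine, not the PR's): count the nonzeros per column to form the column pointer, and take remainders of the linear indices for the row indices.

```julia
using SparseArrays

# Hypothetical helper (not the PR's actual implementation): build an m×n
# SparseMatrixCSC from nonzero values V stored at sorted, duplicate-free,
# 1-based column-major linear indices I.
function csc_from_sorted_linear(I::Vector{Ti}, V::Vector{Tv}, m::Integer, n::Integer) where {Tv,Ti<:Integer}
    colptr = zeros(Ti, n + 1)
    rowval = similar(I)
    for (k, li) in enumerate(I)
        col, row = divrem(li - 1, m)   # zero-based column and row of this nonzero
        rowval[k] = row + 1
        colptr[col + 2] += 1           # count nonzeros per column, shifted by one slot
    end
    colptr[1] = 1
    for j in 1:n                       # prefix sums turn the counts into column pointers
        colptr[j + 1] += colptr[j]
    end
    return SparseMatrixCSC(m, n, colptr, rowval, V)
end

# Usage: linear indices 1, 4, 5 of a 2×3 matrix are (1,1), (2,2) and (1,3).
S = csc_from_sorted_linear([1, 4, 5], [1.0, 2.0, 3.0], 2, 3)
@assert S == sparse([1, 2, 1], [1, 2, 3], [1.0, 2.0, 3.0], 2, 3)
```

Because the indices arrive sorted and without duplicates, no sorting or merging pass is needed, which is where the (small) performance gain described above comes from.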