Subtype AbstractArray? #236
This definitely sounds interesting. However, I don't think it's as simple as it first sounds, and I am not sure what the best strategy here is. To elaborate: there are quite a few different … The biggest problem, imo, would be the … Finally, there is also the …

```julia
using LazyArrays, BenchmarkTools, QuantumOptics, LinearAlgebra

N = 50
A = randn(ComplexF64, N, N); B = randn(ComplexF64, N, N)
C = Kron(A, B)  # lazy Kronecker product
b = randn(ComplexF64, N^2); c = similar(b)

bgen = GenericBasis(N)
A_op = DenseOperator(bgen, A)
B_op = DenseOperator(bgen, B)
C_op = LazyTensor(bgen^2, [1, 2], [B_op, A_op])
b_ket = Ket(bgen^2, b)
c_ket = copy(b_ket)

mul!(c, C, b)
operators.gemv!(complex(1.0), C_op, b_ket, 0.0im, c_ket)
@assert c ≈ c_ket.data
```

Now, if I am not missing something here, it seems that the in-place multiplication currently implemented in QuantumOptics is much faster:

```julia
julia> @benchmark mul!(c, C, b)
BenchmarkTools.Trial:
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     155.144 ms (0.00% GC)
  median time:      155.447 ms (0.00% GC)
  mean time:        155.668 ms (0.00% GC)
  maximum time:     157.180 ms (0.00% GC)
  --------------
  samples:          33
  evals/sample:     1

julia> @benchmark operators.gemv!(complex(1.0), C_op, b_ket, complex(0.0), c_ket)
BenchmarkTools.Trial:
  memory estimate:  592 bytes
  allocs estimate:  11
  --------------
  minimum time:     65.707 ms (0.00% GC)
  median time:      66.104 ms (0.00% GC)
  mean time:        66.711 ms (0.00% GC)
  maximum time:     79.897 ms (0.00% GC)
  --------------
  samples:          75
  evals/sample:     1
```

I would really like to discuss this idea more in order to figure out the best way to go about things here. I am just not sure how to best restructure the type system to do this (especially with the …).
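Incidentally, the gap between the two timings above is consistent with the structured way a Kronecker product can be applied: in Julia's column-major convention, `(A ⊗ B) * vec(X)` equals `vec(B * X * transpose(A))`, i.e. two N×N multiplications instead of one N²×N² one. A minimal sketch using only LinearAlgebra (this is the textbook vec trick, not necessarily what either package does internally):

```julia
using LinearAlgebra

N = 4
A = randn(ComplexF64, N, N); B = randn(ComplexF64, N, N)
b = randn(ComplexF64, N^2)

# Dense reference: materialize the N^2 x N^2 Kronecker product explicitly.
c_dense = kron(A, B) * b

# vec trick: (A ⊗ B) * vec(X) == vec(B * X * transpose(A)), two small matmuls.
X = reshape(b, N, N)
c_fast = vec(B * X * transpose(A))

@assert c_dense ≈ c_fast
```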
Thanks for considering it! By the way, I'm pretty new to Julia and even to programming more generally, so I'm sure there are more qualified people to discuss with.

For sparse operators, it looks like there's some discussion about general interfaces for sparse arrays in JuliaLang/julia#25760, which might be relevant (though it doesn't look like there was any resolution there). Also, I don't quite understand what you mean by "I think we essentially need to define our own version of …".

For … A related aside: …

Putting these things together, I thought Operators could be parametric types, parametrized by their bases, and subtypes of …
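To make that suggestion concrete, here is a minimal sketch of what a parametric, AbstractMatrix-subtyping operator could look like. The names (`GenBasis`, `DenseOp`) are made up for illustration, not existing QuantumOptics types; the point is that implementing just `size` and `getindex` already buys the generic AbstractMatrix machinery:

```julia
using LinearAlgebra

# Hypothetical sketch: GenBasis and DenseOp are made-up names, not QuantumOptics API.
struct GenBasis
    dim::Int
end

struct DenseOp{BL<:GenBasis,BR<:GenBasis,T} <: AbstractMatrix{T}
    basis_l::BL
    basis_r::BR
    data::Matrix{T}
end

# The minimal AbstractArray interface: size and getindex.
Base.size(op::DenseOp) = size(op.data)
Base.getindex(op::DenseOp, i::Int, j::Int) = op.data[i, j]

b = GenBasis(3)
op = DenseOp(b, b, ComplexF64[1 0 0; 0 2 0; 0 0 3])

# Generic AbstractMatrix functionality now works without further code:
@assert tr(op) == 6 + 0im
@assert op * ones(ComplexF64, 3) == ComplexF64[1, 2, 3]
```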
Thanks for all the references, they are very helpful. Yes, you are probably right in that we can just do it via the data field; never mind my statement there. Also, in the reference you provided they point to … LinearMaps.jl is indeed precisely the same as the way we handle the …

I have also thought about using StaticArrays, as I have rarely found myself wanting to change the dimension of an operator. Also, it's actually not so simple, since the bases fields carry the dimension information. Therefore, it might actually make sense to use this as the default for the data fields. Alternatively, we can make a new …

This type parametrization you mention is the subject of the PR I started, #234. There, I did exactly what you suggest: namely, create one …

I don't think the decision whether operators are sparse or dense should be given by the basis, though. For example, the position operator is diagonal in the position basis (thus should be stored as sparse), whereas the momentum operator has only nonzero elements. Any basis should allow for both operator types.

Also note that these are all pretty essential changes to the type system, so once we decide on the strategy, it will still take some time to do it properly.
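As an aside on what parametrizing by the basis buys: if the basis is part of the operator's type, basis compatibility can be enforced by dispatch, so a mismatch fails with a MethodError at method-lookup time rather than a runtime dimension check. A hypothetical sketch (`SBasis` and `Op` are made-up names, assuming the dimension is carried in the type, StaticArrays-style):

```julia
# Hypothetical sketch (SBasis and Op are made-up names): if the basis is a type
# parameter, basis compatibility becomes a dispatch constraint.
struct SBasis{N} end   # dimension carried in the type, StaticArrays-style

struct Op{BL,BR,T}
    data::Matrix{T}
end

# Multiplication is only defined when the inner bases match, so a mismatch is
# a MethodError at dispatch time instead of a runtime dimension check.
function Base.:*(x::Op{BL,B,T}, y::Op{B,BR,T}) where {BL,B,BR,T}
    return Op{BL,BR,T}(x.data * y.data)
end

a = Op{SBasis{2},SBasis{3},Float64}(randn(2, 3))
b = Op{SBasis{3},SBasis{2},Float64}(randn(3, 2))
c = a * b   # fine: inner bases agree
@assert c isa Op{SBasis{2},SBasis{2},Float64}
# `a * a` would throw a MethodError: no method matches mismatched inner bases.
```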
Ah, I saw that PR a while ago but for some reason didn't look at it more carefully. Yeah, that's exactly what I was thinking. In fact, now that you point to it, I surely read the issue about checking bases at compile time at some point; that must be why that was in my head. Sorry. Anyway, I see what you mean with …

One thing to mention, though, is the type relation …

Re: not tying the storage type to the basis: I see, that makes sense.

As for the StaticArrays considerations, it seems there are performance (or at least compile-time) penalties for large matrices. The readme mentions: "For example, the performance crossover point for a matrix multiply microbenchmark seems to be about 11x11 in julia 0.5 with default optimizations". In my experience, I got impatient just waiting to evaluate one 20x20 random density matrix on my laptop using StaticArrays-based code, and ended up cancelling it after a few minutes. Maybe one could have …
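One way to keep the storage question open without tying it to the basis is to parametrize the wrapper on its data field, so dense, sparse, or static backing all share one operator type and dispatch can still specialize on the backing. A hypothetical sketch (`FlexOp` is a made-up name):

```julia
using LinearAlgebra, SparseArrays

# Hypothetical sketch (FlexOp is a made-up name): parametrize the wrapper on its
# data field, so dense, sparse, or static backing all share one operator type.
struct FlexOp{T,M<:AbstractMatrix{T}} <: AbstractMatrix{T}
    data::M
end
Base.size(op::FlexOp) = size(op.data)
Base.getindex(op::FlexOp, i::Int, j::Int) = op.data[i, j]

d = FlexOp(randn(3, 3))          # dense-backed
s = FlexOp(sparse(1.0I, 3, 3))   # sparse-backed, same wrapper type
@assert d isa FlexOp{Float64,Matrix{Float64}}
@assert s.data isa SparseMatrixCSC
```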
Okay, so to summarize: I like the idea of subtyping operators that are represented by matrices to … The remaining open question to me is this: should …
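To make the distinction in the summary concrete: a lazy operator need not be an AbstractArray at all; it can be a LinearMaps.jl-style object that only knows how to act on vectors via `mul!`. A hypothetical sketch (`LazyKron` is a made-up name, using the standard vec trick for the action):

```julia
using LinearAlgebra

# Hypothetical sketch (LazyKron is a made-up name): a lazy operator that is not
# an AbstractArray; like LinearMaps.jl, it only knows how to act on vectors.
struct LazyKron{T,MA<:AbstractMatrix{T},MB<:AbstractMatrix{T}}
    A::MA
    B::MB
end

function LinearAlgebra.mul!(y::AbstractVector, K::LazyKron, x::AbstractVector)
    n = size(K.B, 1)
    X = reshape(x, n, :)
    y .= vec(K.B * X * transpose(K.A))   # apply (A ⊗ B) x without materializing A ⊗ B
    return y
end

A = randn(3, 3); B = randn(3, 3)
x = randn(9); y = similar(x)
mul!(y, LazyKron(A, B), x)
@assert y ≈ kron(A, B) * x
```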
I know I've let this issue go quite stale, but I thought about it a lot. Finally, I decided that subtyping operators to …
I was thinking it might make sense to have operators subtype abstract array and implement the corresponding interface. I think that would make sense conceptually, and would make it easier to use other Julia functions with them. For example, one could then use QuantumOptics Operators in LazyArrays.jl functions immediately, and not have to worry about implementing lazy functions in QuantumOptics too.
Does that seem like a good idea?