When `A` is a sparse matrix with `Float32` element type, passing it to `klu` results in a `Float64` factorization. Is it possible to stick to `Float32` for the whole symbolic and numerical factorization process?
No, unfortunately `Float64` and `ComplexF64` are the only supported numeric types. See Section 5 of https://fossies.org/linux/SuiteSparse/KLU/Doc/KLU_UserGuide.pdf. I'll have a native-Julia port of KLU out as soon as I find time to polish it, which will support any numeric type.
I'll leave this open for now to remind me to ping here when I release the port.
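In the meantime, a minimal sketch of the behavior and a workaround, assuming the KLU.jl `klu` entry point: the `Float32` input is promoted to `Float64` before factorization, and the solution can be converted back to `Float32` afterwards if narrower storage is needed elsewhere.

```julia
using SparseArrays, KLU

# A small Float32 sparse matrix. KLU only supports Float64/ComplexF64,
# so the input is promoted to Float64 before factorization.
A = sparse(Float32[4 1 0; 1 4 1; 0 1 4])

# Explicit conversion; this mirrors what happens implicitly when a
# Float32 matrix is passed to klu.
F = klu(SparseMatrixCSC{Float64, Int64}(A))

b = [1.0, 2.0, 3.0]
x = F \ b              # solve in Float64

# Workaround for Float32 pipelines: convert the solution back.
x32 = Float32.(x)
```

Note that the symbolic and numeric factorization themselves still run in `Float64`; the conversion only narrows the result afterwards, so this does not save factorization memory.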