Making linear solvers independent libraries #40
Conversation
Codecov Report
```diff
@@            Coverage Diff            @@
##           master     #40      +/-  ##
==========================================
- Coverage   91.11%   88.07%    -3.05%
==========================================
  Files          26       28       +2
  Lines        2950     2959       +9
==========================================
- Hits         2688     2606      -82
- Misses        262      353      +91
```
Continue to review the full report at Codecov.
@frapac Refactoring seems almost done! This PR makes the following changes:
Please feel free to check out this branch and let me know what you think.
This is ❤️
The loading time of MadNLP has been reduced by a lot with this PR. I am a fan of the new structure.
I made a few comments in the PR. In short, I think we need to address:
- Do we really need a submodule in the linear algebra wrappers? E.g., defining both MadNLPGPU and MadNLPLapackGPU?
- How and when do we register the subpackages? This would allow simplifying the CI.
- I would also suggest rebasing the git commits, to avoid the long history associated with the different CI attempts.
lib/MadNLPGPU/src/MadNLPGPU.jl (outdated)

```julia
include("lapackgpu.jl")

export MadNLPLapackGPU
```
I don't think we need to define another submodule there: MadNLPLapackGPU is redundant with the parent module MadNLPGPU, which is currently an empty shell.
The intention of making it an empty shell is to allow extending the MadNLPGPU package further in the future. I was thinking that we may want to try using cuSOLVER's incomplete LU along with Krylov. Also, the MadNLPGPU module may host NonlinearProgram data structures for the GPU. With this approach, we don't need to have too many glue packages.
I get it, it makes total sense indeed
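The layout the two of us converged on might look like the following sketch. This is illustrative only, not the actual MadNLPGPU source: `LapackGPUSolver` and the placeholder `factorize!` are hypothetical names standing in for the real solver code.

```julia
# Sketch: a flat MadNLPGPU module hosting the LAPACK-GPU solver directly,
# instead of nesting a separate MadNLPLapackGPU submodule.
module MadNLPGPU

# Hypothetical solver type, living directly in the parent module so the
# "empty shell" can still grow (e.g. cuSOLVER incomplete LU + Krylov) later.
struct LapackGPUSolver
    algorithm::Symbol
end

# Placeholder for the actual CUSOLVER-backed factorization.
factorize!(s::LapackGPUSolver) = s

end # module

# Usage: one package, one namespace, no inner MadNLPLapackGPU.
s = MadNLPGPU.LapackGPUSolver(:bunchkaufman)
MadNLPGPU.factorize!(s)
```

The point of the flat layout is that users and glue code only ever see one name, `MadNLPGPU`, while the module still has room to absorb future GPU functionality.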
```julia
module MadNLPGPU

import CUDA: CUBLAS, CUSOLVER, CuVector, CuMatrix, toolkit_version, R_64F
import MadNLP:  # (import list truncated in the diff view)
```
These imports are quite complicated to read, in my opinion. How about directly importing a LinearSolvers submodule?
Currently, the empty linear solver template is not declared as a submodule (see here). Would there be a benefit to making it a module? We would still need to import LinearSolvers: introduce, factorize!, solver!, ...
The advantage is mostly in clarifying the code: with LinearSolvers as a submodule, we could import only the submodule and then overload the different functions directly. That way, we know immediately where the functions we are overloading live (otherwise, we have to check all the imports at the beginning of the file to understand exactly what we are doing).
Sure, that makes sense. I'll make the changes
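The pattern agreed on here might look like the following sketch. The module and function names are simplified stand-ins for MadNLP's actual API, not the real definitions:

```julia
# Stand-in for MadNLP's LinearSolvers submodule, which declares the
# generic solver interface as empty generic functions.
module LinearSolvers
function factorize! end
function inertia end
end

# A solver wrapper imports only the submodule...
module LapackGPUSketch
import ..LinearSolvers

struct Solver
    n::Int
end

# ...and extends it with qualified names, so it is immediately visible
# which interface each method belongs to.
LinearSolvers.factorize!(s::Solver) = s
LinearSolvers.inertia(s::Solver) = (s.n, 0, 0)  # (positive, zero, negative) counts

end # module
```

With this pattern, a reader sees `LinearSolvers.` on every overload instead of having to cross-reference a long `import MadNLP: ...` list at the top of the file.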
lib/MadNLPGPU/src/lapackgpu.jl (outdated)

```diff
@@ -125,7 +125,7 @@ if toolkit_version() >= v"11.3.1"
 end
 else
 is_inertia(M::Solver) = M.opt.lapackgpu_algorithm == BUNCHKAUFMAN
-inertia(M::Solver) = LapackCPU.inertia(M.etc[:fact_cpu],M.etc[:ipiv_cpu],M.etc[:info_cpu][])
+inertia(M::Solver) = MadNLPLapackCPU.inertia(M.etc[:fact_cpu],M.etc[:ipiv_cpu],M.etc[:info_cpu][])
```
Ideally, we should directly overload the templates implemented in LinearSolvers, e.g.:
LinearSolvers.inertia(M::Solver) = ...
```julia
# diff excerpt: @@ -137,8 +137,13 @@ (hunk truncated)
end

introduce(::Solver) = "pardiso"

function __init__()
    check_deps()
    # try
```
Maybe remove the dead comments?
```diff
@@ -0,0 +1,195 @@
+module MadNLPTests
```
This is a good idea!
src/Interfaces/interfaces.jl (outdated)

```diff
@@ -2,6 +2,6 @@
 # Created by Sungho Shin (sungho.shin@wisc.edu)

 include("MOI_interface.jl")
-include("Plasmo_interface.jl")
+# include("Plasmo_interface.jl")
```
Remove the dead comment here as well?
```diff
@@ -1,14 +1,7 @@
-using Test, MadNLP, JuMP, Plasmo
+using Test, MadNLP, MadNLPTests, MINLPTests
```
Good idea to use MINLPTests there!
test/runtests.jl (outdated)

```julia
# diff excerpt (truncated)
end

@testset "MadNLP test" begin
    include("nlp_test.jl")
@testset "MINNLP test" begin
```
Is there a typo? MINNLP -> MINLP
Force-pushed from cc00a5e to f54bbb0 (compare)
Force-pushed from a7a684e to d035fe4 (compare)
Now all the extension packages are registered and working. One caveat: MadNLPGPU is not working with CUDA 3.3. I'm not sure exactly why, but it might be because of the new DnXsytrs routine that was recently implemented. For now, we fall back to CUDA 3.2, and this will be addressed in a separate PR.
This is a major refactoring of MadNLP.
The goal is to make the following linear solvers independent libraries:
The independent libraries will be stored in lib. Umfpack, PardisoMKL, and LapackCPU will remain in src as default solvers, but this is subject to change.
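Under the split described above, the repository tree would look roughly like this. Only the MadNLPGPU and MadNLPTests paths appear in the diffs in this conversation; the rest of the layout is a guess based on the discussion:

```
MadNLP/
├── src/                        # core + default solvers (Umfpack, PardisoMKL, LapackCPU)
│   └── Interfaces/
│       └── interfaces.jl
├── lib/                        # independent, separately registered solver packages
│   ├── MadNLPGPU/
│   │   └── src/
│   │       ├── MadNLPGPU.jl
│   │       └── lapackgpu.jl
│   └── MadNLPTests/            # shared test utilities (new in this PR)
└── test/
    └── runtests.jl
```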