
Making linear solvers independent libraries #40

Merged · 17 commits · Jul 2, 2021

Conversation

sshin23
Member

@sshin23 sshin23 commented Jun 18, 2021

This is a major refactoring of MadNLP.

The goal is to make the following linear solvers independent libraries:

  • HSL (ma27, ma57, ma77, ma86, ma97)
  • Mumps
  • Pardiso
  • Schur/Schwarz
  • LapackGPU

The independent libraries will be stored in lib.

Umfpack, PardisoMKL, and LapackCPU will remain in src as default solvers, but this is subject to change.

@sshin23 sshin23 linked an issue Jun 18, 2021 that may be closed by this pull request
@codecov

codecov bot commented Jun 23, 2021

Codecov Report

Merging #40 (b40905b) into master (4788c48) will decrease coverage by 3.04%.
The diff coverage is 88.88%.


@@            Coverage Diff             @@
##           master      #40      +/-   ##
==========================================
- Coverage   91.11%   88.07%   -3.05%     
==========================================
  Files          26       28       +2     
  Lines        2950     2959       +9     
==========================================
- Hits         2688     2606      -82     
- Misses        262      353      +91     
Impacted Files                          Coverage Δ
lib/MadNLPGraph/src/graphtools.jl       100.00% <ø> (ø)
lib/MadNLPGraph/src/schur.jl             80.23% <ø> (ø)
lib/MadNLPGraph/src/schwarz.jl           76.27% <ø> (ø)
lib/MadNLPHSL/src/ma27.jl                76.19% <ø> (ø)
lib/MadNLPHSL/src/ma57.jl                75.40% <ø> (ø)
lib/MadNLPHSL/src/ma77.jl                88.15% <ø> (ø)
lib/MadNLPHSL/src/ma86.jl                84.61% <ø> (ø)
lib/MadNLPHSL/src/ma97.jl                83.67% <ø> (ø)
lib/MadNLPHSL/src/mc68.jl               100.00% <ø> (ø)
lib/MadNLPKrylov/src/MadNLPKrylov.jl      0.00% <ø> (ø)
... and 31 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@sshin23
Member Author

sshin23 commented Jun 24, 2021

@frapac Refactoring seems almost done! In this PR the following changes are made:

  • HSL, Mumps, LapackGPU, Pardiso, Schwarz/Schur, and Krylov are now stored in separate packages. Each package is not necessarily a single linear solver; I classified them by their practical usage (e.g., MadNLPGraph stores the Plasmo interface together with the graph-based solvers MadNLPSchwarz and MadNLPSchur). The core MadNLP now has very few dependencies.
  • Each package has its own test suite, so the user can run Pkg.test(package) locally. The CI tests all the packages at once; the testing script is stored in .ci.
  • The new packages are stored in lib, as we discussed before. Later, these packages can be registered in the General Julia registry.
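As a sketch, the per-package test workflow described above can be exercised locally with the standard Pkg API (the path is illustrative and assumes a checkout of the MadNLP repository root):

```julia
using Pkg

# Make the subpackage from the monorepo's lib/ directory available locally.
Pkg.develop(path="lib/MadNLPHSL")

# Run that package's own test suite, as described above.
Pkg.test("MadNLPHSL")
```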

Please feel free to check this branch and let me know what you think.

Collaborator

@frapac frapac left a comment

This is ❤️
The loading time of MadNLP has been reduced a lot with this PR. I am a fan of the new structure.
I made a few comments in the PR. In short, I think we need to address:

  1. Do we really need a submodule in the linear algebra wrapper, e.g., defining both MadNLPGPU and MadNLPLapackGPU?
  2. How and when do we register the subpackages? That would allow us to simplify the CI.
  3. I would also suggest rebasing the git commits, to avoid the long history associated with the different CI attempts.

.ci/ci.jl — outdated, resolved
README.md — outdated, resolved

include("lapackgpu.jl")

export MadNLPLapackGPU
Collaborator

I don't think we need to define another submodule there: MadNLPLapackGPU is redundant with the parent module MadNLPGPU, which is right now an empty shell.

Member Author

The intention of making an empty shell is to allow extending the MadNLPGPU package further in the future. I was thinking that we may want to try using cuSOLVER's incomplete LU along with Krylov. This MadNLPGPU module may also host NonlinearProgram data structures for GPU. With this approach, we don't need too many glue packages.
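The "empty shell" pattern being discussed can be sketched as follows (toy names, not MadNLP's actual code): the parent package only wires up submodules, leaving room to add further backends later without creating extra glue packages.

```julia
module ToyMadNLPGPU

# Existing backend, wrapped as a submodule of the shell.
module LapackGPU
# Placeholder for the dense GPU solver wrapper.
solve(x) = 2 .* x
end # module LapackGPU

# Future backends (e.g. a cuSOLVER incomplete-LU + Krylov solver, or
# GPU-resident problem data structures) would be added here as further
# submodules, without new top-level packages.

end # module ToyMadNLPGPU
```

Usage: `ToyMadNLPGPU.LapackGPU.solve([1.0, 2.0])` returns `[2.0, 4.0]`.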

Collaborator

I get it, it makes total sense indeed

module MadNLPGPU

import CUDA: CUBLAS, CUSOLVER, CuVector, CuMatrix, toolkit_version, R_64F
import MadNLP:
Collaborator

These imports are quite complicated to read, in my opinion.
How about importing a LinearSolvers submodule directly?

Member Author

Currently, the empty linear solver is not declared as a submodule (see here). Would there be a benefit in making it a module? We would still need to import LinearSolvers: introduce, factorize!, solve!, ...

Collaborator

The advantage is mostly in clarifying the code: with LinearSolvers as a submodule, we could import only the submodule and then overload the different functions directly. Then we know immediately where the functions we are overloading come from (otherwise, we have to check all the imports at the beginning of the file to understand what exactly we are doing).
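The two import styles under discussion can be contrasted with a toy example (names are illustrative, not MadNLP's actual API):

```julia
# A toy stand-in for a solver-interface package with a LinearSolvers
# submodule whose functions downstream packages extend.
module FakeMadNLP
module LinearSolvers
function factorize! end
function inertia end
end
end

# Style 1: import individual names. The overload below looks like a local
# definition; the reader must scan the import list to see it extends
# FakeMadNLP.LinearSolvers.factorize!.
import .FakeMadNLP.LinearSolvers: factorize!
factorize!(x::Vector{Float64}) = (x .*= 2; x)

# Style 2: import only the submodule and overload with qualified names,
# so each definition shows exactly where the function comes from.
import .FakeMadNLP: LinearSolvers
LinearSolvers.inertia(x::Vector{Float64}) =
    (count(>(0), x), count(<(0), x), count(==(0.0), x))
```

With style 2, `LinearSolvers.inertia([1.0, -1.0, 0.0])` returns `(1, 1, 1)` and the qualified definition makes the extension point explicit at a glance.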

Member Author

Sure, that makes sense. I'll make the changes.

@@ -125,7 +125,7 @@ if toolkit_version() >= v"11.3.1"
end
else
is_inertia(M::Solver) = M.opt.lapackgpu_algorithm == BUNCHKAUFMAN
-inertia(M::Solver) = LapackCPU.inertia(M.etc[:fact_cpu],M.etc[:ipiv_cpu],M.etc[:info_cpu][])
+inertia(M::Solver) = MadNLPLapackCPU.inertia(M.etc[:fact_cpu],M.etc[:ipiv_cpu],M.etc[:info_cpu][])
Collaborator

Ideally, we should overload directly the templates implemented in LinearSolvers. E.g.

LinearSolvers.inertia(M::Solver) = ...

@@ -137,8 +137,13 @@ end

introduce(::Solver)="pardiso"

function __init__()
check_deps()
# try
Collaborator

Maybe remove the dead comments?

@@ -0,0 +1,195 @@
module MadNLPTests
Collaborator

This is a good idea!

@@ -2,6 +2,6 @@
# Created by Sungho Shin (sungho.shin@wisc.edu)

include("MOI_interface.jl")
-include("Plasmo_interface.jl")
+# include("Plasmo_interface.jl")
Collaborator

Remove dead comment there?

@@ -1,14 +1,7 @@
-using Test, MadNLP, JuMP, Plasmo
+using Test, MadNLP, MadNLPTests, MINLPTests
Collaborator

Good idea to use MINLPTests there!

test/runtests.jl Outdated
end

@testset "MadNLP test" begin
include("nlp_test.jl")
@testset "MINNLP test" begin
Collaborator

Is there a typo? MINNLP -> MINLP

@sshin23 sshin23 force-pushed the ss/refactor_linear_solvers branch 2 times, most recently from cc00a5e to f54bbb0 on June 27, 2021 02:29
@sshin23 sshin23 force-pushed the ss/refactor_linear_solvers branch from a7a684e to d035fe4 on July 2, 2021 15:15
@sshin23
Member Author

sshin23 commented Jul 2, 2021

Now all the extension packages are registered and working.

One thing, though: MadNLPGPU is not working with CUDA 3.3. I'm not sure exactly why, but it might be because of the new DnXsytrs that was recently implemented. For now we fall back to CUDA 3.2; this will be addressed in a separate PR.

@sshin23 sshin23 merged commit 96a2fe6 into master Jul 2, 2021
@frapac frapac deleted the ss/refactor_linear_solvers branch March 8, 2022 17:22

Successfully merging this pull request may close these issues.

Restructure dependencies in MadNLP