
snoop precompile and package load times #122

Merged: goulart-paul merged 9 commits into dev/v0.5.0 from pg/precompile on Apr 23, 2023

Conversation

goulart-paul (Member)

Improves TTFX by:

  • Removing or slimming down some unneeded package dependencies. In particular:
    -- Pkg.jl was only being used to get the version number from Project.toml. This is now done with the lighter-weight TOML.jl (see the sketch after this list).
    -- DataFrames / PrettyTables were only used to print the solver settings; these were removed in favor of a handwritten Base.show method that produces the same output.
    -- Statistics was removed, since it was only used for mean.
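
A minimal sketch of the TOML.jl approach described above; the helper name and file layout are assumptions for illustration, not the PR's actual code:

using TOML

# Hypothetical helper: read the package version from Project.toml with the TOML
# standard library instead of depending on Pkg.jl.
function _package_version()
    # assumes the conventional layout where this file lives in src/
    project_file = joinpath(dirname(@__DIR__), "Project.toml")
    return VersionNumber(TOML.parsefile(project_file)["version"])
end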

Load time for the package is now quite a bit shorter, particularly if MathOptInterface is already loaded. The only remaining slow-loading dependency is StaticArrays.jl, which we require for exponential and power cone problems.

TTFX for MOI / JuMP is much better but still a few seconds. I have, I think, done more or less what @odow suggested in this discussion, but using JuMP still incurs a lot of compilation time, particularly around the JuMP optimize! call.

Example:

julia> using MathOptInterface

julia> @time using Clarabel
  1.947500 seconds (3.20 M allocations: 229.875 MiB, 5.55% gc time, 0.26% compilation time)

julia> @time begin
                  A = P = sparse(I(1).*1.);
                  c = b = [1.];
                  cones = Clarabel.SupportedCone[Clarabel.ZeroConeT(1)];
                  settings = Clarabel.Settings(verbose=false)
                  solver   = Clarabel.Solver(P,c,A,b,cones,settings)
                  Clarabel.solve!(solver);
          end
  0.019641 seconds (2.05 k allocations: 108.984 KiB, 91.02% compilation time)
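
For context, the "snoop precompile" in the title refers to capturing a workload like the one above so that it is compiled when the package itself is precompiled. A minimal sketch with SnoopPrecompile, reusing the same tiny problem (the workload actually shipped in this PR may differ), placed inside the package module:

using SnoopPrecompile, SparseArrays, LinearAlgebra

@precompile_setup begin
    # tiny one-variable equality-constrained problem, as in the timing above
    P = A = sparse(I(1) .* 1.0)
    c = b = [1.0]
    cones = Clarabel.SupportedCone[Clarabel.ZeroConeT(1)]
    @precompile_all_calls begin
        # calls executed here are compiled during package precompilation
        settings = Clarabel.Settings(verbose = false)
        solver = Clarabel.Solver(P, c, A, b, cones, settings)
        Clarabel.solve!(solver)
    end
end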

One strange thing is that loading JuMP seems to partly invalidate the precompilation relating to MOI. The MathOptInterface compilation is done via the function __precompile_moi(). Starting from a fresh Julia session, this happens:

julia> using Clarabel, MathOptInterface

julia> @time Clarabel.__precompile_moi();
  0.231454 seconds (106.61 k allocations: 11.682 MiB, 13.17% gc time, 40.09% compilation time)

julia> using JuMP

julia> @time Clarabel.__precompile_moi();
  0.617509 seconds (102.33 k allocations: 5.533 MiB, 99.68% compilation time: 7% of which was recompilation)

In other words, the presence of JuMP in the environment seems to force substantial recompilation even if it is not used.
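
For reference, a sketch of the kind of MOI-level workload that __precompile_moi might exercise; the body below is illustrative (the caching and bridging layers roughly mirror what JuMP sets up around a solver), not the actual function:

import MathOptInterface as MOI

function _moi_precompile_workload()
    # cache + bridge layers around the raw optimizer, roughly as JuMP does
    cache = MOI.Utilities.CachingOptimizer(
        MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
        Clarabel.Optimizer(),
    )
    model = MOI.Bridges.full_bridge_optimizer(cache, Float64)
    MOI.set(model, MOI.Silent(), true)

    # min x1 + x2  s.t.  x1 + x2 == 1
    x = MOI.add_variables(model, 2)
    f = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, x), 0.0)
    MOI.add_constraint(model, f, MOI.EqualTo(1.0))
    MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
    MOI.set(model, MOI.ObjectiveFunction{typeof(f)}(), f)
    MOI.optimize!(model)
    return nothing
end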

If I then actually run a small JuMP example, I get something like this:

@time begin 
           model = JuMP.Model(Clarabel.Optimizer)
           set_optimizer_attribute(model, "verbose",false)  
           @variable(model, x[1:2])
           @objective(model, Min, x[1] + 2*x[2]^2)
           @constraint(model,c1,x[1] >= 1.)
           @constraint(model,c2, x[1] + x[2] == 1.)
           optimize!(model)
       end
 1.839821 seconds (2.22 M allocations: 144.478 MiB, 3.76% gc time, 95.52% compilation time: 34% of which was recompilation)

About 1/3 of that time seems to come from the @variable macro, and most of the rest from optimize!(model).

odow (Contributor) commented Mar 13, 2023

Yes, we're still missing a step in the JuMP->Solver compilation chain, because solvers don't have JuMP as a dependency and JuMP doesn't have the solvers as a dependency.

If you made a ClarabelJuMP.jl package, then you could eliminate the compilation latency.
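
To make that concrete, a hypothetical glue package along those lines could depend on both JuMP and Clarabel and precompile the exact example from this thread; the package name and structure below are illustrative only:

module ClarabelJuMP

# Because this (hypothetical) package depends on both JuMP and Clarabel, a
# precompile workload here can cover the JuMP -> MOI -> Clarabel call chain.
using JuMP, Clarabel, SnoopPrecompile

@precompile_all_calls begin
    model = Model(Clarabel.Optimizer)
    set_optimizer_attribute(model, "verbose", false)
    @variable(model, x[1:2])
    @objective(model, Min, x[1] + 2 * x[2]^2)
    @constraint(model, c1, x[1] >= 1.0)
    @constraint(model, c2, x[1] + x[2] == 1.0)
    optimize!(model)
end

end # module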

codecov-commenter commented Mar 13, 2023

Codecov Report

Patch coverage: 67.85%; project coverage change: -1.50% ⚠️

Comparison: base (50b0c6e) 82.22% vs. head (866bf12) 80.73%.


Additional details and impacted files
@@            Coverage Diff             @@
##             main     #122      +/-   ##
==========================================
- Coverage   82.22%   80.73%   -1.50%     
==========================================
  Files          37       39       +2     
  Lines        2701     2839     +138     
==========================================
+ Hits         2221     2292      +71     
- Misses        480      547      +67     
Impacted Files Coverage Δ
src/cones/coneops_defaults.jl 22.72% <ø> (ø)
src/equilibration.jl 97.95% <ø> (ø)
src/precompile.jl 0.00% <0.00%> (ø)
src/settings.jl 4.44% <0.00%> (-3.56%) ⬇️
src/variables.jl 77.64% <ø> (+11.64%) ⬆️
src/version.jl 28.57% <0.00%> (ø)
src/Clarabel.jl 55.55% <42.85%> (-44.45%) ⬇️
src/types.jl 85.71% <70.00%> (-0.65%) ⬇️
src/MOI_wrapper/MOI_wrapper.jl 84.41% <80.00%> (-0.56%) ⬇️
src/presolver.jl 94.73% <94.73%> (ø)
... and 12 more


@goulart-paul goulart-paul changed the base branch from main to dev/v0.5.0 April 23, 2023 13:07
@goulart-paul goulart-paul merged commit 5473645 into dev/v0.5.0 Apr 23, 2023
@goulart-paul goulart-paul deleted the pg/precompile branch April 23, 2023 13:25
goulart-paul added a commit that referenced this pull request Apr 25, 2023
* snoop precompile and package load times (#122)

* Pg/sdp rust port (#125)

* Support for PSDTriangleConeT with dimension zero (#126)

* Support BigFloat SDPs (#127)