Issue with new JuMP operator for problem HS87 #195

Closed
tmigot opened this issue Sep 1, 2024 · 13 comments · Fixed by #197
Labels
bug Something isn't working

Comments

@tmigot
Member

tmigot commented Sep 1, 2024

I ran into the following error when updating OptimizationProblems.jl to more recent versions of JuMP:

using JuMP, NLPModelsJuMP
function hs87(args...; kwargs...)
  nlp = Model()
  x0 = [390, 1000, 419.5, 340.5, 198.175, 0.5]
  lvar = [0, 0, 340, 340, -1000, 0]
  uvar = [400, 1000, 420, 420, 10000, 0.5236]
  @variable(nlp, lvar[i] <= x[i = 1:6] <= uvar[i], start = x0[i])

  a = 131078 // 1000
  b = 148577 // 100000
  ci = 90798 // 100000
  d = cos(147588 // 100000)
  e = sin(147588 // 100000)

  @constraint(nlp, 300 - x[1] - 1 / a * x[3] * x[4] * cos(b - x[6]) + ci / a * d * x[3] == 0)
  @constraint(nlp, -x[2] - 1 / a * x[3] * x[4] * cos(b + x[6]) + ci / a * d * x[4]^2 == 0)
  @constraint(nlp, -x[5] - 1 / a * x[3] * x[4] * cos(b + x[6]) + ci / a * e * x[4]^2 == 0)
  @constraint(nlp, 200 - 1 / a * x[3] * x[4] * sin(b - x[6]) + ci / a * e * x[3]^2 == 0)

  function f1(t)
    return if 0 <= t <= 300
      30 * t
    elseif 300 <= t <= 400
      31 * t
    else
      eltype(t)(Inf)
    end
  end

  function f2(t)
    return if 0 <= t <= 100
      28 * t
    elseif 100 <= t <= 200
      29 * t
    elseif 200 <= t <= 1000
      30 * t
    else
      eltype(t)(Inf)
    end
  end
  @operator(nlp, op_f1, 1, f1)
  @expression(nlp, op_f1)
  @operator(nlp, op_f2, 1, f2)
  @expression(nlp, op_f2)
  @objective(nlp, Min, op_f1(x[1]) + op_f2(x[2]))

  return nlp
end
nlp_jump = MathOptNLPModel(hs87())

which returns the following error:

ERROR: MathOptInterface.UnsupportedNonlinearOperator: The nonlinear operator `:op_f1` is not supported by the model.
Stacktrace:
  [1] _get_node_type(data::MathOptInterface.Nonlinear.Model, x::MathOptInterface.ScalarNonlinearFunction)
    @ MathOptInterface.Nonlinear .julia\packages\MathOptInterface\jqDoD\src\Nonlinear\parse.jl:76
  [2] _parse_without_recursion_inner(stack::Vector{…}, data::MathOptInterface.Nonlinear.Model, expr::MathOptInterface.Nonlinear.Expression, x::MathOptInterface.ScalarNonlinearFunction, parent::Int64)
    @ MathOptInterface.Nonlinear .julia\packages\MathOptInterface\jqDoD\src\Nonlinear\parse.jl:80
  [3]
    @ MathOptInterface.Nonlinear .julia\packages\MathOptInterface\jqDoD\src\Nonlinear\parse.jl:49
  [4] parse_expression
    @ .julia\packages\MathOptInterface\jqDoD\src\Nonlinear\parse.jl:14 [inlined]
  [5] set_objective(model::MathOptInterface.Nonlinear.Model, obj::MathOptInterface.ScalarNonlinearFunction)
    @ MathOptInterface.Nonlinear .julia\packages\MathOptInterface\jqDoD\src\Nonlinear\model.jl:63
  [6] _nlp_model(model::MathOptInterface.Utilities.CachingOptimizer{…})
    @ NLPModelsJuMP .julia\packages\NLPModelsJuMP\9DyFu\src\utils.jl:463
  [7] _nlp_block(model::MathOptInterface.Utilities.CachingOptimizer{…})
    @ NLPModelsJuMP .julia\packages\NLPModelsJuMP\9DyFu\src\utils.jl:472
  [8] nlp_model(moimodel::MathOptInterface.Utilities.CachingOptimizer{…}; hessian::Bool, name::String)
    @ NLPModelsJuMP .julia\packages\NLPModelsJuMP\9DyFu\src\moi_nlp_model.jl:35
  [9] nlp_model
    @ .julia\packages\NLPModelsJuMP\9DyFu\src\moi_nlp_model.jl:30 [inlined]
 [10] #MathOptNLPModel#21
    @ .julia\packages\NLPModelsJuMP\9DyFu\src\moi_nlp_model.jl:27 [inlined]
 [11] MathOptNLPModel(moimodel::MathOptInterface.Utilities.CachingOptimizer{…})
    @ NLPModelsJuMP .julia\packages\NLPModelsJuMP\9DyFu\src\moi_nlp_model.jl:26
 [12] MathOptNLPModel(jmodel::Model; kws::@Kwargs{})
    @ NLPModelsJuMP .julia\packages\NLPModelsJuMP\9DyFu\src\moi_nlp_model.jl:23
 [13] MathOptNLPModel(jmodel::Model)
    @ NLPModelsJuMP .julia\packages\NLPModelsJuMP\9DyFu\src\moi_nlp_model.jl:21
 [14] top-level scope
    @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

Any idea?

@amontoison
Member

@operator is the new @register, right?
@blegat probably knows how to fix this issue.
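(For context, a minimal sketch contrasting the legacy `register` API with the new `@operator` macro; the model and variable names below are only illustrative.)

using JuMP

model = Model()
@variable(model, x)
f(t) = t^2 + 1

# Legacy nonlinear interface:
# register(model, :op_f, 1, f; autodiff = true)
# @NLobjective(model, Min, op_f(x))

# Current interface:
@operator(model, op_f, 1, f)
@objective(model, Min, op_f(x))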

@blegat
Contributor

blegat commented Sep 2, 2024

We need to get the list of `UserDefinedFunction`s and set them on the nlp_model, as is done here:
https://github.com/jump-dev/Ipopt.jl/blob/4c156461ef1fda3c9f015520197afda4e8ca3e26/src/MOI_wrapper.jl#L606-L611
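(Roughly, the linked Ipopt pattern amounts to the sketch below; `moimodel` and `nlp_model` are placeholders for the source MOI model and the target `MOI.Nonlinear.Model` built in `_nlp_model`.)

# Copy every user-defined operator from the MOI model into the Nonlinear model.
for attr in MOI.get(moimodel, MOI.ListOfModelAttributesSet())
    if attr isa MOI.UserDefinedFunction
        # The attribute value is a tuple of callables: (f,), (f, ∇f), or (f, ∇f, ∇²f).
        args = MOI.get(moimodel, attr)
        MOI.Nonlinear.register_operator(nlp_model, attr.name, attr.arity, args...)
    end
end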

@tmigot
Member Author

tmigot commented Sep 2, 2024

Thanks @blegat, so to clarify, we should add something like this

attr = MOI.ListOfSupportedNonlinearOperators()
MOI.Nonlinear.register_operator(nlp_model, MOI.get(model, attr)) 

in the appropriate place, is that correct?

@amontoison
Member

I'm wondering whether we have a bug in MOI:

using JuMP, MathOptInterface
# hs87(...) is defined exactly as in the original post above.
nlp = hs87()
moi_backend = backend(nlp)

MOI.get(moi_backend, MOI.ListOfSupportedNonlinearOperators())
ERROR: MathOptInterface.GetAttributeNotAllowed{MathOptInterface.ListOfSupportedNonlinearOperators}: Getting attribute MathOptInterface.ListOfSupportedNonlinearOperators() cannot be performed: Cannot query MathOptInterface.ListOfSupportedNonlinearOperators() from `Utilities.CachingOptimizer` because no optimizer is attached (the state is `NO_OPTIMIZER`). You may want to use a `CachingOptimizer` in `AUTOMATIC` mode or you may need to call `reset_optimizer` before doing this operation if the `CachingOptimizer` is in `MANUAL` mode.
julia> moi_backend.mode
AUTOMATIC::CachingOptimizerMode = 1

@amontoison
Member

amontoison commented Sep 4, 2024

The sentence "You may want to use a `CachingOptimizer` in `AUTOMATIC` mode or you may need to call `reset_optimizer` before doing this operation if the `CachingOptimizer` is in `MANUAL` mode." seems wrong in that case.

@amontoison
Member

@odow May I ask if I'm doing something wrong in the code snippet above? #195 (comment)

@odow
Contributor

odow commented Sep 6, 2024

We can't get the list of supported operators because you haven't selected a solver.
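(For illustration — assuming Ipopt.jl is installed and implements this attribute — the query does work once a concrete optimizer exists:)

using Ipopt
import MathOptInterface as MOI

# With a concrete optimizer the attribute is available; it returns a Vector{Symbol}
# of supported operator names.
MOI.get(Ipopt.Optimizer(), MOI.ListOfSupportedNonlinearOperators())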

@odow
Contributor

odow commented Sep 6, 2024

We could perhaps return something meaningful for this case where you have a CachingOptimizer with no optimizer attached.

@amontoison
Member

amontoison commented Sep 6, 2024

Do you have a trick for us so that we can extract the operators and assume they are supported?
I don't know how much it differs from @register, so I may be completely wrong.
I just wanted to do this kind of thing: https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/pull/197/files#diff-47c27891e951c8cd946b850dc2df31082624afdf57446c21cb6992f5f4b74aa2R454-R459

Benoît implemented an optimizer, but the JuMP models in OptimizationProblems.jl don't have an optimizer attached.

@odow
Contributor

odow commented Sep 6, 2024

I just wanted to do this kind of thing: https://github.com/JuliaSmoothOptimizers/NLPModelsJuMP.jl/pull/197/files#diff-47c27891e951c8cd946b850dc2df31082624afdf57446c21cb6992f5f4b74aa2R454-R459

That code is wrong. It returns a Vector{Symbol}. I should add an example to the docstring:
https://github.com/jump-dev/MathOptInterface.jl/blob/5f5acaaffb3d0703f21d647605c87cf10a2796d0/src/attributes.jl#L2050-L2056

You need to implement support for MOI.UserDefinedFunction. Here's Ipopt:
https://github.com/jump-dev/Ipopt.jl/blob/4c156461ef1fda3c9f015520197afda4e8ca3e26/src/MOI_wrapper.jl#L600-L620
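(To spell out the difference: that attribute only lists operator names, whereas `MOI.UserDefinedFunction` retrieves the registered callables. A sketch, using the `nlp` model from the snippet above:)

# ListOfSupportedNonlinearOperators yields names (a Vector{Symbol}), not functions.
# The callables behind an @operator are recovered via MOI.UserDefinedFunction,
# queried with the operator's name and arity:
attr = MOI.UserDefinedFunction(:op_f1, 1)
funcs = MOI.get(backend(nlp), attr)  # a tuple such as (f1,) or (f1, ∇f1, ∇²f1)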

It might be faster if I just take a look and make a PR 😄

@odow
Contributor

odow commented Sep 6, 2024

This should be enough to point you in the right direction: #197 (comment)

I didn't test anything other than the hs87 example.

@amontoison
Member

Thanks a lot @odow!!! 😃
