The `NLPSaUT` module constructs a `JuMP` model for a generic nonlinear program (NLP).
The expected use case is solving a nonconvex NLP whose derivatives are available (analytically or numerically) with gradient-based algorithms such as Ipopt or SNOPT.
The user is expected to provide a "fitness function" (pygmo-style), which evaluates the objective, equality, and inequality constraints. Below is an example:
```julia
function f_fitness(x::T...) where {T<:Real}
    # compute objective
    f = x[1]^2 - x[2]
    # equality constraints (feasible when h .== 0)
    h = zeros(T, 1)
    h[1] = x[1]^3 + x[2] - 2.4
    # inequality constraints (feasible when g .<= 0)
    g = zeros(T, 2)
    g[1] = x[1] + x[2] - 5       # y <= -x + 5
    g[2] = -0.3x[1] + x[2] - 2   # y <= 0.3x + 2
    return [f; h; g]
end
```
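As a quick sanity check, the fitness function can be called directly; the point `x0` below is purely illustrative, and the feasibility conventions (`h .== 0`, `g .<= 0`) follow the pygmo style noted above:

```julia
x0 = (1.0, 1.4)    # illustrative guess, not from the docs
out = f_fitness(x0...)
f = out[1]         # objective value
h = out[2:2]       # equality constraints, feasible when h .== 0
g = out[3:4]       # inequality constraints, feasible when all(g .<= 0)
```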
Derivatives of `f_fitness` are taken using `ForwardDiff.jl` (the default `JuMP` behavior according to its docs); as such, `f_fitness` should be written in a way that is compatible with `ForwardDiff.jl` (read here for why it is `ForwardDiff`, not `ReverseDiff`). For reference, here's the JuMP docs page on common mistakes when using `ForwardDiff.jl`.
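A common pitfall covered on that page is allocating work arrays with a hard-coded `Float64` element type, which breaks dual-number propagation; below is a minimal sketch of the safe pattern (the function and values are illustrative):

```julia
using ForwardDiff

# Buffers must inherit the input element type T so that ForwardDiff's
# dual numbers can flow through; zeros(1) would force Float64 and fail.
function fitness_ad_safe(x::T...) where {T<:Real}
    out = zeros(T, 1)
    out[1] = x[1]^2 + sin(x[2])
    return out
end

# Jacobian of the vector output with respect to x
J = ForwardDiff.jacobian(x -> fitness_ad_safe(x...), [1.0, 2.0])
```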
The model constructed by `NLPSaUT` utilizes memoization to economize on the fitness evaluation (see the JuMP Tips and tricks on NLP).
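For reference, the memoization pattern from that JuMP docs page looks roughly like the sketch below: the last input is cached so that one call to the vector-valued fitness function serves the objective and all constraints, with separate caches for `Float64` and dual-number inputs:

```julia
# Memoization pattern adapted from the JuMP "Tips and tricks" docs:
# wraps a vector-valued function into n_outputs scalar functions that
# share a single evaluation of foo per distinct input x.
function memoize(foo::Function, n_outputs::Int)
    last_x, last_f = nothing, nothing
    last_dx, last_dfdx = nothing, nothing
    function foo_i(i::Int, x::T...) where {T<:Real}
        if T == Float64
            if x !== last_x
                last_x, last_f = x, foo(x...)
            end
            return last_f[i]::T
        else
            if x !== last_dx
                last_dx, last_dfdx = x, foo(x...)
            end
            return last_dfdx[i]::T
        end
    end
    return [(x...) -> foo_i(i, x...) for i in 1:n_outputs]
end
```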
- `git clone` this repository
- start the Julia REPL
- activate & instantiate the package (first time only):

```julia-repl
pkg> activate .
julia> using Pkg          # first time only
julia> Pkg.instantiate()  # first time only
```

- run tests:

```julia-repl
(NLPSaUT) pkg> test
```
- To use with SNOPT, it's probably better to go through `GAMS.jl` rather than to use `SNOPT7.jl` directly (installing `SNOPT7.jl` currently errors on Julia v1.10); see the sketch after this list.
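As a rough sketch (not from this repository's docs), pointing a JuMP model at SNOPT through `GAMS.jl` might look like the following; it assumes a local GAMS installation licensed for SNOPT, and the `"solver"` option string is an assumption to verify against the `GAMS.jl` README:

```julia
using JuMP, GAMS

# Assumes GAMS is installed locally with a SNOPT license; the "solver"
# attribute name is an assumption -- check the GAMS.jl README.
model = Model(GAMS.Optimizer)
set_optimizer_attribute(model, "solver", "snopt")
```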
For examples, see the `examples` directory.
- Development
  - Finite difference gradient option
  - Analytical gradient option
- Examples/documentation
  - Simple example with Ipopt
  - Simple example with SNOPT via GAMS
  - Example with `ODEProblem`
  - Demonstration of memoization benefits
  - Parallelized fitness function