# JuMP Interface (Experimental)
## JuMP to an ExaModel
We provide an experimental interface to JuMP. A JuMP model can be converted directly to an ExaModel as follows:
```julia
using ExaModels, JuMP, CUDA

N = 10
jm = Model()

@variable(jm, x[i = 1:N], start = mod(i, 2) == 1 ? -1.2 : 1.0)
@constraint(
    jm,
    s[i = 1:(N - 2)],
    3x[i+1]^3 + 2x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
    x[i]exp(x[i] - x[i+1]) - 3 == 0.0
)
@objective(jm, Min, sum(100(x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N))

em = ExaModel(jm; backend = CUDABackend())
```
```
An ExaModel{Float64, CUDA.CuArray{Float64, 1, CUDA.DeviceMemory}, ...}

  Problem name: Generic
   All variables: ████████████████████ 10     All constraints: ████████████████████ 8
            free: ████████████████████ 10                free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                fixed: ████████████████████ 8
          infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
            nnzh: (-212.73% sparsity)   172             linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
                                                     nonlinear: ████████████████████ 8
            nnzj: (  0.00% sparsity)   80
```
Note that only scalar objectives and constraints created via the `@constraint` and `@objective` API are supported; the legacy `@NLconstraint` and `@NLobjective` macros are not. We can solve the model with any of the solvers supported by ExaModels. For example, using MadNLP:
```julia
using MadNLPGPU

result = madnlp(em)
```
"Execution stats: Optimal Solution Found (tol = 1.0e-04)."
## JuMP Optimizer
Alternatively, one can use the `Optimizer` interface provided by ExaModels. This feature can be used as follows:
```julia
using ExaModels, JuMP, CUDA
using MadNLPGPU

set_optimizer(jm, () -> ExaModels.MadNLPOptimizer(CUDABackend()))
optimize!(jm)
```
```
This is MadNLP version v0.8.7, running with cuDSS v0.4.0

Number of nonzeros in constraint Jacobian............:       80
Number of nonzeros in Lagrangian Hessian.............:      172

Total number of variables............................:       10
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        0
                     variables with only upper bounds:        0
Total number of equality constraints.................:        8
Total number of inequality constraints...............:        0
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  2.0570000e+03 2.48e+01 1.00e+02  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  2.0569995e+03 2.48e+01 3.00e-03  -1.0 2.27e+00    -  1.00e+00 2.00e-07h  2
   2  1.1078126e+03 1.50e+01 2.26e-03  -1.0 2.25e+00    -  1.00e+00 1.00e+00h  1
   3  1.1173898e+02 2.13e+00 5.34e-03  -1.0 2.13e+00    -  1.00e+00 1.00e+00h  1
   4  6.4940357e+00 1.16e-01 4.63e-04  -1.0 1.79e-01    -  1.00e+00 1.00e+00h  1
   5  6.2324800e+00 1.49e-03 1.34e-04  -2.5 5.66e-02    -  1.00e+00 1.00e+00h  1
   6  6.2324586e+00 7.67e-07 4.00e-05  -5.0 1.10e-03    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 6

                                   (scaled)                 (unscaled)
Objective...............:   7.8692659507578377e-01    6.2324586330002081e+00
Dual infeasibility......:   4.0032681415010313e-05    3.1705883680688171e-04
Constraint violation....:   7.6712653553449573e-07    7.6712653553449573e-07
Complementarity.........:   2.5266981870896623e-07    2.0011449641750129e-06
Overall NLP error.......:   5.0546314917942304e-06    4.0032681415010313e-05

Number of objective function evaluations             = 8
Number of objective gradient evaluations             = 7
Number of constraint evaluations                     = 8
Number of constraint Jacobian evaluations            = 7
Number of Lagrangian Hessian evaluations             = 6
Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.104
Total wall-clock secs in linear solver                      =  0.084
Total wall-clock secs in NLP function evaluations           =  0.016
Total wall-clock secs                                       =  0.204

EXIT: Optimal Solution Found (tol = 1.0e-04).
```
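Since the solve went through JuMP's optimizer interface, the results can be queried with JuMP's standard API (assuming the `Optimizer` wrapper implements the usual MOI result attributes):

```julia
termination_status(jm)  # MOI termination status reported back through the wrapper
objective_value(jm)     # optimal objective value
value.(x)               # optimal values of the decision variables
```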
Again, only scalar objectives and constraints created via the `@constraint` and `@objective` API are supported; the legacy `@NLconstraint` and `@NLobjective` macros are not.
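For reference, the same conversion should also work without a GPU. A minimal sketch, assuming the `backend` keyword can be omitted so that the ExaModel is built with plain host arrays and solved with the CPU version of MadNLP:

```julia
using ExaModels, JuMP, MadNLP

# Sketch: assumes omitting `backend` builds the ExaModel with host Arrays
# (CPU fallback; adjust if your setup requires an explicit backend).
em_cpu = ExaModel(jm)
result_cpu = madnlp(em_cpu)
```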
---

*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*