Full-space OPF
Argos is tightly integrated with the nonlinear interior-point solver MadNLP. By default, Argos relies on MadNLP to solve OPF problems, but other optimization solvers can be called through the NLPModels or MOI interfaces.
We first detail how to solve the OPF in the full space. We start by instantiating a FullSpaceEvaluator:
datafile = joinpath(INSTANCES_DIR, "case9.m")
flp = Argos.FullSpaceEvaluator(datafile)
A FullSpaceEvaluator object
* device: KernelAbstractions.CPU(false)
* #vars: 19
* #cons: 36
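Before passing the evaluator to an optimization solver, one can probe it directly. As a sketch of Argos' evaluator API (the callback names `n_variables`, `n_constraints`, `initial`, `update!`, and `objective` are assumptions based on Argos' Evaluators interface and may differ across versions):

```julia
# Query the problem's dimensions through the evaluator API.
n = Argos.n_variables(flp)    # number of optimization variables
m = Argos.n_constraints(flp)  # number of constraints

# Evaluate the objective at the initial point. update! refreshes
# the internal stack, and must be called before any other callback.
x0 = Argos.initial(flp)
Argos.update!(flp, x0)
c0 = Argos.objective(flp, x0)
```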
Instantiating MadNLP
By default, MadNLP uses NLPModels to represent the nonlinear model internally. Hence, one has to convert the FullSpaceEvaluator to an OPFModel:
model = Argos.OPFModel(flp)
Argos.OPFModel{Float64, Vector{Float64}, Argos.FullSpaceEvaluator{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}, ExaPF.Jacobian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}, ExaPF.FullHessian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}}}
Problem name: Generic
All variables: ████████████████████ 19 All constraints: ████████████████████ 36
free: █████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 8 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 18
low/upp: ████████████⋅⋅⋅⋅⋅⋅⋅⋅ 11 low/upp: ███⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 4
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 14
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: ( 67.89% sparsity) 61 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ████████████████████ 36
nnzj: ( 74.27% sparsity) 176
Counters:
obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 grad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 cons: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
cons_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
Then, it remains to build a new MadNLPSolver instance attached to model:
using MadNLP
solver = MadNLP.MadNLPSolver(model)
Interior point solver
number of variables......................: 19
number of constraints....................: 36
number of nonzeros in lagrangian hessian.: 61
number of nonzeros in constraint jacobian: 176
status...................................: INITIAL
All that remains is to solve the OPF with MadNLP:
julia> stats = MadNLP.solve!(solver)
This is MadNLP version v0.8.5, running with Lapack-CPU (BUNCHKAUFMAN)

Number of nonzeros in constraint Jacobian............:      176
Number of nonzeros in Lagrangian Hessian.............:       61

Total number of variables............................:       19
                     variables with only lower bounds:        0
                variables with lower and upper bounds:       11
                     variables with only upper bounds:        0
Total number of equality constraints.................:       14
Total number of inequality constraints...............:       22
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        4
        inequality constraints with only upper bounds:       18

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  4.5090275e+03 1.63e+00 1.36e+01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.6815289e+03 2.69e-01 9.94e+01  -1.0 3.30e-01    -  4.12e-02 1.00e+00H  1
   2  5.3417452e+03 2.18e-01 1.07e+01  -1.0 5.93e-01    -  8.78e-01 1.00e+00h  1
   3  5.3102810e+03 1.26e-02 4.95e-01  -1.0 2.27e-01    -  1.00e+00 1.00e+00h  1
   4  5.3082951e+03 4.31e-04 1.11e-01  -1.7 1.72e-02    -  1.00e+00 1.00e+00h  1
   5  5.2925234e+03 1.17e-02 1.79e+00  -2.5 5.22e-02    -  8.60e-01 1.00e+00h  1
   6  5.2947390e+03 3.20e-03 5.46e-01  -2.5 3.46e-02    -  1.00e+00 1.00e+00h  1
   7  5.2979077e+03 6.94e-05 8.13e-04  -2.5 7.49e-03    -  1.00e+00 1.00e+00h  1
   8  5.2965361e+03 2.43e-04 6.35e-02  -5.7 1.64e-02    -  9.42e-01 7.25e-01h  1
   9  5.2965861e+03 2.52e-04 6.98e-02  -5.7 2.18e-02    -  9.13e-01 7.12e-01h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  5.2966784e+03 7.01e-05 3.39e-02  -5.7 8.61e-03    -  8.89e-01 1.00e+00h  1
  11  5.2966860e+03 7.29e-06 2.10e-04  -5.7 2.78e-03    -  1.00e+00 1.00e+00h  1
  12  5.2966867e+03 2.58e-07 7.50e-06  -5.7 5.23e-04    -  1.00e+00 1.00e+00h  1
  13  5.2966862e+03 1.20e-08 5.67e-07  -8.6 1.14e-04    -  1.00e+00 1.00e+00h  1
  14  5.2966862e+03 1.18e-12 3.33e-11  -8.6 1.12e-06    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 14

                                   (scaled)                 (unscaled)
Objective...............:   6.1017825057066993e+01    5.2966862028703981e+03
Dual infeasibility......:   3.3310243452433497e-11    2.8915141885792965e-09
Constraint violation....:   1.1769474284051284e-12    1.1769474284051284e-12
Complementarity.........:   2.8885453188273948e-11    2.5074178114821132e-09
Overall NLP error.......:   2.5074178114821132e-09    2.5074178114821132e-09

Number of objective function evaluations             = 16
Number of objective gradient evaluations             = 15
Number of constraint evaluations                     = 16
Number of constraint Jacobian evaluations            = 15
Number of Lagrangian Hessian evaluations             = 14
Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  6.206
Total wall-clock secs in linear solver                      =  0.007
Total wall-clock secs in NLP function evaluations           =  0.005
Total wall-clock secs                                       =  6.219

EXIT: Optimal Solution Found (tol = 1.0e-08).
"Execution stats: Optimal Solution Found (tol = 1.0e-08)."
Querying the solution
MadNLP returns a MadNLPExecutionStats object storing the solution. One can query the optimal objective as:
stats.objective
5296.686202870398
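The stats object also carries the termination status and the dual solution. As a minimal sketch (field and enum names as in recent MadNLP releases; check your installed version if they differ):

```julia
# Verify that MadNLP converged to a locally optimal solution.
@assert stats.status == MadNLP.SOLVE_SUCCEEDED

# Dual solution: multipliers attached to the constraints.
y = stats.multipliers
```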
and the optimal solution:
stats.solution
19-element Vector{Float64}:
0.08541019351901039
0.05671519595851559
-0.04298613531681531
-0.06949870510848599
0.01052239631297736
-0.02087889070093082
0.015806276659318386
-0.0805511118351121
1.0942215071535495
1.0844484919148971
1.1000000081162027
1.0894894924541931
1.100000008195669
1.0717554446903736
1.0999999845093242
1.0973546364490543
1.0866203145709057
1.3432060073263459
0.9418738041880946
Also, recall that each time the callback update! is called, the values are updated internally in the stack stored inside flp. Hence, an alternative way to query the solution is to look directly at the values in the stack. For instance, one can query the optimal values of the voltage:
stack = flp.stack
[stack.vmag stack.vang]
9×2 Matrix{Float64}:
1.1 0.0
1.09735 0.0854102
1.08662 0.0567152
1.09422 -0.0429861
1.08445 -0.0694987
1.1 0.0105224
1.08949 -0.0208789
1.1 0.0158063
1.07176 -0.0805511
and of the power generation:
stack.pgen
3-element Vector{Float64}:
0.8979870769892062
1.3432060073263459
0.9418738041880946
The values inside stack are used to compute the initial point of the optimization routine. Hence, if one calls solve! again, the optimization starts from the optimal solution found in the previous call to solve!, leading to a different convergence pattern. To launch a new optimization from scratch without reinitializing all the data structures, we recommend using the reset! function:
Argos.reset!(flp)
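After the reset, re-solving the model should reproduce the convergence pattern of the first run, since the stack is back to its initial state. A sketch (assuming model still wraps flp, as set up above):

```julia
# Reset the internal stack, then solve again from scratch.
Argos.reset!(flp)
solver = MadNLP.MadNLPSolver(model)
stats = MadNLP.solve!(solver)
```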
Playing with different parameters
MadNLP has different options one may want to tune when solving the OPF. For instance, we can loosen the tolerance to 1e-5 and set the maximum number of iterations to 5 with:
julia> solver = MadNLP.MadNLPSolver(model; tol=1e-5, max_iter=5)
Interior point solver

number of variables......................: 19
number of constraints....................: 36
number of nonzeros in lagrangian hessian.: 61
number of nonzeros in constraint jacobian: 176
status...................................: INITIAL
julia> MadNLP.solve!(solver)
This is MadNLP version v0.8.5, running with Lapack-CPU (BUNCHKAUFMAN)

Number of nonzeros in constraint Jacobian............:      176
Number of nonzeros in Lagrangian Hessian.............:       61

Total number of variables............................:       19
                     variables with only lower bounds:        0
                variables with lower and upper bounds:       11
                     variables with only upper bounds:        0
Total number of equality constraints.................:       14
Total number of inequality constraints...............:       22
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:        4
        inequality constraints with only upper bounds:       18

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  4.5090275e+03 1.63e+00 1.36e+01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.6815396e+03 2.69e-01 9.94e+01  -1.0 3.30e-01    -  4.12e-02 1.00e+00H  1
   2  5.3417426e+03 2.18e-01 1.07e+01  -1.0 5.93e-01    -  8.78e-01 1.00e+00h  1
   3  5.3102797e+03 1.26e-02 4.95e-01  -1.0 2.27e-01    -  1.00e+00 1.00e+00h  1
   4  5.3082939e+03 4.31e-04 1.11e-01  -1.7 1.72e-02    -  1.00e+00 1.00e+00h  1
   5  5.2925192e+03 1.17e-02 1.79e+00  -2.5 5.22e-02    -  8.60e-01 1.00e+00h  1

Number of Iterations....: 5

                                   (scaled)                 (unscaled)
Objective...............:   6.0969821305974421e+01    5.2925192105880569e+03
Dual infeasibility......:   1.7862154614981591e+00    1.5505342547727074e+02
Constraint violation....:   1.1673329416395095e-02    1.1673329416395095e-02
Complementarity.........:   1.0633817611743055e-04    9.2307444546380676e-03
Overall NLP error.......:   2.0577202116458793e-02    1.7862154614981591e+00

Number of objective function evaluations             = 7
Number of objective gradient evaluations             = 6
Number of constraint evaluations                     = 7
Number of constraint Jacobian evaluations            = 6
Number of Lagrangian Hessian evaluations             = 5
Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  0.007
Total wall-clock secs in linear solver                      =  0.000
Total wall-clock secs in NLP function evaluations           =  0.002
Total wall-clock secs                                       =  0.009

EXIT: Maximum Number of Iterations Exceeded.
"Execution stats: Maximum Number of Iterations Exceeded."
Most importantly, one may want to use a sparse linear solver different from UMFPACK, the default in MadNLP. We recommend the HSL solvers (the installation procedure is detailed here). Once HSL is installed, one can solve the OPF with:
using MadNLPHSL
solver = MadNLP.MadNLPSolver(model; linear_solver=Ma27Solver)
MadNLP.solve!(solver)