Biegler's method
Solving the OPF in the reduced-space drastically compresses the size of the OPF problem, but this comes with several downsides: (1) the power flow equations have to be solved at each iteration; (2) the dense reduced Hessian and dense reduced Jacobian have to be evaluated explicitly; and (3) the solution is less accurate (tolerance set to tol=1e-6), as the power flow equations are solved only approximately.
Alternatively, one can use Biegler's method, which reduces the KKT system directly in the full-space instead of working in the reduced-space. In exact arithmetic, this method is equivalent to the full-space method, while avoiding all the reduced-space machinery.
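To give an idea of the reduction, the following toy sketch works through a null-space elimination on a small equality-constrained KKT system: the Jacobian is split into an invertible "state" block and a "control" block, the states and multipliers are eliminated, and only a small dense system in the controls remains. All the data below is made up for illustration; this is not Argos's actual implementation.

```julia
using LinearAlgebra

# Toy KKT system  [W  G'; G  0] * [dx; dλ] = [-r1; -r2].
# Split the variables into states xs (as many as equality constraints)
# and controls xu, and use the invertible state block Gs of the Jacobian
# G = [Gs Gu] to eliminate dxs and dλ. All data here is made up.
W  = [4.0 1.0 0.5;
      1.0 3.0 0.2;
      0.5 0.2 2.0]                  # Hessian of the Lagrangian
G  = [1.0 0.0 2.0;
      0.0 1.0 1.0]                  # equality-constraint Jacobian
r1 = [1.0, 2.0, 3.0]
r2 = [0.5, 1.5]
n, m = size(W, 1), size(G, 1)

# Full-space solve of the (n+m) x (n+m) indefinite KKT matrix.
K = [W G'; G zeros(m, m)]
sol_full = K \ vcat(-r1, -r2)

# Reduction: Z spans the null-space of G, xp is a particular
# solution of G * dx = -r2.
Gs, Gu = G[:, 1:m], G[:, m+1:n]
Z  = vcat(-(Gs \ Gu), Matrix(1.0I, n - m, n - m))
xp = vcat(Gs \ (-r2), zeros(n - m))
Wz  = Z' * W * Z                    # dense reduced Hessian (nu x nu)
dxu = Wz \ (-Z' * (r1 + W * xp))
dx  = Z * dxu + xp                  # recover the full primal step

@assert dx ≈ sol_full[1:n]          # identical to the full-space step
```

In exact arithmetic the recovered step `dx` matches the primal part of the full-space solve, which is the equivalence the text refers to.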
As this method works in the full-space, we start by instantiating a new FullSpaceEvaluator:
datafile = joinpath(INSTANCES_DIR, "case9.m")
flp = Argos.FullSpaceEvaluator(datafile)
A FullSpaceEvaluator object
* device: KernelAbstractions.CPU(false)
* #vars: 19
* #cons: 36
and wrap the resulting evaluator flp in an OPFModel:
model = Argos.OPFModel(flp)
Argos.OPFModel{Float64, Vector{Float64}, Argos.FullSpaceEvaluator{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}, ExaPF.Jacobian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}, ExaPF.FullHessian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}}}
Problem name: Generic
All variables: ████████████████████ 19 All constraints: ████████████████████ 36
free: █████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 8 free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 upper: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 18
low/upp: ████████████⋅⋅⋅⋅⋅⋅⋅⋅ 11 low/upp: ███⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 4
fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 fixed: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 14
infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nnzh: ( 67.89% sparsity) 61 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
nonlinear: ████████████████████ 36
nnzj: ( 74.27% sparsity) 176
Counters:
obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 grad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 cons: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
cons_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
Instantiating MadNLP
Instantiating MadNLP manually to use a BieglerKKTSystem is a little more involved, as one has to parameterize the type directly by defining:
KKT = Argos.BieglerKKTSystem{Float64, Vector{Int}, Vector{Float64}, Matrix{Float64}}
Argos.BieglerKKTSystem{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}
and we instantiate MadNLP with:
using MadNLP
T = Float64
VI = Vector{Int}
VT = Vector{T}
MT = Matrix{T}
solver = MadNLP.MadNLPSolver(
model;
kkt_system=Argos.BieglerKKTSystem{T, VI, VT, MT},
linear_solver=LapackCPUSolver,
callback=MadNLP.SparseCallback,
)
Interior point solver
number of variables......................: 19
number of constraints....................: 36
number of nonzeros in lagrangian hessian.: 61
number of nonzeros in constraint jacobian: 176
status...................................: INITIAL
Note that we are again using Lapack as the linear solver: indeed, the resulting Biegler KKT system is dense (the same condensation procedure as in the reduced-space is applied):
MadNLP.get_kkt(solver.kkt)
5×5 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
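To see why the condensed system ends up dense even though the original Jacobian and Hessian are sparse, recall that the reduction involves terms of the form Gs⁻¹Gu, and the inverse of a sparse matrix is generally dense. A toy illustration (not Argos internals; data is made up): Gs is a sparse tridiagonal matrix, yet Gs⁻¹Gu fills in completely.

```julia
using LinearAlgebra, SparseArrays

# Gs: a sparse tridiagonal "state" Jacobian; its inverse is fully dense,
# so any reduced block of the form Gs⁻¹ * Gu fills in completely.
n = 8
Gs = spdiagm(0 => 2.0 * ones(n), 1 => -ones(n - 1), -1 => -ones(n - 1))
Gu = Matrix(1.0I, n, 2)    # two "control" columns (e₁ and e₂)
S = Gs \ Gu                # reduced coupling block: n x 2, no zero entries
@show count(!iszero, S)    # all n*2 = 16 entries are nonzero
```

This fill-in is why a dense factorization such as Lapack is the natural choice for the condensed KKT system, despite the sparsity of the original problem.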
Once solver is instantiated, we can solve the OPF in the full-space using the same syntax as usual:
julia> MadNLP.solve!(solver)
This is MadNLP version v0.8.5, running with Lapack-CPU (BUNCHKAUFMAN)

Number of nonzeros in constraint Jacobian............: 176
Number of nonzeros in Lagrangian Hessian.............: 61

Total number of variables............................: 19
                     variables with only lower bounds: 0
                variables with lower and upper bounds: 11
                     variables with only upper bounds: 0
Total number of equality constraints.................: 14
Total number of inequality constraints...............: 22
        inequality constraints with only lower bounds: 0
   inequality constraints with lower and upper bounds: 4
        inequality constraints with only upper bounds: 18

iter    objective    inf_pr   inf_du lg(mu)  ||d||   lg(rg) alpha_du alpha_pr  ls
   0  4.5090275e+03 1.63e+00 1.36e+01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.6815289e+03 2.69e-01 9.94e+01  -1.0 3.30e-01    -  4.12e-02 1.00e+00H  1
   2  5.3417452e+03 2.18e-01 1.07e+01  -1.0 5.93e-01    -  8.78e-01 1.00e+00h  1
   3  5.3102810e+03 1.26e-02 4.95e-01  -1.0 2.27e-01    -  1.00e+00 1.00e+00h  1
   4  5.3082951e+03 4.31e-04 1.11e-01  -1.7 1.72e-02    -  1.00e+00 1.00e+00h  1
   5  5.2925234e+03 1.17e-02 1.79e+00  -2.5 5.22e-02    -  8.60e-01 1.00e+00h  1
   6  5.2947390e+03 3.20e-03 5.46e-01  -2.5 3.46e-02    -  1.00e+00 1.00e+00h  1
   7  5.2979077e+03 6.94e-05 8.13e-04  -2.5 7.49e-03    -  1.00e+00 1.00e+00h  1
   8  5.2965361e+03 2.43e-04 6.35e-02  -5.7 1.64e-02    -  9.42e-01 7.25e-01h  1
   9  5.2965861e+03 2.52e-04 6.98e-02  -5.7 2.18e-02    -  9.13e-01 7.12e-01h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||   lg(rg) alpha_du alpha_pr  ls
  10  5.2966784e+03 7.01e-05 3.39e-02  -5.7 8.61e-03    -  8.89e-01 1.00e+00h  1
  11  5.2966860e+03 7.29e-06 2.10e-04  -5.7 2.78e-03    -  1.00e+00 1.00e+00h  1
  12  5.2966867e+03 2.58e-07 7.50e-06  -5.7 5.23e-04    -  1.00e+00 1.00e+00h  1
  13  5.2966862e+03 1.20e-08 5.67e-07  -8.6 1.14e-04    -  1.00e+00 1.00e+00h  1
  14  5.2966862e+03 1.18e-12 3.34e-11  -8.6 1.12e-06    -  1.00e+00 1.00e+00h  1

Number of Iterations....: 14

                                   (scaled)                 (unscaled)
Objective...............:   6.1017825057066993e+01    5.2966862028703981e+03
Dual infeasibility......:   3.3423930290155113e-11    2.9013828376870756e-09
Constraint violation....:   1.1763923168928159e-12    1.1763923168928159e-12
Complementarity.........:   2.8885453188259404e-11    2.5074178114808510e-09
Overall NLP error.......:   2.5074178114808510e-09    2.5074178114808510e-09

Number of objective function evaluations             = 16
Number of objective gradient evaluations             = 15
Number of constraint evaluations                     = 16
Number of constraint Jacobian evaluations            = 15
Number of Lagrangian Hessian evaluations             = 14
Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  = 13.258
Total wall-clock secs in linear solver                      =  0.068
Total wall-clock secs in NLP function evaluations           =  0.001
Total wall-clock secs                                       = 13.327

EXIT: Optimal Solution Found (tol = 1.0e-08).
"Execution stats: Optimal Solution Found (tol = 1.0e-08)."
Note that we obtain exactly the same convergence as with the full-space method.