Reduced-space OPF
Solving the OPF in the reduced space (also known as reduce-then-linearize) follows the same procedure as in the full space. We start by instantiating a ReducedSpaceEvaluator:
datafile = joinpath(INSTANCES_DIR, "case9.m")
red = Argos.ReducedSpaceEvaluator(datafile)
A ReducedSpaceEvaluator object
* device: KernelAbstractions.CPU(false)
* #vars: 5
* #cons: 28
* linear solver: ExaPF.LinearSolvers.DirectSolver{SparseArrays.UMFPACK.UmfpackLU{Float64, Int64}}
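The evaluator exposes the reduced callbacks directly, which is convenient for a quick sanity check before calling an optimization solver. A minimal sketch, assuming Argos's evaluator API (Argos.initial, Argos.update!, Argos.objective, Argos.gradient!):

u = Argos.initial(red)        # initial control (voltage magnitudes, active generations)
Argos.update!(red, u)         # solve the power flow equations with Newton-Raphson
c = Argos.objective(red, u)   # objective value in the reduced space
g = zeros(length(u))
Argos.gradient!(red, g, u)    # reduced gradient, computed with implicit sensitivities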
Instantiating MadNLP
As before, we wrap the ReducedSpaceEvaluator object red with an OPFModel:
model = Argos.OPFModel(red)
Argos.OPFModel{Float64, Vector{Float64}, Argos.ReducedSpaceEvaluator{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}, ExaPF.Jacobian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.PowerFlowBalance{Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 7}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 7}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 7}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}, ExaPF.Jacobian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.PowerFlowBalance{Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 2}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 2}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 2}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}, ExaPF.Jacobian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 5}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 5}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 5}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}, ExaPF.FullHessian{ExaPF.PolarForm{Float64, Vector{Int64}, Vector{Float64}, Matrix{Float64}}, ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.MultiExpressions}, ExaPF.NetworkStack{Vector{Float64}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, NamedTuple{(:c, :sfp, :sfq, :stp, :stq, :∂edge_vm_fr, :∂edge_vm_to, :∂edge_va_fr, :∂edge_va_to), NTuple{9, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}}}}, Vector{ForwardDiff.Dual{Nothing, Float64, 8}}, SparseArrays.SparseMatrixCSC{Float64, Int64}, Vector{Int64}}, Argos.Reduction{Vector{Float64}, Argos.ImplicitSensitivity{SparseArrays.UMFPACK.UmfpackLU{Float64, Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}}}}
  Problem name: Generic
   All variables: ████████████████████ 5        All constraints: ████████████████████ 28
            free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                   free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  upper: █████████████⋅⋅⋅⋅⋅⋅⋅ 18
         low/upp: ████████████████████ 5                low/upp: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 10
           fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
          infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
            nnzh: (  0.00% sparsity)   15                 linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
                                                       nonlinear: ████████████████████ 28
                                                            nnzj: (  0.00% sparsity)   140

  Counters:
             obj: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            grad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            cons: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        cons_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0        cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0             jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0         jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
         jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0           jprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0       jprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
       jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0          jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0      jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
      jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0            hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0           hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0          jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
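Since OPFModel implements the NLPModels.jl interface, the same callbacks are also available through the generic NLPModels API. A minimal sketch, assuming the standard NLPModels functions obj, grad, and cons:

using NLPModels

x0 = model.meta.x0               # initial point stored in the model's metadata
fx = NLPModels.obj(model, x0)    # objective in the reduced space
gx = NLPModels.grad(model, x0)   # dense reduced gradient (5 entries)
cx = NLPModels.cons(model, x0)   # the 28 operational constraints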
Why shouldn't we use MadNLP's default parameters?
Suppose now we instantiate a default MadNLP instance:
using MadNLP
solver = MadNLP.MadNLPSolver(model)
Interior point solver
number of variables......................: 5
number of constraints....................: 28
number of nonzeros in lagrangian hessian.: 15
number of nonzeros in constraint jacobian: 140
status...................................: INITIAL
By default, MadNLP uses sparse data structures. These are not appropriate for a ReducedSpaceEvaluator, which generates a dense Jacobian and a dense reduced Hessian. Indeed, by default MadNLP generates a sparse KKT system with a significant number of nonzeros:
MadNLP.get_kkt(solver.kkt)
5×5 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
How to parameterize MadNLP to use dense data structures?
Instead, we can parameterize MadNLP to use a dense KKT system. The first option is to use a DenseKKTSystem in conjunction with a dense linear solver (such as Lapack):
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.DenseKKTSystem,
linear_solver=LapackCPUSolver,
)
MadNLP.get_kkt(solver.kkt)
61×61 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
⋮ ⋮ ⋱ ⋮ ⋮
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0 0.0 0.0 0.0
The associated KKT system is now represented as a dense matrix, which is more appropriate for a dense problem. However, the KKT system is still too large, as its size is proportional to both the number of variables and the number of constraints; this approach is not tractable on larger problems. Fortunately, MadNLP can condense the KKT system using a Schur complement approach. By doing so, the size of the KKT system becomes proportional only to the number of variables (here, 5):
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.DenseCondensedKKTSystem,
linear_solver=LapackCPUSolver,
)
MadNLP.get_kkt(solver.kkt)
5×5 Matrix{Float64}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
This alternative approach significantly compresses the size of the KKT system, and it is the approach used by default in run_opf.
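For reference, here is a sketch of what the condensation does, written in standard interior-point notation rather than as a verbatim description of MadNLP's internals. Let W be the dense reduced Hessian of the Lagrangian, A the dense Jacobian of the m inequality constraints, Σx the diagonal matrix built from the primal bound multipliers, and Σs the diagonal matrix built from the slack variables and their bound multipliers. Eliminating the slack and multiplier blocks from the augmented KKT system via a Schur complement leaves the n×n condensed system

(W + Σx + Aᵀ Σs A) dx = -r,

whose size depends only on the n = 5 control variables, not on the m = 28 constraints.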
Once the problem is written in condensed form inside MadNLP, one can solve the OPF in the reduced space with:
stats = MadNLP.solve!(solver; tol=1e-6)
The options set during resolve may not have an effect
This is MadNLP version v0.8.5, running with Lapack-CPU (BUNCHKAUFMAN)

Number of nonzeros in constraint Jacobian............:      140
Number of nonzeros in Lagrangian Hessian.............:       15

Total number of variables............................:        5
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        5
                     variables with only upper bounds:        0
Total number of equality constraints.................:        0
Total number of inequality constraints...............:       28
        inequality constraints with only lower bounds:        0
   inequality constraints with lower and upper bounds:       10
        inequality constraints with only upper bounds:       18

iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  5.4383237e+03 0.00e+00 1.23e+01  -1.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  5.4083555e+03 2.86e-03 1.02e+01  -1.0 1.02e-01   2.0 9.89e-01 1.00e+00f  1
   2  5.3563152e+03 1.15e-02 7.73e+00  -1.0 2.32e-01   1.5 1.00e+00 1.00e+00h  1
   3  5.3424883e+03 1.21e-03 5.74e+00  -1.0 6.45e-02   1.9 1.00e+00 1.00e+00h  1
   4  5.3217784e+03 3.35e-03 4.26e+00  -1.0 1.44e-01   1.5 1.00e+00 1.00e+00h  1
   5  5.3055202e+03 8.48e-03 3.13e+00  -1.0 2.06e-01   1.0 1.00e+00 1.00e+00h  1
   6  5.3014051e+03 2.30e-04 1.31e+01  -1.0 4.08e-01    -  6.63e-01 1.00e+00H  1
   7  5.3005684e+03 7.75e-02 6.98e-01  -1.0 2.67e-01    -  8.42e-01 1.00e+00h  1
   8  5.2981109e+03 1.60e-03 2.74e-02  -1.7 8.00e-02    -  1.00e+00 1.00e+00h  1
   9  5.2968550e+03 6.92e-04 3.67e-01  -3.8 2.15e-02    -  9.41e-01 7.31e-01h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  10  5.2967296e+03 4.34e-04 7.80e-01  -3.8 2.09e-02    -  9.42e-01 6.60e-01h  1
  11  5.2966970e+03 1.62e-04 2.32e-01  -3.8 1.23e-02    -  8.86e-01 1.00e+00h  1
  12  5.2966910e+03 1.01e-05 2.37e-04  -3.8 3.07e-03    -  1.00e+00 1.00e+00h  1
  13  5.2966862e+03 1.55e-06 5.50e-04  -5.7 1.18e-03    -  1.00e+00 9.96e-01h  1
  14  5.2966861e+03 1.42e-08 4.36e-05  -5.7 1.15e-04    -  1.00e+00 1.00e+00h  1
  15  5.2966861e+03 7.56e-13 1.50e-04  -5.7 9.17e-07    -  1.00e+00 1.00e+00h  1
  16  5.2966861e+03 2.67e-13 1.34e-04  -5.7 1.44e-07    -  1.00e+00 1.00e+00h  1
  17  5.2966861e+03 8.10e-15 3.14e-05  -5.7 1.40e-07    -  1.00e+00 1.00e+00h  1
  18  5.2966861e+03 9.86e-15 1.46e-05  -5.7 7.65e-09    -  1.00e+00 1.00e+00H  1
  19  5.2966860e+03 8.58e-11 1.78e-03  -8.0 9.88e-06    -  1.00e+00 1.00e+00h  1
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
  20  5.2966860e+03 4.24e-11 1.58e-03  -8.0 1.84e-06    -  1.00e+00 1.00e+00h  1
  21  5.2966860e+03 9.99e-13 3.94e-04  -8.0 1.79e-06    -  1.00e+00 1.00e+00h  1
  22  5.2966860e+03 8.14e-14 1.84e-04  -8.0 9.25e-08    -  1.00e+00 1.00e+00h  1
  23  5.2966860e+03 7.52e-15 1.46e-05  -8.0 4.27e-08    -  1.00e+00 1.00e+00h  1
  24  5.2966860e+03 4.33e-15 4.26e-06  -8.0 3.41e-09    -  1.00e+00 1.00e+00h  1
  25  5.2966860e+03 7.06e-15 4.77e-07  -8.0 9.91e-10    -  1.00e+00 1.00e+00H  1

Number of Iterations....: 25

                                   (scaled)                 (unscaled)
Objective...............:   5.8305314552427592e+02    5.2966860269272174e+03
Dual infeasibility......:   4.7657338564377483e-07    4.3293816557236605e-06
Constraint violation....:   7.0573972013598940e-15    7.0573972013598940e-15
Complementarity.........:   1.1007886737022546e-09    1.0000000911333248e-08
Overall NLP error.......:   5.2460653729546606e-08    4.7657338564377483e-07

Number of objective function evaluations             = 29
Number of objective gradient evaluations             = 26
Number of constraint evaluations                     = 29
Number of constraint Jacobian evaluations            = 26
Number of Lagrangian Hessian evaluations             = 25
Total wall-clock secs in solver (w/o fun. eval./lin. alg.)  =  3.174
Total wall-clock secs in linear solver                      =  0.000
Total wall-clock secs in NLP function evaluations           =  0.348
Total wall-clock secs                                       =  3.522

EXIT: Optimal Solution Found (tol = 1.0e-06).
"Execution stats: Optimal Solution Found (tol = 1.0e-06)."
We recommend setting the tolerance above the tolerance of the Newton-Raphson algorithm used inside the ReducedSpaceEvaluator. Indeed, the power flow is solved only approximately, leading to slightly inaccurate evaluations and derivatives, which impacts the convergence of the interior-point algorithm. In general, we recommend setting tol=1e-5.
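Following this recommendation, the condensed setup on the CPU becomes:

solver = MadNLP.MadNLPSolver(
    model;
    kkt_system=MadNLP.DenseCondensedKKTSystem,
    linear_solver=LapackCPUSolver,
    tol=1e-5,   # looser than the inner Newton-Raphson tolerance
)
stats = MadNLP.solve!(solver)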
Here, we are using Lapack on the CPU to solve the condensed KKT system at each iteration of the interior-point algorithm. However, if an NVIDIA GPU is available, we recommend using a CUDA-accelerated Lapack version, which is more efficient than the default CPU Lapack. If MadNLPGPU is installed, this amounts to:
using MadNLPGPU
solver = MadNLP.MadNLPSolver(
model;
kkt_system=MadNLP.DenseCondensedKKTSystem,
linear_solver=LapackGPUSolver,
    tol=1e-5,
)
MadNLP.solve!(solver)
Querying the solution
As before, we can query the solution returned by MadNLP with:
stats.objective
5296.686026927217
and
stats.solution
5-element Vector{Float64}:
1.1000010889428922
1.0973556418640533
1.0866213390199266
1.3432060069202107
0.941873799868245
Alternatively, one can look directly at the values in the stack stored inside red:
stack = red.stack
[stack.vmag stack.vang]
9×2 Matrix{Float64}:
1.1 0.0
1.09736 0.08541
1.08662 0.0567151
1.09422 -0.042986
1.08445 -0.0694986
1.1 0.0105224
1.08949 -0.0208788
1.1 0.0158063
1.07176 -0.0805509
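The stack stores the remaining physical variables as well. For instance, assuming the ExaPF NetworkStack exposes the active power generations in a pgen field, one can recover the optimal dispatch with:

stack.pgen   # active power generations at the optimum (per unit)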