ReducedSpaceEvaluator
As opposed to the FullSpaceEvaluator, the ReducedSpaceEvaluator works in the reduced space induced by the power flow equations. Numerically, this amounts to solving the system of nonlinear equations $g(x, u) = 0$ at each iteration, in order to find a state $x(u)$ satisfying
\[g(x(u), u) = 0\]
In short, the reduced-space method automatically satisfies the power flow equations, at the price of solving a system of nonlinear equations at each iteration. The OPF problem can be reformulated in the reduced space as:
\[\min_{u } \; f(x(u), u) \quad \text{subject to}\quad h_l \leq h(x(u), u) \leq h_u .\]
The state $x$ depends implicitly on the control $u$, and is removed from the optimization variables.
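To fix ideas, here is a toy sketch of this mechanism, written with a hypothetical scalar function g (not the actual power flow equations): the implicit state x(u) is computed with Newton's method, and the reduced objective evaluates f at x(u). All names (g, dg, implicit_state, freduced) are illustrative.
# Hypothetical toy model: g(x, u) = 0 defines x implicitly as a function of u
g(x, u) = x^3 + x - u
dg(x, u) = 3x^2 + 1           # derivative of g with respect to x
function implicit_state(u; tol=1e-10)
    x = 0.0
    # Newton iterations to solve g(x, u) = 0
    while abs(g(x, u)) > tol
        x -= g(x, u) / dg(x, u)
    end
    return x
end
f(x, u) = (x - 1.0)^2 + u^2            # hypothetical objective
freduced(u) = f(implicit_state(u), u)  # reduced objective f(x(u), u)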
The associated reduced gradient and reduced Hessian can be evaluated with the adjoint and the adjoint-adjoint methods, respectively, as described in this paper.
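For reference, the adjoint formula for the reduced gradient is a standard result: introducing an adjoint vector $\lambda$ solving the transposed linear system $\nabla_x g(x, u)^\top \lambda = \nabla_x f(x, u)$, the chain rule applied to $g(x(u), u) = 0$ gives the reduced gradient
\[\nabla_u f(x, u) - \nabla_u g(x, u)^\top \lambda .\]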
The reduced-space method was one of the first methods proposed to solve the OPF problem, as described in the seminal article by Dommel & Tinney.
Initialization
A ReducedSpaceEvaluator can be instantiated either from a MATPOWER file:
datafile = joinpath(INSTANCES_DIR, "case9.m")
red = Argos.ReducedSpaceEvaluator(datafile)
A ReducedSpaceEvaluator object
* device: KernelAbstractions.CPU(false)
* #vars: 5
* #cons: 28
* linear solver: ExaPF.LinearSolvers.DirectSolver{SparseArrays.UMFPACK.UmfpackLU{Float64, Int64}}
or, equivalently, from an ExaPF.PolarForm object:
polar = ExaPF.PolarForm(datafile)
red = Argos.ReducedSpaceEvaluator(polar)
A ReducedSpaceEvaluator object
* device: KernelAbstractions.CPU(false)
* #vars: 5
* #cons: 28
* linear solver: ExaPF.LinearSolvers.DirectSolver{SparseArrays.UMFPACK.UmfpackLU{Float64, Int64}}
The ReducedSpaceEvaluator is somewhat more complicated than the FullSpaceEvaluator, as it has to solve the power flow equations at each new evaluation. The full signature (with default arguments) is:
ReducedSpaceEvaluator(
model; # MATPOWER file or ExaPF.PolarForm object
line_constraints=true,
powerflow_solver=NewtonRaphson(1e-10),
linear_solver=nothing,
nbatch_hessian=1,
)
In detail:
- The argument line_constraints activates the line flow constraints in the model (true by default).
- The option powerflow_solver allows passing a custom NewtonRaphson solver with a different verbose level or stopping tolerance (by default, the tolerance is set to 1e-10).
- The option linear_solver specifies a custom linear solver to use in the Newton-Raphson algorithm and in the computation of the reduced derivatives. If set to nothing (default), Argos falls back to UMFPACK on the CPU and to CUSOLVERRF on CUDA GPUs. In addition, one can use any linear solver compatible with ExaPF, including iterative linear solvers.
- Finally, the argument nbatch_hessian specifies the number of right-hand sides used when solving a linear system A X = B in parallel (useful to streamline the evaluation of the reduced derivatives), as illustrated in the sketch below.
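For instance, a sketch combining these options (the name red_custom and the values below are arbitrary, chosen for illustration):
red_custom = Argos.ReducedSpaceEvaluator(
    datafile;
    line_constraints=false,            # drop the line flow constraints
    powerflow_solver=NewtonRaphson(tol=1e-8),
    nbatch_hessian=8,                  # solve 8 right-hand sides at a time
)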
Attributes
Querying the attributes of a ReducedSpaceEvaluator works similarly to the FullSpaceEvaluator. One can query the original ExaPF model with
model = Argos.model(red)
Polar formulation (instantiated on device KernelAbstractions.CPU(false))
Network characteristics:
#buses: 9 (#slack: 1 #PV: 2 #PQ: 6)
#generators: 3
#lines: 9
giving a mathematical formulation with:
#controls: 5
#states : 14
The initial variable:
u = Argos.initial(red)
5-element Vector{Float64}:
1.0
1.0
1.0
1.63
0.85
The dimensions of the problem:
(n, m) = Argos.n_variables(red), Argos.n_constraints(red)
(5, 28)
The bounds on the control u:
ulb, uub = Argos.bounds(red, Argos.Variables())
[ulb uub]
5×2 Matrix{Float64}:
0.9 1.1
0.9 1.1
0.9 1.1
0.1 3.0
0.1 2.7
The bounds on the constraints:
clb, cub = Argos.bounds(red, Argos.Constraints())
[clb cub]
28×2 Matrix{Float64}:
0.9 1.1
0.9 1.1
0.9 1.1
0.9 1.1
0.9 1.1
0.9 1.1
0.1 2.5
-3.0 3.0
-3.0 3.0
-3.0 3.0
⋮
-Inf 6.25
-Inf 6.25
-Inf 2.25
-Inf 9.0
-Inf 2.25
-Inf 6.25
-Inf 6.25
-Inf 6.25
-Inf 6.25
Note that the number of variables is n=5, compared to n=19 for the FullSpaceEvaluator: the state x has been removed from the formulation.
Reduced callback
Let's have a look at the current cache:
stack = red.stack
[stack.vmag stack.vang]
9×2 Matrix{Float64}:
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
1.0 0.0
The voltages are those specified in the MATPOWER file case9.m.
Now comes the important part. When we call update! on a FullSpaceEvaluator, we just copy the values of x to the internal cache stack. The behavior is different for a ReducedSpaceEvaluator: in addition to copying the values of u to the cache stack, we solve the power flow equations to find the associated implicit state $x(u)$. This is illustrated by the following lines:
u .= 1.1
Argos.update!(red, u)
[stack.vmag stack.vang]
9×2 Matrix{Float64}:
1.1 0.0
1.1 0.0478953
1.1 0.0593198
1.09708 -0.0468226
1.09017 -0.0728149
1.11036 0.00651984
1.09657 -0.0364106
1.10447 -0.00872308
1.07603 -0.0910253
The values stored in u have been copied to the corresponding entries in stack (the first three entries in stack.vmag, corresponding to the voltage magnitudes at the REF and PV nodes). Then, the voltage magnitudes at the PQ nodes and the voltage angles have been updated implicitly, so that the values stored in stack satisfy the power flow equations.
Calling update! on a ReducedSpaceEvaluator runs a power flow solver under the hood (using the Newton-Raphson algorithm implemented in ExaPF).
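Once update! has returned, the other callbacks reuse the state cached in stack. As a brief sketch, one can evaluate the constraints at the current point with the constraint! callback of the Evaluator API (assuming it follows the same signature as for the other evaluators):
c = zeros(Argos.n_constraints(red))  # buffer for the m constraints
Argos.constraint!(red, c, u)         # evaluates h(x(u), u) using the cached state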
The nonlinear solver can be parameterized when instantiating a new ReducedSpaceEvaluator. For instance, if one wants to display the convergence of the Newton-Raphson algorithm (verbose=1) and loosen the tolerance in the stopping criterion (tol=1e-6), one can simply instantiate red as
red = Argos.ReducedSpaceEvaluator(
datafile;
powerflow_solver=NewtonRaphson(tol=1e-6, verbose=1)
)
A ReducedSpaceEvaluator object
* device: KernelAbstractions.CPU(false)
* #vars: 5
* #cons: 28
* linear solver: ExaPF.LinearSolvers.DirectSolver{SparseArrays.UMFPACK.UmfpackLU{Float64, Int64}}
By calling update! again, we get:
julia> Argos.update!(red, u)
#it 0: 4.10027e+00
#it 1: 7.58115e-01
#it 2: 1.92339e-02
#it 3: 1.56197e-05
#it 4: 1.07094e-11
Power flow has converged: true
  * #iterations: 4
  * Time Jacobian (s) ........: 0.0001
  * Time linear solver (s) ...: 0.0001
  * Time total (s) ...........: 0.3838
with a slightly different solution (as we have loosened the tolerance):
stack = red.stack
[stack.vmag stack.vang]
9×2 Matrix{Float64}:
1.1 0.0
1.1 0.0478953
1.1 0.0593198
1.09708 -0.0468226
1.09017 -0.0728149
1.11036 0.00651984
1.09657 -0.0364106
1.10447 -0.00872308
1.07603 -0.0910253
Reduced gradient
As for update!, evaluating the reduced gradient is slightly more involved in the reduced space, as it requires the solution of one (sparse) linear system with the Jacobian of the power flow, $\nabla_x g(x, u)$. Putting aside this consideration, evaluating the gradient simply amounts to calling:
g = zeros(n)
Argos.gradient!(red, g, u)
g
5-element Vector{Float64}:
-61.61938527066695
-59.06891755088506
-48.990161386865104
-606.4736603611914
208.31272436489508
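As a quick sanity check, one can compare the first entry of the reduced gradient against a forward finite difference (a sketch; objective denotes the Evaluator API's objective callback, and the step h is arbitrary):
Argos.update!(red, u)
f0 = Argos.objective(red, u)
h = 1e-6
u_fd = copy(u)
u_fd[1] += h                 # perturb the first control
Argos.update!(red, u_fd)     # re-solve the power flow at the perturbed point
f1 = Argos.objective(red, u_fd)
(f1 - f0) / h                # should match g[1] up to O(h)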
Reduced Jacobian and Hessian
Similarly, evaluating the reduced Jacobian and the reduced Hessian involves the solution of $n_u$ and $2 n_u$ linear systems, respectively.
Evaluating the (dense) reduced Jacobian translates to
J = zeros(m, n)
Argos.jacobian!(red, J, u)
J
28×5 Matrix{Float64}:
0.765729 0.154398 0.15288 -0.00498848 -0.00574179
0.572926 0.17239 0.370376 -0.00549057 -0.00721226
0.151692 0.182267 0.722022 -0.000206518 0.000284287
0.162729 0.498828 0.426092 0.000131103 -0.000881197
0.163486 0.703651 0.19535 -1.50665e-6 -0.00128601
0.586122 0.364173 0.177659 -0.00752157 -0.00689615
-0.023188 -0.0222282 -0.0184355 -0.977079 -0.973396
4.48662 -2.95284 -2.92365 0.0495878 0.0641623
-2.88196 5.09596 -3.44368 0.0567055 0.0226702
-2.85143 -3.42617 4.99578 0.00388203 0.0475052
⋮
0.251124 -0.234971 -0.225662 -1.90713 -1.89898
0.532174 0.0258569 -0.622509 -0.222331 -0.37208
1.07485 -0.12881 -0.654414 0.42332 0.715306
1.30624 1.56953 -2.3377 -0.00177836 2.22628
-0.012056 -1.21248 1.25494 -0.358696 0.383863
-0.0114297 -0.225169 0.250583 0.345942 -0.366566
0.274084 -0.484641 0.327506 2.19461 -0.00215601
-1.09217 1.22837 -0.0288587 0.701128 0.426998
0.452878 -0.713994 0.0765493 -0.778092 -0.472015
and similarly, for the (dense) reduced Hessian
H = zeros(n, n)
Argos.hessian!(red, H, u)
H
5×5 Matrix{Float64}:
3101.12 -1230.83 -1573.41 100.337 105.971
-1230.83 2140.91 -760.654 -60.9243 -11.7018
-1573.41 -760.654 2476.81 -21.0085 -94.5838
100.337 -60.9243 -21.0085 3922.1 2181.62
105.971 -11.7018 -94.5838 2181.62 4668.9
As we will explain later, the computation of the reduced Jacobian and reduced Hessian can be streamlined on the GPU.
Deport on CUDA GPU
Instantiating a ReducedSpaceEvaluator on an NVIDIA GPU translates to:
using CUDAKernels # suppose CUDAKernels has been installed
red = Argos.ReducedSpaceEvaluator(datafile; device=CUDADevice(), nbatch_hessian=256)
The number of batches nbatch_hessian is the number of right-hand sides used to streamline the solution of the linear systems.
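A hypothetical usage sketch, assuming a CUDA-capable device and that the evaluator's callbacks are fed device arrays (the intended workflow on the GPU; the buffer names are illustrative):
using CUDA
u = Argos.initial(red)            # initial control, stored on the device
n = Argos.n_variables(red)
H = CUDA.zeros(Float64, n, n)     # dense reduced Hessian, allocated on the GPU
Argos.update!(red, u)             # power flow solved on the device
Argos.hessian!(red, H, u)         # 2n linear solves, batched by nbatch_hessian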