Quick Start

This page introduces the first steps to set up ExaPF.jl. We show how to load a power network instance and how to solve the power flow equations both on the CPU and on the GPU. The full script is implemented in test/quickstart.jl.

We start by importing CUDA and KernelAbstractions:

using CUDA
using KernelAbstractions
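
The GPU sections of this page require a functional CUDA installation. You can check that a CUDA device is usable with CUDA.jl's standard API:

CUDA.functional()  # returns true if a CUDA-capable GPU is available and usable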

Then, we load ExaPF and its submodules with

using ExaPF
import ExaPF: AutoDiff
const PS = ExaPF.PowerSystem
const LS = ExaPF.LinearSolvers

Short version

ExaPF loads instances from the pglib-opf benchmark. It ships an artifact, defined in Artifacts.toml, that is built from the ExaData repository and contains test cases relevant to the Exascale Computing Project. You can select a data file with

datafile = joinpath(artifact"ExaData", "ExaData", "case1354.m")
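
The artifact string macro used above is part of Julia's artifact machinery. If it is not already in scope, loading LazyArtifacts first makes it available (a minimal sketch, assuming the ExaData artifact is declared lazy):

using LazyArtifacts
datafile = joinpath(artifact"ExaData", "ExaData", "case1354.m")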

The powerflow equations can be solved in three lines of code, as

julia> polar = ExaPF.PolarForm(datafile, CPU())  # Load data
Polar formulation (instantiated on device CPU(false))
Network characteristics:
    #buses:      1354  (#slack: 1  #PV: 259  #PQ: 1094)
    #generators: 260
    #lines:      1991
giving a mathematical formulation with:
    #controls:   519
    #states  :   2447
julia> stack = ExaPF.NetworkStack(polar)  # Load variables
2968-elements NetworkStack{Vector{Float64}}
julia> convergence = run_pf(polar, stack; verbose=1)
#it 0: 1.15103e+02
#it 1: 1.50328e+01
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 1.57368e-11
Power flow has converged: true
  * #iterations: 5
  * Time Jacobian (s) ........: 0.9865
  * Time linear solver (s) ...: 0.0046
     * update (s) ............: 0.0041
     * ldiv (s) ..............: 0.0006
  * Time total (s) ...........: 1.8605

Implicitly, ExaPF has just performed the following operations:

  • automatically instantiated a starting point $x_0$ from the variables stored in stack;
  • instantiated the Jacobian of the powerflow equations using AutoDiff;
  • solved the powerflow equations iteratively, using a Newton-Raphson algorithm.

This compact syntax allows solving any powerflow equations quickly, in a few lines of code. However, in most cases, the user may want finer-grained control over the different objects being manipulated.

Detailed version

In what follows, we detail, step by step, the procedure to solve the powerflow equations.

How to load a MATPOWER instance as a PowerNetwork object?

We start by importing a MATPOWER instance into an ExaPF.PowerSystem.PowerNetwork object:

julia> pf = PS.PowerNetwork(datafile)
PowerNetwork object with:
    Buses: 1354 (Slack: 1. PV: 259. PQ: 1094)
    Generators: 260.

The different fields of the object pf specify the characteristics of the network. For instance, we can retrieve the number of buses or get the indexing of the PV buses with

julia> nbus = PS.get(pf, PS.NumberOfBuses())
1354
julia> pv_indexes = pf.pv;
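
Other quantities can be queried in the same fashion; for instance, assuming the attribute PS.NumberOfGenerators and the field pf.pq follow the same pattern as above:

julia> ngen = PS.get(pf, PS.NumberOfGenerators());
julia> pq_indexes = pf.pq;   # indexing of the PQ buses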

However, an ExaPF.PowerSystem.PowerNetwork object stores only the physical attributes of the network. To choose a given mathematical formulation, we need to pass the object pf to an ExaPF.AbstractFormulation layer. Currently, only the polar formulation is provided, with the ExaPF.PolarForm structure. In the future, other formulations (e.g. RectangularForm) may be implemented as well.

How to solve the powerflow equations?

To solve the powerflow equations, we need to choose a given mathematical formulation for the equations of the network. To each formulation corresponds a given state $x$ and control $u$. Using the polar representation of the voltage vector, $\bm{v} = |v|e^{j \theta}$, each bus $i=1, \cdots, N_B$ must satisfy the power balance equations:

\[\begin{aligned} p_i &= v_i \sum_{j}^{n} v_j (g_{ij}\cos{(\theta_i - \theta_j)} + b_{ij}\sin{(\theta_i - \theta_j})) \,, \\ q_i &= v_i \sum_{j}^{n} v_j (g_{ij}\sin{(\theta_i - \theta_j)} - b_{ij}\cos{(\theta_i - \theta_j})) \,. \end{aligned}\]

The powerflow equations can be rewritten in the abstract mathematical form:

\[g(x, u) = 0.\]

For a given control $u$, solving the powerflow equations amounts to finding a state $x(u)$ such that $g(x(u), u) = 0$.
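
In the polar formulation used below, the state gathers the voltage angles at the PV and PQ buses together with the voltage magnitudes at the PQ buses, while the control gathers the voltage magnitudes at the slack and PV buses and the active power generations at the PV buses:

\[x = (\theta_{pv}, \theta_{pq}, |v|_{pq}) \,, \qquad u = (|v|_{ref}, |v|_{pv}, p_{g,pv}) \,.\]

For case1354 this gives $259 + 1094 + 1094 = 2447$ states and $1 + 259 + 259 = 519$ controls, which matches the dimensions reported by ExaPF.PolarForm in the short version above.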

To this end, ExaPF.jl implements a Newton-Raphson algorithm that solves the powerflow equations in a few lines of code. We first instantiate a PolarForm object to adopt a polar formulation as a model:

julia> polar = ExaPF.PolarForm(pf, CPU())
Polar formulation (instantiated on device CPU(false))
Network characteristics:
    #buses:      1354  (#slack: 1  #PV: 259  #PQ: 1094)
    #generators: 260
    #lines:      1991
giving a mathematical formulation with:
    #controls:   519
    #states  :   2447

Note that the constructor ExaPF.PolarForm takes as input an ExaPF.PowerSystem.PowerNetwork object and a KernelAbstractions.jl device (here set to CPU() by default). We will explain in the next section how to load an ExaPF.PolarForm object on the GPU with the help of a CUDABackend().

The Newton-Raphson algorithm solves the equation $g(x, u) = 0$ in an iterative fashion. At each step, it updates the current iterate according to:

\[ x_{k+1} = x_k - (\nabla_x g_k)^{-1} g(x_k, u).\]

Hence, the algorithm requires the following elements (a generic sketch of how they fit together follows this list):

  • an initial variable $x_0$
  • a function to solve efficiently the linear system $(\nabla_x g_k)\, d_k = g(x_k, u)$ giving the Newton step, with $x_{k+1} = x_k - d_k$
  • a function to evaluate the Jacobian $\nabla_x g_k$
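
Before detailing how ExaPF builds each of these ingredients, here is a minimal, generic sketch of how they fit together in a Newton-Raphson loop. This is not ExaPF's actual nlsolve! implementation; the residual function g, the Jacobian function jacobian and the starting point x0 are placeholders supplied by the user:

using LinearAlgebra: norm

# Generic Newton-Raphson sketch: solve g(x) = 0 for a fixed control.
function newton_raphson_sketch(g, jacobian, x0; tol=1e-10, maxiter=20)
    x = copy(x0)
    for _ in 1:maxiter
        r = g(x)                  # residual g(x_k, u)
        norm(r) <= tol && break   # convergence test on the residual norm
        d = jacobian(x) \ r       # Newton step: solve (∇x g_k) d_k = g(x_k, u)
        x .-= d                   # x_{k+1} = x_k - d_k
    end
    return x
end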

The variable $x$ is instantiated as:

julia> stack = ExaPF.NetworkStack(polar)
2968-elements NetworkStack{Vector{Float64}}

The function $g$ is implemented using ExaPF's custom modeler:

julia> basis = ExaPF.PolarBasis(polar)
PolarBasis (AbstractExpression)
julia> powerflow = ExaPF.PowerFlowBalance(polar) ∘ basis
ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.PowerFlowBalance{Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}}(PolarBasis (AbstractExpression), PowerFlowBalance (AbstractExpression))

The Jacobian $\nabla_x g$ is evaluated automatically using forward-mode AutoDiff:

julia> mapx = ExaPF.mapping(polar, State());
julia> jx = ExaPF.Jacobian(polar, powerflow, mapx)
A AutoDiff Jacobian for ExaPF.ComposedExpressions{ExaPF.PolarBasis{Vector{Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}, ExaPF.PowerFlowBalance{Vector{Float64}, SparseArrays.SparseMatrixCSC{Float64, Int64}}}(PolarBasis (AbstractExpression), PowerFlowBalance (AbstractExpression))
Number of Jacobian colors: 25

The (direct) linear solver can be instantiated as

julia> linear_solver = LS.DirectSolver(jx.J);

Let's explain these three objects further.

  • stack is an AbstractStack storing all the variables attached to the formulation polar::PolarForm.
  • jx is a Jacobian structure which allows the solver to compute efficiently the Jacobian of the powerflow equations $\nabla_x g$ using AutoDiff.
  • linear_solver specifies the linear algorithm used to solve the linear system $(\nabla_x g_k)\, d_k = g(x_k, u)$. By default, we use direct linear algebra.

In the AutoDiff Jacobian jx, the evaluation of the Jacobian $J$ is stored in jx.J:

julia> jac = jx.J
2447×2447 SparseArrays.SparseMatrixCSC{Float64, Int64} with 15803 stored entries:
⎡⢿⣷⣲⣷⣯⣷⣣⢯⣮⣌⢫⡿⣿⣲⢡⣾⡾⣶⣰⣷⡧⣾⣵⣿⢺⣵⣧⣙⣿⢿⣟⡮⢴⣗⣷⣶⣾⣿⡱⡧⎤
⎢⢼⣾⣿⣿⣿⡺⣶⣿⣟⢷⣲⣿⣷⣿⣳⡟⢹⡿⣿⢿⢓⣻⡗⣾⢼⡿⡿⣆⣾⣾⣯⣗⢿⠃⡻⣿⡿⣟⢚⣧⎥
⎢⢯⣿⣻⡻⢟⣵⢿⣟⣇⣾⣵⣿⡿⡷⣿⣿⢿⣿⢿⢟⣿⡽⢿⣾⢟⣾⣷⣫⣾⣿⣿⡾⣯⣾⣟⡿⡷⣿⣿⠿⎥
⎢⡭⣞⣼⣿⣿⢷⡿⣯⣧⣿⢿⣾⣷⣷⡫⣧⣻⣻⣵⢾⡹⣻⡿⣿⢿⣿⣹⣾⣿⣿⣶⡟⣻⣌⣿⣮⣷⡏⣿⡇⎥
⎢⡊⢿⢿⣝⣩⣽⣭⣿⣿⢟⣻⣿⡷⢿⢽⣯⣿⢾⣼⡻⢽⣼⢯⡻⣽⢿⡿⣟⣿⣿⠿⣿⣿⣽⡷⣿⣽⡯⣧⣿⎥
⎢⣯⡶⣼⣾⣵⣿⣻⣷⣿⣾⡛⣬⣛⣿⢻⣽⠽⣯⣿⣳⣙⣿⣾⣝⣿⣴⡿⣿⢳⣽⣻⡯⣯⡿⢧⣻⣾⣏⣻⢗⎥
⎢⢻⣻⣽⣿⢿⡯⢽⣿⣽⣏⣿⣼⡿⣯⣞⣷⡿⣹⡽⣗⣾⣟⢵⡭⣿⡯⣷⣹⣧⣻⢿⣷⣳⣳⠯⣿⣯⣷⣾⡝⎥
⎢⣡⣶⣽⠾⣿⣿⠯⣮⡷⣷⣟⣶⢾⣽⣻⣾⣷⣷⢿⣿⣾⣳⣿⣿⢵⣾⢿⣾⣻⣦⢯⣞⣿⣿⣿⢿⣟⣿⣟⣜⎥
⎢⢺⣯⣷⡶⣿⣷⣿⣺⣻⣟⡷⣧⣟⣫⢽⣿⡻⣮⣿⣳⢻⣿⣷⢿⢟⣞⣯⣷⣻⣾⣓⣩⣿⣟⢿⣾⣟⡮⣺⣿⎥
⎢⢴⣾⣿⣟⣿⢗⣱⣟⣶⡻⢿⣻⢷⢯⣿⣷⢿⣻⢿⢗⣚⣞⣾⣒⣿⣷⣖⡷⣿⡿⡼⣿⣿⣾⣿⣯⣿⣖⢺⡹⎥
⎢⣩⣯⣽⣰⣟⡿⣷⣪⣓⣷⣷⣼⣾⢿⢾⣻⣿⣶⣺⢼⡿⣯⣿⢿⣗⣝⣹⣿⢷⣷⡾⡷⢟⢿⣷⣲⡯⣞⣿⣿⎥
⎢⣵⣿⣹⣭⣻⣷⣿⣯⣯⡳⣞⢿⡕⡷⣿⣿⣽⣟⢺⢻⣿⣟⣿⣿⢯⣟⣟⢶⣾⣻⣻⣿⣷⢯⣿⣟⡟⣿⡽⡻⎥
⎢⢞⣶⣶⡷⣻⣵⣿⣷⣷⣟⢛⣿⡿⡿⣱⣷⣻⢵⢿⣿⣝⢽⣯⢷⣿⣿⣽⣟⣻⣿⢿⠏⣹⣦⡿⢶⣿⣏⡿⢇⎥
⎢⣍⢻⠻⢯⡽⣻⣳⣾⣿⢯⣿⣯⣝⣻⣻⣷⢯⣿⢼⡽⣷⣾⢻⣝⣷⢿⣟⣽⡻⣭⣛⣽⢿⡿⡿⣳⣾⣗⣷⡻⎥
⎢⣿⣟⣺⣿⣾⣿⣿⣿⣿⣿⣝⣶⣭⣻⠻⣾⣻⣾⣿⡿⢽⣷⣾⣻⣿⣾⡟⣮⣻⣾⣝⡷⢷⡟⣧⣾⣿⡯⢿⣲⎥
⎢⡻⡽⢯⢿⣻⡿⣼⠿⣿⣧⡿⡾⢿⣷⣫⢷⡝⣸⣶⣯⢾⡯⣿⣾⡿⠗⣟⣼⢷⡽⣿⣿⣹⣿⣗⡿⣦⣷⣿⣎⎥
⎢⢴⢷⠿⠓⣫⣿⡛⢾⣟⣿⣯⡿⢽⣺⣿⣿⣿⢿⣻⣿⣿⣕⡽⣟⠳⣾⣿⡷⣽⠷⣷⣾⣿⣿⣿⢿⣿⣿⣯⣶⎥
⎢⢹⣿⣿⣮⣿⡽⡻⣿⣽⣯⣭⣳⣯⣧⣿⣟⣻⣷⡿⣿⢹⣻⣿⢿⢻⣏⢿⣫⣩⣿⣽⡽⣿⣟⣵⣿⣿⡷⣽⣷⎥
⎢⣾⣿⣿⢯⣽⣯⡽⠿⡷⡿⡾⢿⢯⣿⣿⣽⡻⡽⢻⢿⣫⢯⣿⣭⡿⢿⢾⢿⡿⡿⢬⣿⣿⣿⢿⡿⣵⣿⣹⣝⎥
⎣⠵⡮⠾⣴⣿⡟⠿⠿⣭⣿⢿⢞⣞⠿⣛⢽⣾⣾⣞⡲⣿⣿⣷⡫⠿⢏⣽⡻⢻⣳⡻⢿⢫⣿⢷⣿⣗⢾⣿⣿⎦

This matrix is at the core of the powerflow algorithm. At each iteration, the AutoDiff backend updates the nonzero values of the sparse Jacobian jx.J, and the associated linear system is solved to compute the next descent direction.

The procedure is implemented in the nlsolve! function, which uses a Newton-Raphson algorithm to solve the powerflow equations. The Newton-Raphson algorithm is specified as:

julia> pf_algo = NewtonRaphson(; verbose=1, tol=1e-10)
NewtonRaphson(20, 1.0e-10, 1)
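
The three values displayed correspond to the maximum number of iterations (20 by default), the tolerance on the residual norm, and the verbosity level. They can all be set through keyword arguments; for instance (the maxiter keyword name is assumed from the field displayed above):

julia> quiet_algo = NewtonRaphson(; maxiter=50, tol=1e-10, verbose=0);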

Then, we can solve the powerflow equations simply with

julia> convergence = ExaPF.nlsolve!(pf_algo, jx, stack; linear_solver=linear_solver)
#it 0: 1.15103e+02
#it 1: 1.50328e+01
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 1.57368e-11
Power flow has converged: true
  * #iterations: 5
  * Time Jacobian (s) ........: 0.0083
  * Time linear solver (s) ...: 0.0035
     * update (s) ............: 0.0030
     * ldiv (s) ..............: 0.0005
  * Time total (s) ...........: 0.0125

Here, the algorithm solves the powerflow equations in 5 iterations. It modifies the values of stack in-place, to avoid any unnecessary memory allocations.
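
Once the algorithm has converged, the solution can be read back directly from stack. For instance, the bus voltage magnitudes and angles are stored in the vmag and vang fields (field names assumed from ExaPF's NetworkStack):

julia> vmag = stack.vmag;   # voltage magnitudes at the buses (p.u.)
julia> vang = stack.vang;   # voltage angles at the buses (rad)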

How to offload the computation to the GPU?

Now, how can we offload the resolution to the GPU? The procedure looks exactly the same. It suffices to instantiate a new ExaPF.PolarForm object, but on the GPU:

julia> polar_gpu = ExaPF.PolarForm(pf, CUDABackend())
Polar formulation (instantiated on device CUDA.CUDAKernels.CUDABackend(false, false))
Network characteristics:
    #buses:      1354  (#slack: 1  #PV: 259  #PQ: 1094)
    #generators: 260
    #lines:      1991
giving a mathematical formulation with:
    #controls:   519
    #states  :   2447

polar_gpu will load all the structures it needs on the GPU, to avoid unnecessary data movement between the host and the device. We can load the other structures directly on the GPU with:

julia> stack_gpu = ExaPF.NetworkStack(polar_gpu)
2968-elements NetworkStack{CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}
julia> basis_gpu = ExaPF.PolarBasis(polar_gpu)
PolarBasis (AbstractExpression)
julia> pflow_gpu = ExaPF.PowerFlowBalance(polar_gpu) ∘ basis_gpu
ExaPF.ComposedExpressions{ExaPF.PolarBasis{CUDA.CuArray{Int64, 1}, CUDA.CUSPARSE.CuSparseMatrixCSR{Float64, Int32}}, ExaPF.PowerFlowBalance{CUDA.CuArray{Float64, 1}, CUDA.CUSPARSE.CuSparseMatrixCSR{Float64, Int32}}}(PolarBasis (AbstractExpression), PowerFlowBalance (AbstractExpression))
julia> jx_gpu = ExaPF.Jacobian(polar_gpu, pflow_gpu, mapx)
A AutoDiff Jacobian for ExaPF.ComposedExpressions{ExaPF.PolarBasis{CUDA.CuArray{Int64, 1}, CUDA.CUSPARSE.CuSparseMatrixCSR{Float64, Int32}}, ExaPF.PowerFlowBalance{CUDA.CuArray{Float64, 1}, CUDA.CUSPARSE.CuSparseMatrixCSR{Float64, Int32}}}(PolarBasis (AbstractExpression), PowerFlowBalance (AbstractExpression))
Number of Jacobian colors: 25
julia> linear_solver = LS.DirectSolver(jx_gpu.J)
ExaPF.LinearSolvers.DirectSolver{Nothing}(nothing)

Then, solving the powerflow equations on the GPU directly translates to

julia> convergence = ExaPF.nlsolve!(pf_algo, jx_gpu, stack_gpu; linear_solver=linear_solver)
#it 0: 1.15103e+02
#it 1: 1.50328e+01
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 9.96622e-12
Power flow has converged: true
  * #iterations: 5
  * Time Jacobian (s) ........: 3.1312
  * Time linear solver (s) ...: 0.2993
     * update (s) ............: 0.0000
     * ldiv (s) ..............: 0.2993
  * Time total (s) ...........: 4.5016

Note that we get exactly the same iterations as when we solve the power flow equations on the CPU.
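
Since stack_gpu lives on the device, its fields are CUDA arrays. To inspect the solution on the host, copy it back with Array (standard CUDA.jl; the vmag field name is the same assumption as in the CPU section):

julia> vmag_gpu = Array(stack_gpu.vmag);              # copy the GPU solution back to the host
julia> err = maximum(abs.(vmag_gpu .- stack.vmag));   # small, since both solves converged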

How to solve the linear system with BICGSTAB?

By default, the algorithm runs with a direct solver, which might be inefficient for large problems. To overcome this issue, ExaPF implements a wrapper for different iterative algorithms (GMRES, BICGSTAB).

The performance of iterative solvers is usually improved if we use a preconditioner. ExaPF.jl implements an overlapping Schwarz preconditioner, tailored for GPU usage. To build an instance with 8 blocks, just write

julia> npartitions = 8;
julia> jac_gpu = jx_gpu.J;
julia> precond = BlockJacobiPreconditioner(jac_gpu, npartitions, CUDABackend());

Depending on the ExaPF version, BlockJacobiPreconditioner may be provided by the companion package KrylovPreconditioners.jl rather than re-exported by ExaPF; if the line above raises an UndefVarError, load that package first with using KrylovPreconditioners.

You can attach the preconditioner to a BICGSTAB algorithm simply as

julia> linear_solver = ExaPF.Bicgstab(jac_gpu; P=precond);

(this will use the BICGSTAB algorithm implemented in Krylov.jl).

We need to update the tolerance of the Newton-Raphson algorithm accordingly (the iterative solver is less accurate than the direct solver):

julia> pf_algo = NewtonRaphson(; verbose=1, tol=1e-7)
NewtonRaphson(20, 1.0e-7, 1)

We reset the variables to their initial values:

julia> ExaPF.init!(polar_gpu, stack_gpu)

Then, solving the power flow with the iterative solvers directly translates to one call to nlsolve!:

julia> convergence = ExaPF.nlsolve!(pf_algo, jx_gpu, stack_gpu; linear_solver=linear_solver)
#it 0: 1.15103e+02
#it 1: 1.50328e+01
#it 2: 5.88242e-01
#it 3: 4.88493e-03
#it 4: 1.39924e-06
#it 5: 9.96622e-12
Power flow has converged: true
  * #iterations: 5
  * Time Jacobian (s) ........: 0.0042
  * Time linear solver (s) ...: 0.1512
     * update (s) ............: 0.0000
     * ldiv (s) ..............: 0.1512
  * Time total (s) ...........: 0.1580