# NLPModels
Knowledgeable users may have noticed that the `Argos.AbstractNLPEvaluator` API is close to NLPModels'. Hence, it is straightforward to wrap any `AbstractNLPEvaluator` in a `NLPModels.AbstractNLPModel` structure.
## Initializing
In Argos, this is provided by the `OPFModel` structure, which takes as input any `AbstractNLPEvaluator` and converts it into an `NLPModels.AbstractNLPModel`.
```julia
using NLPModels
using Argos

# Import OPF model in Argos
datafile = joinpath(INSTANCES_DIR, "case57.m")
flp = Argos.FullSpaceEvaluator(datafile)

# Convert it to an AbstractNLPModel:
model = Argos.OPFModel(flp)
@assert isa(model, NLPModels.AbstractNLPModel)
```
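Since `OPFModel` subtypes `AbstractNLPModel`, the standard NLPModels metadata travels with it. The following sketch shows how to inspect the problem's dimensions and bounds (the exact values depend on the instance):

```julia
# The NLPModels metadata stores the problem's dimensions and bounds:
meta = model.meta
meta.nvar   # number of variables
meta.ncon   # number of constraints
meta.x0     # initial iterate
meta.lvar   # lower bounds on the variables
meta.uvar   # upper bounds on the variables
```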
## Playing with NLPModels' API
The user can use the standard NLPModels API to interact with the OPF model:
- Querying the number of variables:
  ```julia-repl
  julia> n = NLPModels.get_nvar(model)
  119
  ```
- Querying the initial variable `x0`:
  ```julia-repl
  julia> x0 = NLPModels.get_x0(model)
  119-element Vector{Float64}:
   -0.020594885173533087
   -0.10419615634406146
   -0.15097098029750955
   -0.07766715171374768
   -0.1668534764906579
   -0.18256143975860686
   -0.12775810124598497
   -0.14870205226991687
   -0.1322959573011702
   -0.19949113350295183
    ⋮
    1.005
    0.98
    1.015
    0.0
    0.4
    0.0
    4.5
    0.0
    3.1
  ```
- Evaluating the objective:
  ```julia-repl
  julia> NLPModels.obj(model, x0)
  51272.87033037438
  ```
- Evaluating the constraints:
  ```julia-repl
  julia> NLPModels.cons(model, x0)
  274-element Vector{Float64}:
    0.003022401147809717
    0.0030443460146013512
    0.0032197933906044085
    0.0033636219110988463
    0.002076154300374
    0.0001295163545154132
    0.0020171602977807623
   -0.003531140039454428
   -0.0030640689180172487
    0.0015899046184775223
    ⋮
    0.14159347915067355
    0.002641010671943805
    0.0031807469095941577
    0.0004765613725646054
    0.0021715752979237354
    0.0008227688607001616
    0.012748847855148328
    0.06416762549876907
    0.04544570155429488
  ```
- Evaluating the gradient:
  ```julia-repl
  julia> NLPModels.grad(model, x0)
  119-element Vector{Float64}:
   -326563.85442535847
        0.0
        0.0
        0.0
        0.0
        0.0
        0.0
        0.0
        0.0
        0.0
        ⋮
        0.0
        0.0
        0.0
     4000.0
     4000.0
     4000.0
     3999.999998
     4000.0
     3999.999999
  ```
and so on...
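Second-order callbacks follow the same pattern. As a sketch, the sparse Jacobian and Hessian can be queried in COO (coordinate) format through the standard NLPModels functions; the multiplier vector `y0` below is introduced purely for illustration:

```julia
using NLPModels

# Sparse Jacobian of the constraints at x0, in COO format:
jrows, jcols = NLPModels.jac_structure(model)
jvals = NLPModels.jac_coord(model, x0)

# Sparse Hessian of the Lagrangian, for a given vector of
# constraint multipliers y0 (a vector of ones, chosen only
# for illustration):
y0 = ones(NLPModels.get_ncon(model))
hrows, hcols = NLPModels.hess_structure(model)
hvals = NLPModels.hess_coord(model, x0, y0)
```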
## Accelerating the callbacks on an NVIDIA GPU
We can exploit any available NVIDIA GPU to accelerate the evaluation of the derivatives. To do so, one first needs to install [ArgosCUDA](../quickstart/cuda.md).
Then, we can instantiate a new evaluator on the GPU with:
```julia
using ArgosCUDA, CUDAKernels
flp = Argos.FullSpaceEvaluator(datafile; device=CUDADevice())
```
The `OPFModel` structure works exclusively on the host memory, so we have to bridge the evaluator `flp` to the host before creating a new instance of `OPFModel`:
```julia
bridge = Argos.bridge(flp)
model = Argos.OPFModel(bridge)
```
Bridging an evaluator between the host and the device induces significant data movement, as every input and output has to be transferred back and forth between the host and the device. In practice, however, we have observed that the transfer time is negligible compared to the other operations (linear algebra, KKT system solution) performed inside the optimization algorithm.
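One benefit of the NLPModels wrapper is that the resulting model plugs directly into any solver built on NLPModels' API. As a sketch (assuming the MadNLP solver is installed), the model can then be solved with:

```julia
using MadNLP

# Solve the OPF problem; MadNLP accepts any AbstractNLPModel.
results = madnlp(model)
println(results.objective)
```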