Generic interface

LLᵀ and LLᴴ

LinearAlgebra.cholesky — Method
solver = cholesky(A::CuSparseMatrixCSR{T,INT}; view::Char='F')

Compute the LLᴴ factorization of a sparse matrix A on an NVIDIA GPU. The parameter type T is restricted to Float32, Float64, ComplexF32, or ComplexF64, while INT is restricted to Int32 or Int64.

A can also represent a batch of sparse matrices with a uniform sparsity pattern. The vectors A.rowPtr and A.colVal are identical to those of a single matrix, but the vector A.nzVal is a strided representation of the nonzeros of all matrices. We automatically detect whether we have a uniform batch or a single matrix by computing length(A.nzVal) ÷ length(A.colVal).
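The batch detection described above is a single integer division: the strided A.nzVal holds the nonzeros of all matrices back to back, so dividing its length by the number of nonzeros of one matrix yields the batch size. A minimal CPU-side sketch of this rule (no GPU required; nzval_batch and nbatch are illustrative names, not part of the CUDSS.jl API):

```julia
using LinearAlgebra, SparseArrays

n = 4
A = sprand(Float64, n, n, 0.5) + I   # one sparse matrix (CSC on the CPU)

# A uniform batch of 3 matrices shares A's sparsity pattern; only the
# nonzero values differ, concatenated into one strided vector.
nzval_batch = vcat(A.nzval, 2 .* A.nzval, 3 .* A.nzval)

# Detection rule used above: the ratio of nonzero counts is the batch size.
nbatch = length(nzval_batch) ÷ nnz(A)
println(nbatch)   # 3
```

A ratio of 1 corresponds to a single matrix; any larger integer is interpreted as a uniform batch.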

Input argument

  • A: a sparse Hermitian positive definite matrix stored in the CuSparseMatrixCSR format.

Keyword argument

  • view: a character that specifies which triangle of the sparse matrix is provided. Possible options are L for the lower triangle, U for the upper triangle, and F for the full matrix.

Output argument

  • solver: an opaque structure CudssSolver that stores the factors of the LLᴴ decomposition.
LinearAlgebra.cholesky! — Method
solver = cholesky!(solver::CudssSolver{T,INT}, A::CuSparseMatrixCSR{T,INT})

Compute the LLᴴ factorization of a sparse matrix A or a uniform batch of sparse matrices on an NVIDIA GPU, reusing the symbolic factorization stored in solver. The parameter type T is restricted to Float32, Float64, ComplexF32, or ComplexF64, while INT is restricted to Int32 or Int64.

Note

If we only store one triangle of A, we can also use the wrappers Symmetric and Hermitian instead of the keyword argument view in cholesky. Both wrappers are allowed for real matrices, but only Hermitian can be used for complex matrices.

H = Hermitian(A, :U)
F = cholesky(H)

using CUDA, CUDA.CUSPARSE
using CUDSS
using LinearAlgebra
using SparseArrays

T = ComplexF64
R = real(T)
n = 100
p = 5
A_cpu = sprand(T, n, n, 0.01)
A_cpu = A_cpu * A_cpu' + I
B_cpu = rand(T, n, p)

A_gpu = CuSparseMatrixCSR(A_cpu |> triu)
B_gpu = CuMatrix(B_cpu)
X_gpu = similar(B_gpu)

F = cholesky(A_gpu, view='U')
X_gpu = F \ B_gpu

R_gpu = B_gpu - CuSparseMatrixCSR(A_cpu) * X_gpu
norm(R_gpu)

# In-place LLᴴ
d_gpu = rand(R, n) |> CuVector
A_gpu = A_gpu + Diagonal(d_gpu)
cholesky!(F, A_gpu)

C_cpu = rand(T, n, p)
C_gpu = CuMatrix(C_cpu)
ldiv!(X_gpu, F, C_gpu)

R_gpu = C_gpu - ( CuSparseMatrixCSR(A_cpu) + Diagonal(d_gpu) ) * X_gpu
norm(R_gpu)

LDLᵀ and LDLᴴ

LinearAlgebra.ldlt — Method
solver = ldlt(A::CuSparseMatrixCSR{T,INT}; view::Char='F')

Compute the LDLᴴ factorization of a sparse matrix A on an NVIDIA GPU. The parameter type T is restricted to Float32, Float64, ComplexF32, or ComplexF64, while INT is restricted to Int32 or Int64.

A can also represent a batch of sparse matrices with a uniform sparsity pattern. The vectors A.rowPtr and A.colVal are identical to those of a single matrix, but the vector A.nzVal is a strided representation of the nonzeros of all matrices. We automatically detect whether we have a uniform batch or a single matrix by computing length(A.nzVal) ÷ length(A.colVal).

Input argument

  • A: a sparse Hermitian matrix stored in the CuSparseMatrixCSR format.

Keyword argument

  • view: a character that specifies which triangle of the sparse matrix is provided. Possible options are L for the lower triangle, U for the upper triangle, and F for the full matrix.

Output argument

  • solver: an opaque structure CudssSolver that stores the factors of the LDLᴴ decomposition.
LinearAlgebra.ldlt! — Method
solver = ldlt!(solver::CudssSolver{T,INT}, A::CuSparseMatrixCSR{T,INT})

Compute the LDLᴴ factorization of a sparse matrix A or a uniform batch of sparse matrices on an NVIDIA GPU, reusing the symbolic factorization stored in solver. The parameter type T is restricted to Float32, Float64, ComplexF32, or ComplexF64, while INT is restricted to Int32 or Int64.

Note

If we only store one triangle of A_gpu, we can also use the wrappers Symmetric and Hermitian instead of the keyword argument view in ldlt. Both wrappers are allowed for real matrices, but only Hermitian can be used for complex matrices.

S = Symmetric(A, :L)
F = ldlt(S)

using CUDA, CUDA.CUSPARSE
using CUDSS
using LinearAlgebra
using SparseArrays

T = Float64
R = real(T)
n = 100
p = 5
A_cpu = sprand(T, n, n, 0.05) + I
A_cpu = A_cpu + A_cpu'
B_cpu = rand(T, n, p)

A_gpu = CuSparseMatrixCSR(A_cpu |> tril)
B_gpu = CuMatrix(B_cpu)
X_gpu = similar(B_gpu)

F = ldlt(A_gpu, view='L')
X_gpu = F \ B_gpu

R_gpu = B_gpu - CuSparseMatrixCSR(A_cpu) * X_gpu
norm(R_gpu)

# In-place LDLᵀ
d_gpu = rand(R, n) |> CuVector
A_gpu = A_gpu + Diagonal(d_gpu)
ldlt!(F, A_gpu)

C_cpu = rand(T, n, p)
C_gpu = CuMatrix(C_cpu)
ldiv!(X_gpu, F, C_gpu)

R_gpu = C_gpu - ( CuSparseMatrixCSR(A_cpu) + Diagonal(d_gpu) ) * X_gpu
norm(R_gpu)

LU

LinearAlgebra.lu — Method
solver = lu(A::CuSparseMatrixCSR{T,INT})

Compute the LU factorization of a sparse matrix A on an NVIDIA GPU. The parameter type T is restricted to Float32, Float64, ComplexF32, or ComplexF64, while INT is restricted to Int32 or Int64.

A can also represent a batch of sparse matrices with a uniform sparsity pattern. The vectors A.rowPtr and A.colVal are identical to those of a single matrix, but the vector A.nzVal is a strided representation of the nonzeros of all matrices. We automatically detect whether we have a uniform batch or a single matrix by computing length(A.nzVal) ÷ length(A.colVal).

Input argument

  • A: a sparse square matrix stored in the CuSparseMatrixCSR format.

Output argument

  • solver: an opaque structure CudssSolver that stores the factors of the LU decomposition.
LinearAlgebra.lu! — Method
solver = lu!(solver::CudssSolver{T,INT}, A::CuSparseMatrixCSR{T,INT})

Compute the LU factorization of a sparse matrix A or a uniform batch of sparse matrices on an NVIDIA GPU, reusing the symbolic factorization stored in solver. The parameter type T is restricted to Float32, Float64, ComplexF32, or ComplexF64, while INT is restricted to Int32 or Int64.

using CUDA, CUDA.CUSPARSE
using CUDSS
using LinearAlgebra
using SparseArrays

T = Float64
n = 100
A_cpu = sprand(T, n, n, 0.05) + I
b_cpu = rand(T, n)

A_gpu = CuSparseMatrixCSR(A_cpu)
b_gpu = CuVector(b_cpu)

F = lu(A_gpu)
x_gpu = F \ b_gpu

r_gpu = b_gpu - A_gpu * x_gpu
norm(r_gpu)

# In-place LU
d_gpu = rand(T, n) |> CuVector
A_gpu = A_gpu + Diagonal(d_gpu)
lu!(F, A_gpu)

c_cpu = rand(T, n)
c_gpu = CuVector(c_cpu)
ldiv!(x_gpu, F, c_gpu)

r_gpu = c_gpu - A_gpu * x_gpu
norm(r_gpu)