:orphan:
# MatCreateAIJCUSPARSE
Creates a sparse matrix in `MATAIJCUSPARSE` (compressed row) format (the default parallel PETSc format). This matrix will ultimately be pushed down to NVIDIA GPUs and use the cuSPARSE library for calculations.
## Synopsis
```
#include "petscmat.h"
PetscErrorCode MatCreateAIJCUSPARSE(MPI_Comm comm, PetscInt m, PetscInt n, PetscInt M, PetscInt N, PetscInt d_nz, const PetscInt d_nnz[], PetscInt o_nz, const PetscInt o_nnz[], Mat *A)
```
Collective
## Input Parameters
- ***comm -*** MPI communicator
- ***m -*** number of local rows (or `PETSC_DECIDE` to have calculated if `M` is given).
This value should be the same as the local size used in creating the
y vector for the matrix-vector product y = Ax.
- ***n -*** number of local columns (or `PETSC_DECIDE` to have calculated if `N` is given).
This value should be the same as the local size used in creating the
x vector for the matrix-vector product y = Ax. For square matrices `n` is almost always `m`.
- ***M -*** number of global rows (or `PETSC_DETERMINE` to have calculated if `m` is given)
- ***N -*** number of global columns (or `PETSC_DETERMINE` to have calculated if `n` is given)
- ***d_nz -*** number of nonzeros per row in DIAGONAL portion of local submatrix
(same value is used for all local rows)
- ***d_nnz -*** array containing the number of nonzeros in the various rows of the
DIAGONAL portion of the local submatrix (possibly different for each row),
or `NULL` if `d_nz` is used to specify the nonzero structure.
The size of this array is equal to the number of local rows, i.e. `m`.
For matrices you plan to factor you must leave room for the diagonal entry and
put in the entry even if it is zero.
- ***o_nz -*** number of nonzeros per row in the OFF-DIAGONAL portion of local
submatrix (same value is used for all local rows).
- ***o_nnz -*** array containing the number of nonzeros in the various rows of the
OFF-DIAGONAL portion of the local submatrix (possibly different for
each row), or `NULL` if `o_nz` is used to specify the nonzero
structure. The size of this array is equal to the number
of local rows, i.e. `m`.
## Output Parameter
- ***A -*** the matrix
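For illustration only, a minimal sketch of a direct call for a square matrix; the local size and the per-row nonzero counts used below are assumptions, not values taken from this page.
```
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat      A;
  PetscInt nlocal = 100; /* assumed local number of rows and columns on each rank */

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* Square matrix: global sizes determined from the local sizes; at most 5
     nonzeros per row in the diagonal block and 2 in the off-diagonal block */
  PetscCall(MatCreateAIJCUSPARSE(PETSC_COMM_WORLD, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE,
                                 5, NULL, 2, NULL, &A));
  /* ... insert values with MatSetValues(), then MatAssemblyBegin()/MatAssemblyEnd() ... */
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}
```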
## Notes
It is recommended that one use the `MatCreate()`, `MatSetType()` and/or `MatSetFromOptions()`,
`MatXXXXSetPreallocation()` paradigm instead of this routine directly.
[`MatXXXXSetPreallocation()` is, for example, `MatSeqAIJSetPreallocation()`]
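A minimal sketch of that recommended paradigm follows; the helper name and the preallocation counts are hypothetical placeholders, not part of the PETSc API.
```
#include <petscmat.h>

/* Hypothetical helper illustrating the recommended creation paradigm; the
   preallocation counts (5 diagonal-block and 2 off-diagonal-block nonzeros
   per row) are placeholders. */
static PetscErrorCode CreateCusparseMatrix(MPI_Comm comm, PetscInt m, PetscInt n, Mat *A)
{
  PetscFunctionBeginUser;
  PetscCall(MatCreate(comm, A));
  PetscCall(MatSetSizes(*A, m, n, PETSC_DETERMINE, PETSC_DETERMINE));
  PetscCall(MatSetType(*A, MATAIJCUSPARSE));
  PetscCall(MatSetFromOptions(*A)); /* allows runtime overrides such as -mat_type */
  PetscCall(MatSeqAIJSetPreallocation(*A, 5, NULL));          /* used on a single rank */
  PetscCall(MatMPIAIJSetPreallocation(*A, 5, NULL, 2, NULL)); /* used when run in parallel */
  PetscFunctionReturn(PETSC_SUCCESS);
}
```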
The AIJ format (also called compressed row storage) is fully compatible with standard Fortran
storage. That is, the stored row and column indices can begin at
either one (as in Fortran) or zero.
## See Also
[](ch_matrices), `Mat`, `MATAIJCUSPARSE`, `MATMPIAIJCUSPARSE`, `MatCreate()`, `MatCreateAIJ()`, `MatSetValues()`, `MatSeqAIJSetColumnIndices()`, `MatCreateSeqAIJWithArrays()`
## Level
intermediate
## Location
src/mat/impls/aij/mpi/mpicusparse/mpiaijcusparse.cu
## Examples
src/mat/tutorials/ex5cu.cu