:orphan:
# MatGetMultiProcBlock
Create multiple 'parallel submatrices' from a given `Mat`. Each submatrix can span multiple MPI processes.
## Synopsis
```
#include "petscmat.h"
PetscErrorCode MatGetMultiProcBlock(Mat mat, MPI_Comm subComm, MatReuse scall, Mat *subMat)
```
Collective
## Input Parameters
- ***mat -*** the matrix
- ***subComm -*** the sub-communicator, obtained as if by `MPI_Comm_split()` applied to `PetscObjectComm((PetscObject)mat)`
- ***scall -*** either `MAT_INITIAL_MATRIX` or `MAT_REUSE_MATRIX`
## Output Parameter
- ***subMat -*** the parallel submatrix spanning the given `subComm`
## Notes
The submatrix partitioning across processes is dictated by `subComm`, a
communicator obtained by `MPI_Comm_split()` or via `PetscSubcommCreate()`. The `subComm`
is not restricted to grouping consecutive original ranks.
Due to the `MPI_Comm_split()` usage, the parallel layout of each submatrix
maps directly to the layout of the original matrix with respect to the local
row and column partitioning. Hence the original 'diagonal matrix' maps naturally
onto the 'diagonal matrix' of the `subMat` and is used directly, while the
off-diagonal matrix loses some columns, which are reconstructed with `MatSetValues()`.
This routine is used by `PCBJACOBI` when a single block spans multiple MPI ranks.
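As a usage sketch, the typical pattern is to assemble a matrix on `PETSC_COMM_WORLD`, split that communicator into groups, and extract each group's parallel submatrix. The tridiagonal assembly and the two-way split below are illustrative assumptions, not part of this routine's API.
```
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A, subA;
  MPI_Comm    subcomm;
  PetscMPIInt rank, size;
  PetscInt    i, rstart, rend, n = 100;

  PetscFunctionBeginUser;
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));

  /* Assemble a distributed tridiagonal matrix (illustrative) */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  for (i = rstart; i < rend; i++) {
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* Split the ranks into two groups; the color selects the group */
  PetscCallMPI(MPI_Comm_split(PETSC_COMM_WORLD, rank < size / 2 ? 0 : 1, rank, &subcomm));

  /* Extract the parallel submatrix spanning this rank's subcommunicator */
  PetscCall(MatGetMultiProcBlock(A, subcomm, MAT_INITIAL_MATRIX, &subA));

  /* ... use subA, e.g. as one block of a block-Jacobi preconditioner ... */

  /* If A's values change but its nonzero pattern does not, refresh the
     existing submatrix instead of creating a new one:
       PetscCall(MatGetMultiProcBlock(A, subcomm, MAT_REUSE_MATRIX, &subA)); */

  PetscCall(MatDestroy(&subA));
  PetscCallMPI(MPI_Comm_free(&subcomm));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}
```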
## See Also
[](ch_matrices), `Mat`, `MatCreateRedundantMatrix()`, `MatCreateSubMatrices()`, `PCBJACOBI`
## Level
advanced
## Location
src/mat/interface/matrix.c
## Implementations
- `MatGetMultiProcBlock_MPIAIJ` in src/mat/impls/aij/mpi/mpb_aij.c
- `MatGetMultiProcBlock_SeqAIJ` in src/mat/impls/aij/seq/aij.c
- `MatGetMultiProcBlock_MPIBAIJ` in src/mat/impls/baij/mpi/mpb_baij.c