PetscSFPattern#
Pattern of the PetscSF graph
Synopsis#
typedef enum {
PETSCSF_PATTERN_GENERAL = 0,
PETSCSF_PATTERN_ALLGATHER,
PETSCSF_PATTERN_GATHER,
PETSCSF_PATTERN_ALLTOALL
} PetscSFPattern;
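The enum in the Synopsis is consumed by PetscSFSetGraphWithPattern(). A minimal sketch of typical usage, assuming standard PETSc APIs (PetscSFCreate(), PetscLayout, PetscCall()) and a working PETSc installation; not a verbatim PETSc example:

```c
/* Sketch: build a star forest with the ALLGATHER pattern.
   Requires PETSc; compile with the usual PETSc makefile rules. */
#include <petscsf.h>

int main(int argc, char **argv)
{
  PetscSF     sf;
  PetscLayout map;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(PetscSFCreate(PETSC_COMM_WORLD, &sf));
  /* Layout describing the root space: here, 2 roots per rank */
  PetscCall(PetscLayoutCreate(PETSC_COMM_WORLD, &map));
  PetscCall(PetscLayoutSetLocalSize(map, 2));
  PetscCall(PetscLayoutSetUp(map));
  /* Every rank gathers all roots, as with MPI_Allgather() */
  PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLGATHER));
  PetscCall(PetscSFSetUp(sf));
  PetscCall(PetscLayoutDestroy(&map));
  PetscCall(PetscSFDestroy(&sf));
  PetscCall(PetscFinalize());
  return 0;
}
```

With a pattern other than PETSCSF_PATTERN_GENERAL, the graph is implied by the layout and the pattern, so no explicit leaf-to-root list is supplied as it would be with PetscSFSetGraph().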
Values#
PETSCSF_PATTERN_GENERAL
- A general graph. One sets the graph with PetscSFSetGraph() and usually does not use this enum directly.
PETSCSF_PATTERN_ALLGATHER
- A graph in which every rank gathers all roots from all ranks (like MPI_Allgather()). One sets the graph with PetscSFSetGraphWithPattern().
PETSCSF_PATTERN_GATHER
- A graph in which rank 0 gathers all roots from all ranks (like MPI_Gatherv() with root=0). One sets the graph with PetscSFSetGraphWithPattern().
PETSCSF_PATTERN_ALLTOALL
- A graph in which every rank gathers different roots from all ranks (like MPI_Alltoall()). One sets the graph with PetscSFSetGraphWithPattern(). In an ALLTOALL graph, we assume each process has n leaves and n roots, with each leaf connecting to a remote root. Here n is the size of the communicator. This does not mean one cannot communicate multiple data items between a pair of processes; one just needs to create a new MPI datatype for the multiple data items, e.g., with MPI_Type_contiguous().
See Also#
PetscSF, PetscSFSetGraph(), PetscSFSetGraphWithPattern()
Level#
beginner
Location#