#include "petscsf.h"
PetscErrorCode PetscSFSetGraphWithPattern(PetscSF sf, PetscLayout map, PetscSFPattern pattern)

Collective
sf      - The PetscSF
map     - Layout of roots over all processes (insignificant when pattern is PETSCSF_PATTERN_ALLTOALL)
pattern - One of PETSCSF_PATTERN_ALLGATHER, PETSCSF_PATTERN_GATHER, PETSCSF_PATTERN_ALLTOALL
Suppose x is a parallel vector whose layout is given by map. With PETSCSF_PATTERN_ALLGATHER, the routine creates a graph such that a Bcast on it copies x to a sequential vector y on every rank.
With PETSCSF_PATTERN_GATHER, the routine creates a graph such that a Bcast on it copies x to a sequential vector y on rank 0 only.
In both cases, entries of x are roots and entries of y are leaves.
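A minimal sketch of the ALLGATHER usage described above. The helper name AllGatherExample and its arguments are hypothetical, and the sketch assumes a recent PETSc: PetscCall()/PETSC_SUCCESS error handling and the trailing MPI_Op argument of PetscSFBcastBegin()/End(); older releases use CHKERRQ() and omit the MPI_Op.

/* Sketch: replicate a distributed array onto every rank with PETSCSF_PATTERN_ALLGATHER */
#include <petscsf.h>

PetscErrorCode AllGatherExample(MPI_Comm comm, PetscInt nlocal, const PetscScalar *rootdata)
{
  PetscSF      sf;
  PetscLayout  map;
  PetscInt     N;
  PetscScalar *leafdata;

  PetscFunctionBeginUser;
  PetscCall(PetscLayoutCreate(comm, &map));
  PetscCall(PetscLayoutSetLocalSize(map, nlocal)); /* this rank owns nlocal roots */
  PetscCall(PetscLayoutSetUp(map));
  PetscCall(PetscLayoutGetSize(map, &N));          /* global number of roots */

  PetscCall(PetscSFCreate(comm, &sf));
  PetscCall(PetscSFSetGraphWithPattern(sf, map, PETSCSF_PATTERN_ALLGATHER));

  PetscCall(PetscMalloc1(N, &leafdata));           /* every rank receives all N entries */
  PetscCall(PetscSFBcastBegin(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, MPIU_SCALAR, rootdata, leafdata, MPI_REPLACE));

  /* ... leafdata now holds the concatenation of all ranks' rootdata ... */

  PetscCall(PetscFree(leafdata));
  PetscCall(PetscSFDestroy(&sf));
  PetscCall(PetscLayoutDestroy(&map));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Replacing PETSCSF_PATTERN_ALLGATHER with PETSCSF_PATTERN_GATHER in the same sketch would place the concatenated data on rank 0 only.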
With PETSCSF_PATTERN_ALLTOALL, map is insignificant. Let NP be the size of sf's communicator. The routine creates a graph in which every rank has NP leaves and NP roots; on rank i, leaf j is connected to root i of rank j, for 0 <= i, j < NP. This is a kind of MPI_Alltoall with sendcount/recvcount equal to 1. That does not mean one cannot send multiple items: simply build an MPI datatype for the multiple data items with MPI_Type_contiguous() and pass it as the <unit> argument to the SF routines.
In this case, roots and leaves are symmetric.
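A hedged sketch of the ALLTOALL pattern combined with MPI_Type_contiguous(), exchanging bs scalars between every pair of ranks. The helper name AllToAllExample, the block size bs, passing NULL for the insignificant map argument, and the recent-PETSc error macros and MPI_Op argument are all assumptions of this sketch.

/* Sketch: all-to-all exchange of bs PetscScalars per rank pair via PETSCSF_PATTERN_ALLTOALL */
PetscErrorCode AllToAllExample(MPI_Comm comm, PetscInt bs)
{
  PetscSF      sf;
  PetscMPIInt  size;
  MPI_Datatype unit;
  PetscScalar *rootdata, *leafdata;

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_size(comm, &size));

  PetscCall(PetscSFCreate(comm, &sf));
  /* map is insignificant for this pattern; NULL here is an assumption of this sketch */
  PetscCall(PetscSFSetGraphWithPattern(sf, NULL, PETSCSF_PATTERN_ALLTOALL));

  /* NP roots and NP leaves per rank; each root/leaf carries bs scalars through <unit> */
  PetscCall(PetscMalloc2(size * bs, &rootdata, size * bs, &leafdata));
  /* ... fill rootdata: entries [j*bs, (j+1)*bs) are destined for rank j ... */

  PetscCallMPI(MPI_Type_contiguous((PetscMPIInt)bs, MPIU_SCALAR, &unit));
  PetscCallMPI(MPI_Type_commit(&unit));

  /* Bcast moves root i of rank j to leaf j of rank i, i.e. MPI_Alltoall with count bs */
  PetscCall(PetscSFBcastBegin(sf, unit, rootdata, leafdata, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, unit, rootdata, leafdata, MPI_REPLACE));

  PetscCallMPI(MPI_Type_free(&unit));
  PetscCall(PetscFree2(rootdata, leafdata));
  PetscCall(PetscSFDestroy(&sf));
  PetscFunctionReturn(PETSC_SUCCESS);
}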