#include "petscvec.h" PetscErrorCode VecScatterCreate(Vec xin,IS ix,Vec yin,IS iy,VecScatter *newctx)Collective on Vec
xin - a vector that defines the shape (parallel data layout of the vector) of vectors from which we scatter
yin - a vector that defines the shape (parallel data layout of the vector) of vectors to which we scatter
ix  - the indices of xin to scatter (if NULL, scatters all values)
iy  - the indices of yin to hold results (if NULL, fills entire vector yin)
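For illustration, a minimal sketch of creating and applying a scatter from a parallel vector into a sequential vector. The vector sizes and index values here are arbitrary assumptions, and PetscCall() assumes a recent PETSc release (older releases use ierr = ...; CHKERRQ(ierr) instead):

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec        x, y;
  IS         ix, iy;
  VecScatter scatter;
  PetscInt   idx[3] = {0, 2, 4}, idy[3] = {0, 1, 2};

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 6, &x)); /* parallel source vector */
  PetscCall(VecCreateSeq(PETSC_COMM_SELF, 3, &y));                /* sequential destination vector */
  /* global entries 0,2,4 of x go into local entries 0,1,2 of y on each rank */
  PetscCall(ISCreateGeneral(PETSC_COMM_SELF, 3, idx, PETSC_COPY_VALUES, &ix));
  PetscCall(ISCreateGeneral(PETSC_COMM_SELF, 3, idy, PETSC_COPY_VALUES, &iy));
  PetscCall(VecScatterCreate(x, ix, y, iy, &scatter));
  /* the context can be reused with any vectors having the same layouts as x and y */
  PetscCall(VecScatterBegin(scatter, x, y, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecScatterEnd(scatter, x, y, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecScatterDestroy(&scatter));
  PetscCall(ISDestroy(&ix));
  PetscCall(ISDestroy(&iy));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&y));
  PetscCall(PetscFinalize());
  return 0;
}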
-vecscatter_view - Prints details of the communications
-vecscatter_view ::ascii_info - Prints fewer details about the communications
-vecscatter_merge - VecScatterBegin() handles all of the communication and VecScatterEnd() is a no-op; this eliminates the chance to overlap computation and communication
-vecscatter_packtogether - Pack all messages before sending and receive all messages before unpacking; this makes the results of scatters deterministic when they otherwise are not (it may also be slower)
-vecscatter_type sf - Use the PetscSF implementation of VecScatter (the default); PetscSF options can be used to control the communication
-vecscatter_packongpu - For GPU vectors, pack the needed entries on the GPU, copy the packed data to the CPU, then do the MPI communication; otherwise a contiguous segment encompassing the needed entries may be copied. The default is TRUE.
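These keys are read from the options database, so they are normally given on the command line. As a sketch (assuming the usual options-database behavior), they can also be set programmatically with PetscOptionsSetValue() before the scatter is created:

/* equivalent to passing the keys on the command line; set before VecScatterCreate() */
PetscCall(PetscOptionsSetValue(NULL, "-vecscatter_packtogether", NULL));
PetscCall(PetscOptionsSetValue(NULL, "-vecscatter_view", "::ascii_info"));
PetscCall(VecScatterCreate(x, ix, y, iy, &scatter));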
Currently the MPI_Send() calls use PERSISTENT versions. (This unfortunately requires that the same in and out arrays be used for each use, which is why we always pack the input into a work array before sending and unpack upon receiving, rather than using MPI datatypes to avoid the packing/unpacking.)
ix and iy cannot both be NULL at the same time.
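For example, continuing the sketch above (sizes again assumed), passing NULL for iy fills the destination vector in order:

/* take global entries 0..2 of the parallel vector x and fill all of the
   3-entry sequential vector y; iy is NULL, so y is filled from the start */
PetscCall(ISCreateStride(PETSC_COMM_SELF, 3, 0, 1, &ix));
PetscCall(VecScatterCreate(x, ix, y, NULL, &scatter));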
Use VecScatterCreateToAll() to create a vecscatter that copies an MPI vector to sequential vectors on all MPI ranks. Use VecScatterCreateToZero() to create a vecscatter that copies an MPI vector to a sequential vector on MPI rank 0. These special vecscatters have better performance than general ones.
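A sketch of the rank-0 case (the global size is an arbitrary assumption; vseq is created by the call, with full length on rank 0 and length zero on the other ranks):

Vec        vpar, vseq;
VecScatter tozero;

PetscCall(VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &vpar));
PetscCall(VecScatterCreateToZero(vpar, &tozero, &vseq));
PetscCall(VecScatterBegin(tozero, vpar, vseq, INSERT_VALUES, SCATTER_FORWARD));
PetscCall(VecScatterEnd(tozero, vpar, vseq, INSERT_VALUES, SCATTER_FORWARD));
/* vseq now holds a copy of vpar on rank 0 */
PetscCall(VecScatterDestroy(&tozero));
PetscCall(VecDestroy(&vseq));
PetscCall(VecDestroy(&vpar));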