#include "petscsys.h" PetscErrorCode PetscHMPISpawn(PetscMPIInt nodesize)Not Collective (could make collective on MPI_COMM_WORLD, generate one huge comm and then split it up)
Comparison of two approaches for HMPI usage (MPI started with N processes)
-hmpi_spawn_size <n> requires MPI-2; it results in n*N total processes, with N used directly by the application code
and n-1 worker processes (used by PETSc) for each application node.
You MUST launch MPI so that only ONE MPI process is created for each hardware node.
-hmpi_merge_size <n> results in N total processes, with N/n used by the application code and the rest as worker processes
(used by PETSc)
You MUST launch MPI so that n MPI processes are created for each hardware node.
petscmpiexec -n 2 ./ex1 -hmpi_spawn_size 3 gives 2 application nodes (and 4 PETSc worker nodes)
petscmpiexec -n 6 ./ex1 -hmpi_merge_size 3 gives the SAME 2 application nodes and 4 PETSc worker nodes
This is what one would use if each of the computer's hardware nodes had 3 CPUs.
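For concreteness, a minimal PETSc main program that could stand in for ./ex1 in the launch lines above is sketched here (an illustrative sketch only, not the actual ex1 example); the -hmpi_spawn_size and -hmpi_merge_size options are read from the options database when PETSc initializes.

   #include <petscsys.h>

   int main(int argc, char **argv)
   {
     PetscErrorCode ierr;

     /* The -hmpi_spawn_size / -hmpi_merge_size options given on the command
        line are handled during PETSc initialization. */
     ierr = PetscInitialize(&argc, &argv, NULL, NULL);
     if (ierr) return ierr;

     /* From here on, this code runs only on PETSC_COMM_WORLD: one process
        per hardware node; the remaining processes are PETSc workers. */
     ierr = PetscPrintf(PETSC_COMM_WORLD, "Running on the application processes\n");CHKERRQ(ierr);

     ierr = PetscFinalize();
     return ierr;
   }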
These are intended to be used in conjunction with USER HMPI code. The user will have one process per
computer (hardware) node (where the computer node has p CPUs); the user's code will use threads to fully
utilize all the CPUs on the node. The PETSc code will have p processes to fully use the compute node for
PETSc calculations. The user THREADS and PETSc PROCESSES will NEVER run at the same time, so the p CPUs
are always working on p tasks, never more than p.
See PCHMPI for a PETSc preconditioner that can use this functionality
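As a hedged illustration of how that preconditioner might be selected (assuming a PETSc version that still provides PCHMPI; the matrix A and vectors b, x are assumed to already exist on PETSC_COMM_WORLD):

   #include <petscksp.h>

   PetscErrorCode SolveWithHMPI(Mat A, Vec b, Vec x)
   {
     KSP            ksp;
     PC             pc;
     PetscErrorCode ierr;

     ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
     ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);  /* older releases also take a MatStructure flag */
     ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
     ierr = PCSetType(pc, PCHMPI);CHKERRQ(ierr);       /* or, presumably, -pc_type hmpi from the command line */
     ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
     ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
     ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
     return 0;
   }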
For both PetscHMPISpawn() and PetscHMPIMerge() PETSC_COMM_WORLD consists of one process per "node", PETSC_COMM_LOCAL_WORLD consists of all the processes in a "node."
In both cases the user's code is running ONLY on PETSC_COMM_WORLD (that was newly generated by running this command).
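A small sketch of how this layout could be inspected from the application code (assuming an HMPI-enabled build where PETSC_COMM_LOCAL_WORLD is available after including petscsys.h):

   #include <petscsys.h>

   PetscErrorCode ReportCommLayout(void)
   {
     PetscMPIInt    grank, gsize, lrank, lsize;
     PetscErrorCode ierr;

     /* Rank/size among the application processes (one per hardware node) */
     ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &grank);CHKERRQ(ierr);
     ierr = MPI_Comm_size(PETSC_COMM_WORLD, &gsize);CHKERRQ(ierr);
     /* Rank/size within this node, including the PETSc worker processes */
     ierr = MPI_Comm_rank(PETSC_COMM_LOCAL_WORLD, &lrank);CHKERRQ(ierr);
     ierr = MPI_Comm_size(PETSC_COMM_LOCAL_WORLD, &lsize);CHKERRQ(ierr);
     ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD, "world %d/%d  local %d/%d\n", grank, gsize, lrank, lsize);CHKERRQ(ierr);
     ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD);CHKERRQ(ierr);  /* newer releases take an extra FILE* argument */
     return 0;
   }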
Level: developer
Location: src/sys/objects/mpinit.c