:orphan:

# PCMPIServerBegin

Starts a server that runs on the MPI ranks other than rank 0, waiting to process requests for parallel `KSP` solves and for the management of parallel `KSP` objects.

## Synopsis

```
PetscErrorCode PCMPIServerBegin(void)
```

Logically Collective on all MPI ranks except 0

## Options Database Keys

- ***-mpi_linear_solver_server -*** causes the PETSc program to start in MPI linear solver server mode, where only the first MPI rank runs user code
- ***-mpi_linear_solver_server_view -*** displays information about the linear systems solved by the MPI linear solver server

## Note

This is normally started automatically in `PetscInitialize()` when the option `-mpi_linear_solver_server` is provided.

## Developer Notes

When called on rank zero this sets `PETSC_COMM_WORLD` to `PETSC_COMM_SELF`, allowing a main program written with `PETSC_COMM_WORLD` to run correctly on that single rank while all the other ranks (which would normally be sharing `PETSC_COMM_WORLD`) run the solver server.

Can this be integrated into the `PetscDevice` abstraction that is currently being developed?

## See Also

`PCMPIServerEnd()`, `PCMPI`

## Level

developer

## Location

src/ksp/pc/impls/mpi/pcmpi.c
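## Example Usage

The following is a minimal sketch (not taken from the PETSc sources) of the kind of main program the server mode is intended for: it is written as if it ran on a single rank with `PETSC_COMM_WORLD`, and when launched with `-mpi_linear_solver_server` only rank 0 executes it while the remaining ranks run the server started by `PCMPIServerBegin()` inside `PetscInitialize()`. The matrix size and entries are illustrative assumptions, and whether `PCMPI` is selected automatically or needs an explicit `-pc_type mpi` may depend on the PETSc version.

```c
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat      A;
  Vec      x, b;
  KSP      ksp;
  PetscInt i, n = 10; /* illustrative problem size */

  /* With -mpi_linear_solver_server, PetscInitialize() calls PCMPIServerBegin():
     ranks other than 0 wait inside the server, and only rank 0 continues here
     with PETSC_COMM_WORLD set to PETSC_COMM_SELF */
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Assemble a small tridiagonal system, entirely on rank 0 in server mode */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n));
  PetscCall(MatSetFromOptions(A));
  PetscCall(MatSetUp(A));
  for (i = 0; i < n; i++) {
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    if (i > 0) PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    if (i < n - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  /* The solve can be forwarded to the server ranks through PCMPI */
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize()); /* shuts the server down, see PCMPIServerEnd() */
  return 0;
}
```

A run such as `mpiexec -n 4 ./app -mpi_linear_solver_server -mpi_linear_solver_server_view` (the program name is illustrative) would execute the code above on rank 0 only, use the remaining ranks as the solver server, and print a summary of the linear systems solved by the server.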