Actual source code: ex4f90.F90
petsc-3.3-p7 2013-05-11
!
! Program usage: mpiexec -n 2 ex4f90 [-help] [all PETSc options]
!
! This introductory example illustrates running PETSc on a subset
! of processes.
!
!/*T
! Concepts: introduction to PETSc;
! Concepts: process^subset set PETSC_COMM_WORLD
! Processors: 2
!T*/
! -----------------------------------------------------------------------

program main
use petscsys
implicit none

integer ierr
integer rank, size

! We call MPI_Init() ourselves, before PetscInitialize(), so we,
! not PETSc, are responsible for shutting MPI down

call MPI_Init(ierr)

! We can now change the communicator that PETSc will use: split
! MPI_COMM_WORLD by rank parity and make the resulting
! sub-communicator PETSc's world communicator

call MPI_Comm_rank(MPI_COMM_WORLD,rank,ierr)
call MPI_Comm_split(MPI_COMM_WORLD,mod(rank,2),0, &
   & PETSC_COMM_WORLD,ierr)

! Every PETSc program must begin with a call to the
! PetscInitialize() routine.

call PetscInitialize(PETSC_NULL_CHARACTER,ierr)

! The following MPI calls return the number of processes in the
! sub-communicator PETSC_COMM_WORLD and the rank of this process
! within it.

call MPI_Comm_size(PETSC_COMM_WORLD,size,ierr)
call MPI_Comm_rank(PETSC_COMM_WORLD,rank,ierr)

! Here we print one message per group: each sub-communicator
! created by MPI_Comm_split() has its own rank 0.
if (rank .eq. 0) write(6,100) size,rank
100 format("No of Procs = ",i4," rank = ",i4)

! Always call PetscFinalize() before exiting a program. This
! routine finalizes the PETSc libraries (as well as MPI, when
! PETSc initialized MPI itself) and provides summary and
! diagnostic information if certain runtime options are chosen
! (e.g., -log_summary). See the PetscFinalize() manpage for more
! information.

call PetscFinalize(ierr)

! Since we initialized MPI, we must call MPI_Finalize()

call MPI_Finalize(ierr)
end program main
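The MPI_Comm_split() call above groups processes by the color mod(rank,2): even ranks land in one sub-communicator, odd ranks in the other, and each group gets its own PETSC_COMM_WORLD (with its own rank 0). A quick sketch of that color arithmetic alone, in plain shell with no MPI involved, shown for four hypothetical ranks:

```shell
# Color assignment used by the MPI_Comm_split() call: color = rank mod 2.
# Ranks with the same color end up in the same new communicator,
# so even ranks share one group and odd ranks share the other.
for rank in 0 1 2 3; do
  echo "rank $rank -> color $((rank % 2))"
done
```

With the 2-process launch suggested in the usage comment, rank 0 and rank 1 therefore each become rank 0 of a size-1 sub-communicator, and both print the message.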