Configuring PETSc FAQ

Important

Please obtain PETSc via the repository or download the latest patched tarball. See download documentation for more information.

See quick-start tutorial for a step-by-step walk-through of the installation process.

Common Example Usages

Attention

There are many example configure scripts at config/examples/*.py. These cover a wide variety of systems, and we use some of these scripts locally for testing. One can modify these files and run them in lieu of writing one yourself. For example:

> ./config/examples/arch-ci-osx-dbg.py

If there is a system for which we do not yet have such a configure script, or the script in the examples directory is outdated, we welcome your feedback: submit your recommendations to petsc-maint@mcs.anl.gov. See the bug report documentation for more information.

  • If you do not have a Fortran compiler or MPICH installed locally (and want to use PETSc from C only):

    > ./configure --with-cc=gcc --with-cxx=0 --with-fc=0 --download-f2cblaslapack --download-mpich
    
  • Same as above, but install in a user-specified (prefix) location:

    > ./configure --prefix=/home/user/soft/petsc-install --with-cc=gcc --with-cxx=0 --with-fc=0 --download-f2cblaslapack --download-mpich
    
  • If BLAS/LAPACK and MPI development files (the “-devel” packages in most Linux distributions) are already installed in default system/compiler locations, and mpicc, mpif90, and mpiexec are available via $PATH, configure does not require any additional options:

    > ./configure
    
  • If BLAS/LAPACK and MPI are already installed in a known user location, use:

    > ./configure --with-blaslapack-dir=/usr/local/blaslapack --with-mpi-dir=/usr/local/mpich
    

    or

    > ./configure --with-blaslapack-dir=/usr/local/blaslapack --with-cc=/usr/local/mpich/bin/mpicc --with-mpi-f90=/usr/local/mpich/bin/mpif90 --with-mpiexec=/usr/local/mpich/bin/mpiexec
    

Note

Do not specify --with-cc, --with-fc, etc. for the above when using --with-mpi-dir, so that mpicc/mpif90 can be picked up from the specified MPI location!

  • Build a complex-number version of PETSc (using the C++ compiler):

    > ./configure --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-clanguage=cxx --download-fblaslapack --download-mpich --with-scalar-type=complex
    
  • Install two variants of PETSc, one with GNU and the other with Intel compilers. Specify a different $PETSC_ARCH for each build. See the multiple PETSc install documentation for further recommendations:

    > ./configure PETSC_ARCH=linux-gnu --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-mpich
    > make PETSC_ARCH=linux-gnu all test
    > ./configure PETSC_ARCH=linux-gnu-intel --with-cc=icc --with-cxx=icpc --with-fc=ifort --download-mpich --with-blaslapack-dir=/usr/local/mkl
    > make PETSC_ARCH=linux-gnu-intel all test
    

Compilers

Important

If no compilers are specified, configure will automatically look for available MPI or regular compilers in the user’s $PATH, in the following order:

  1. mpicc/mpicxx/mpif90

  2. gcc/g++/gfortran

  3. cc/CC, etc.

  • Specify compilers using the options --with-cc/--with-cxx/--with-fc for the C, C++, and Fortran compilers, respectively:

    > ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran
    

Important

It’s best to use the MPI compiler wrappers: this avoids the situation where MPI is compiled with one set of compilers (say gcc/gfortran) while incompatible compilers (perhaps icc/ifort) are specified to PETSc. Do this by specifying either --with-cc=mpicc or --with-mpi-dir (and not --with-cc=gcc):

> ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90

or the following (but without --with-cc=gcc)

> ./configure --with-mpi-dir=/opt/mpich2-1.1

  • If a Fortran compiler is not available or not needed, disable it using:

    > ./configure --with-fc=0
    
  • If a C++ compiler is not available or not needed, disable it using:

    > ./configure --with-cxx=0
    

configure defaults to building PETSc in debug mode. One can switch to optimized mode with the configure option --with-debugging=0. We suggest using a different $PETSC_ARCH for debug and optimized builds, for example arch-debug and arch-opt; this way you can switch between debugging your code and running for performance simply by changing the value of $PETSC_ARCH. See the multiple install documentation for further details.
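
For example, a minimal sketch of maintaining the two builds side by side (the arch names arch-debug and arch-opt are arbitrary):

> ./configure PETSC_ARCH=arch-debug
> make PETSC_ARCH=arch-debug all
> ./configure PETSC_ARCH=arch-opt --with-debugging=0
> make PETSC_ARCH=arch-opt all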

Additionally, one can specify more suitable optimization flags with the options COPTFLAGS, FOPTFLAGS, and CXXOPTFLAGS. For example, when using GNU compilers with corresponding optimization flags:

> ./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --with-debugging=0 COPTFLAGS='-O3 -march=native -mtune=native' CXXOPTFLAGS='-O3 -march=native -mtune=native' FOPTFLAGS='-O3 -march=native -mtune=native' --download-mpich

Warning

configure cannot detect compiler libraries for certain sets of compilers. In this case one can specify additional system/compiler libraries using the LIBS option:

> ./configure --LIBS='-ldl /usr/lib/libm.a'

External Packages

Note

BLAS/LAPACK is the only required external package (other than, of course, build tools such as compilers and make). PETSc may be built and run without MPI support if running only in serial.

For any external packages used with PETSc, we highly recommend you have PETSc download and install the packages, rather than installing them separately first. This ensures that:

  • The packages are installed with the same compilers and compiler options as PETSc so that they can work together.

  • A compatible version of the package is installed. A generic install of this package might not be compatible with PETSc (perhaps due to version differences - or perhaps due to the requirement of additional patches for it to work with PETSc).

  • Some packages have bug fixes, portability patches, and upgrades for dependent packages that have not yet been included in an upstream release; a generic install of those upstream releases may therefore not play nicely with PETSc.

PETSc provides interfaces to various external packages. One can optionally use external solvers like HYPRE, MUMPS, and others from within PETSc applications.

PETSc configure has the ability to download and install these external packages. Alternatively if these packages are already installed, then configure can detect and use them.

If you are behind a firewall and cannot use a proxy for the downloads, or have a very slow network, use the additional option --with-packages-download-dir=/path/to/dir. This will trigger configure to print the URLs of all the packages you must download. You may then download the packages into that directory (do not uncompress or untar the files) and re-run the same configure command; it will use these local copies of the packages instead of trying to download them directly from the internet.
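
A sketch of this two-step workflow, assuming /home/user/tarballs as the download directory (a placeholder path):

> ./configure --download-mpich --with-packages-download-dir=/home/user/tarballs

configure stops and prints the URLs of the required tarballs. Download each one into that directory, for example:

> wget -P /home/user/tarballs https://url-printed-by-configure

and then re-run the same configure command; it will pick up the downloaded files instead of fetching them.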

The following modes can be used to download/install external packages with configure.

  • --download-PACKAGENAME: Download specified package and install it, enabling PETSc to use this package. This is the recommended method to couple any external packages with PETSc:

    > ./configure --download-fblaslapack --download-mpich
    
  • --download-PACKAGENAME=/path/to/PACKAGENAME.tar.gz: If configure cannot automatically download the package (due to network/firewall issues), one can download the package by alternative means (perhaps wget, curl, or scp via some other machine). Once the tarfile is downloaded, the path to this file can be specified to configure with this option. configure will proceed to install this package and then configure PETSc with it:

    > ./configure --download-mpich=/home/petsc/mpich2-1.0.4p1.tar.gz
    
  • --with-PACKAGENAME-dir=/path/to/dir: If the external package is already installed - specify its location to configure (it will attempt to detect and include relevant library files from this location). Normally this corresponds to the top-level installation directory for the package:

    > ./configure --with-mpi-dir=/home/petsc/software/mpich2-1.0.4p1
    
  • --with-PACKAGENAME-include=/path/to/include/dir and --with-PACKAGENAME-lib=LIBRARYLIST: Usually a package is defined completely by its include file location and library list. If the package is already installed one can use these two options to specify the package to configure. For example:

    > ./configure --with-superlu-include=/home/petsc/software/superlu/include --with-superlu-lib=/home/petsc/software/superlu/lib/libsuperlu.a
    

    or

    > ./configure --with-parmetis-include=/sandbox/balay/parmetis/include --with-parmetis-lib="-L/sandbox/balay/parmetis/lib -lparmetis -lmetis"
    

    or

    > ./configure --with-parmetis-include=/sandbox/balay/parmetis/include --with-parmetis-lib=[/sandbox/balay/parmetis/lib/libparmetis.a,libmetis.a]
    

Note

  • Run ./configure --help to get the list of external packages and corresponding additional options (for example --with-mpiexec for MPICH).

  • Generally one should use only one of the above installation modes for any given package, and not mix them (i.e. combining --with-mpi-dir and --with-mpi-include, etc. should be avoided).

  • Some packages might not support certain options like --download-PACKAGENAME or --with-PACKAGENAME-dir. Architectures like Microsoft Windows might have issues with these options. In these cases, --with-PACKAGENAME-include and --with-PACKAGENAME-lib options should be preferred.

  • If you want to download a compatible external package manually, the URL for the package is listed in the configure source for that package. For example, check config/BuildSystem/config/packages/SuperLU.py for the URL used to download SuperLU (a sample command is shown after this note).

  • --with-packages-build-dir=PATH: By default, external packages will be unpacked and the build process is run in $PETSC_DIR/$PETSC_ARCH/externalpackages. However one can choose a different location where these packages are unpacked and the build process is run.
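
For example, one way to see the download URL configure would use for SuperLU (a minimal sketch; any file under config/BuildSystem/config/packages/ can be inspected the same way):

> grep download config/BuildSystem/config/packages/SuperLU.py

The lines setting self.download list the URLs that configure uses.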

BLAS/LAPACK

These packages provide some basic numeric kernels used by PETSc. configure will automatically look for BLAS/LAPACK in certain standard locations; on most systems you should not need to provide any information about BLAS/LAPACK in the configure command.

One can use the following options to let configure download/install BLAS/LAPACK automatically:

  • When a Fortran compiler is present:

    > ./configure --download-fblaslapack
    
  • Or when configuring without a Fortran compiler, i.e. --with-fc=0:

    > ./configure --download-f2cblaslapack
    

Alternatively one can use other options like one of the following:

> ./configure --with-blaslapack-lib=libsunperf.a
> ./configure --with-blas-lib=libblas.a --with-lapack-lib=liblapack.a
> ./configure --with-blaslapack-dir=/soft/com/packages/intel/13/079/mkl

Intel MKL

Intel provides BLAS/LAPACK via the MKL library. One can specify it to PETSc configure with --with-blaslapack-dir=$MKLROOT or --with-blaslapack-dir=/soft/com/packages/intel/13/079/mkl. If this option does not work, one can determine the correct library list for your compilers using the Intel MKL Link Line Advisor and specify it with the configure option --with-blaslapack-lib.
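
For example, a sequential LP64 link line on 64-bit Linux might be specified as follows; this is only a sketch, and the exact library list depends on your MKL version, compilers, and threading choice, so use the Link Line Advisor output for your system:

> ./configure --with-blaslapack-lib="-L$MKLROOT/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm"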

IBM ESSL

Sadly, IBM’s ESSL does not have all the routines of BLAS/LAPACK that some packages, such as SuperLU, expect; in particular slamch, dlamch, and xerbla. In this case, instead of using ESSL we suggest --download-fblaslapack. If you really want to use ESSL, see https://www.pdc.kth.se/hpc-services.

MPI Problems/I Don’t Want MPI

The Message Passing Interface (MPI) provides the parallel functionality for PETSc.

configure will automatically look for the MPI compilers mpicc/mpif90, etc. and use them if found in your $PATH. One can use the following options to let configure download/install MPI automatically:

  • For MPICH:

    > ./configure --download-mpich
    
  • For OpenMPI:

    > ./configure --download-openmpi
    

Using MPI Compilers

It’s best to install PETSc with MPI compiler wrappers (often called mpicc, mpicxx, mpif90) - this way, the SAME compilers used to build MPI are used to build PETSc. See the section on compilers above for more details.

  • Vendor-provided MPI might already be installed. IBM, SGI, Cray, etc. provide their own:

    > ./configure --with-cc=vendor_mpicc --with-fc=vendor_mpif90
    
  • If using an MPICH that is already installed (perhaps built with myrinet/gm support), then use the following (without specifying --with-cc=gcc, etc., so that configure picks up mpicc from the MPI install location):

    >  ./configure --with-mpi-dir=/absolute/path/to/mpich/install
    

Installing Without MPI

You can build (sequential) PETSc without MPI. This is useful for quickly installing PETSc:

> ./configure --with-mpi=0

However, if there is any MPI code in the user application, it is best to install a full MPI implementation, even if the usage is currently limited to uniprocessor mode. For example:
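
A minimal sketch: let configure install MPICH as shown earlier, then launch on a single process (your_app is a placeholder for your application binary):

> ./configure --download-mpich
> make
> mpiexec -n 1 ./your_app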

Installing With Open MPI With Shared MPI Libraries

OpenMPI defaults to building shared libraries for MPI. However, the binaries generated by MPI wrappers mpicc/mpif90 etc. require $LD_LIBRARY_PATH to be set to the location of these libraries.

Due to this OpenMPI restriction, one has to set $LD_LIBRARY_PATH correctly (per the OpenMPI installation instructions) before running PETSc configure. If you do not set this environment variable, you will get messages when running configure such as:

UNABLE to EXECUTE BINARIES for config/configure.py
-------------------------------------------------------------------------------
Cannot run executables created with C. If this machine uses a batch system
to submit jobs you will need to configure using ./configure with the additional option --with-batch.
Otherwise there is problem with the compilers. Can you compile and run code with your C/C++ (and maybe Fortran) compilers?

or when running a code compiled with OpenMPI:

error while loading shared libraries: libmpi.so.0: cannot open shared object file: No such file or directory
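
A minimal sketch of the workaround, assuming OpenMPI is installed under /opt/openmpi (a placeholder path):

> export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
> ./configure --with-mpi-dir=/opt/openmpi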

Installation Location: In-place or Out-of-place

By default, PETSc does an in-place installation, meaning the libraries are kept in the same directories used to compile PETSc. This is particularly useful for those application developers who follow the PETSc git repository main or release branches since rebuilds for updates are very quick and painless.

Note

The libraries and include files are located in $PETSC_DIR/$PETSC_ARCH/lib and $PETSC_DIR/$PETSC_ARCH/include

Out-of-place Installation With --prefix

To install the libraries and include files in another location, use the --prefix option:

> ./configure --prefix=/home/userid/my-petsc-install --some-other-options

The libraries and include files will be located in /home/userid/my-petsc-install/lib and /home/userid/my-petsc-install/include.

Installs For Package Managers: Using DESTDIR (Very uncommon)

> ./configure --prefix=/opt/petsc/my-root-petsc-install
> make
> make install DESTDIR=/tmp/petsc-pkg

Package up /tmp/petsc-pkg. The package should then be installed at /opt/petsc/my-root-petsc-install.
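
For example, one way to package and deploy the staged tree (a sketch using plain tar; a real package-manager recipe would differ):

> tar -czf petsc-pkg.tar.gz -C /tmp/petsc-pkg .
> # on the target machine, extract so the files land in /opt/petsc/my-root-petsc-install
> tar -xzf petsc-pkg.tar.gz -C /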

Multiple Installs Using --prefix (See DESTDIR)

Specify a different --prefix location, at configure time, for each build with different configure options. For example:

> ./configure --prefix=/opt/petsc/petsc-3.15.0-mpich --with-mpi-dir=/opt/mpich
> make
> make install [DESTDIR=/tmp/petsc-pkg]
> ./configure --prefix=/opt/petsc/petsc-3.15.0-openmpi --with-mpi-dir=/opt/openmpi
> make
> make install [DESTDIR=/tmp/petsc-pkg]

In-place Installation

The PETSc libraries and generated include files are placed in the subdirectory $PETSC_ARCH of the current directory, where $PETSC_ARCH is either provided by the user, for example:

> export PETSC_ARCH=arch-debug
> ./configure
> make
> export PETSC_ARCH=arch-opt
> ./configure --some-optimization-options
> make

or

> ./configure PETSC_ARCH=arch-debug
> make
> ./configure --some-optimization-options PETSC_ARCH=arch-opt
> make

If not provided, configure will generate a unique value automatically (for in-place, non --prefix configurations only):

> ./configure
> make
> ./configure --with-debugging=0
> make

This produces the directories (on an Apple macOS machine) $PETSC_DIR/arch-darwin-c-debug and $PETSC_DIR/arch-darwin-c-opt.

Installing On Machine Requiring Cross Compiler Or A Job Scheduler

On systems where you need to use a job scheduler or batch submission to run jobs, use the configure option --with-batch. On such systems, make check will not work. A minimal example is given after the notes below.

  • You must first ensure you have loaded appropriate modules for the compilers, etc. that you wish to use. Often the compilers are provided automatically for you, and you do not need to provide --with-cc=XXX, etc. Consult the documentation and local support for such systems for information on these topics.

  • On such systems you generally should not use --with-blaslapack-dir or --download-fblaslapack since the systems provide those automatically (sometimes appropriate modules must be loaded first).

  • Some packages’ --download-PACKAGENAME options do not work on these systems, for example HDF5. Thus you must use modules to load those packages and the corresponding --with-PACKAGENAME options to configure PETSc with them.

  • Since building external packages on these systems is often troublesome and slow, we recommend installing PETSc with only those external packages that you need for your work, not extras.
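
A minimal sketch of such a configure run (add whatever other options your build needs; on many such systems the compilers come from modules and need not be specified):

> ./configure --with-batch --with-debugging=0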

Installing With TAU Instrumentation Package

The TAU package and the prerequisite PDT package need to be installed separately (perhaps with MPI). Then use tau_cc.sh as the compiler for PETSc configure:

> export TAU_MAKEFILE=/home/balay/soft/linux64/tau-2.20.3/x86_64/lib/Makefile.tau-mpi-pdt
> ./configure CC=/home/balay/soft/linux64/tau-2.20.3/x86_64/bin/tau_cc.sh --with-fc=0 PETSC_ARCH=arch-tau

Installing PETSc To Use GPUs And Accelerators

PETSc is able to take advantage of GPUs and certain accelerator libraries; however, some require additional configure options.

CUDA

Important

An NVIDIA GPU is required to use CUDA-accelerated code. Check that your machine has a CUDA enabled GPU by consulting https://developer.nvidia.com/cuda-gpus.

On Linux, make sure you have a compatible NVIDIA driver installed.

On Windows, use either Cygwin or WSL, the latter of which is entirely untested right now. If you have experience with WSL and/or have successfully built PETSc on Windows for use with CUDA, we welcome your input at petsc-maint@mcs.anl.gov. See the bug-reporting documentation for more details.

In most cases you need only pass the configure option --with-cuda; check config/examples/arch-ci-linux-cuda-double.py for example usage.
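
For example (a minimal sketch; --with-cuda-dir is only needed when the CUDA toolkit is not in a default location, and the path shown is a placeholder):

> ./configure --with-cuda --with-cuda-dir=/usr/local/cuda --download-mpich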

The CUDA build of PETSc currently works on Mac OS X, Linux, and Microsoft Windows (with Cygwin).

Examples that use CUDA have the suffix .cu; see $PETSC_DIR/src/snes/tutorials/ex47.cu

Kokkos

In most cases you need only pass the configure option --download-kokkos and one of --with-cuda, --with-openmp, or --with-pthread (or nothing to use sequential Kokkos). See the CUDA installation documentation and the OpenMP installation documentation for further reference on their respective requirements.
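
For example, a CUDA-backed Kokkos build might be configured as follows (a sketch; --download-kokkos-kernels is also commonly added, as in the thetagpu notes later in this document):

> ./configure --download-kokkos --download-kokkos-kernels --with-cuda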

Examples that use Kokkos have the suffix .kokkos.cxx; see src/snes/tutorials/ex3k.kokkos.cxx

OpenCL/ViennaCL

Requires the OpenCL shared library, which is shipped with the vendor graphics driver, and the OpenCL headers; if needed, you can download the headers from the Khronos Group directly. Package managers on Linux provide these headers via a package named ‘opencl-headers’ or similar. On Apple systems the OpenCL drivers and headers are always available and do not need to be downloaded.

Always make sure you have the latest GPU driver installed. There are several known issues with older driver versions.

Run configure with --download-viennacl; check config/examples/arch-ci-linux-viennacl.py for example usage.

OpenCL/ViennaCL builds of PETSc currently work on Mac OS X, Linux, and Microsoft Windows.

Installing On Large Scale DOE Systems

NERSC - CORI machine

  • Project ID: m3353

  • PI: Richard Mills

  • Notes on usage:

ALCF - Argonne National Laboratory - theta machine - Intel KNL based system

  • Project ID:

  • PI:

  • Notes on usage:

    • Log into theta.alcf.anl.gov (Use crypto card or MobilePass app for the password)

    • There are three compiler suites; the corresponding modules are:

      • module load PrgEnv-intel intel

      • module load PrgEnv-gnu gcc/7.1.0/

      • module load PrgEnv-cray

    • List currently loaded modules: module list

    • List all available modules: module avail

    • BLAS/LAPACK will automatically be found so you do not need to provide it

      • It is best not to use built-in modules for external packages (except BLAS/LAPACK) because they are often buggy. Most external packages can be built using the --download-PACKAGENAME option with the Intel or GNU environment, but not with the Cray environment.

      • You can use config/examples/arch-cray-xc40-knl-opt.py as a template for running configure, but note that it is outdated.

      • When using the Intel module you may need to use --download-sowing-cc=icc, --download-sowing-cxx=icpc, --download-sowing-cpp="icc -E", --download-sowing-cxxpp="icpc -E", since the GNU compilers may not work as they access Intel files.

      • To get an interactive node use qsub -A CSC250STMS07 -n 1 -t 60 -q debug-flat-quad -I

      • To run on an interactive node using two MPI ranks, use aprun -n 2 ./program options

ALCF - Argonne National Laboratory - thetagpu machine - AMD CPUs with NVIDIA GPUs

Notes on usage:

  • Log into theta.alcf.anl.gov

  • The GPU front-end and compute nodes do not support git via ssh, so it is best to run git clone/fetch, etc. (in the PETSc clone) on theta.alcf.anl.gov

  • ssh thetagpusn1 (this is the GPU front end)

  • export http_proxy=http://proxy.tmi.alcf.anl.gov:3128

  • export https_proxy=http://proxy.tmi.alcf.anl.gov:3128

  • module load nvhpc (Do not module load any MPI)

  • module load libtool-2.4.6-gcc-7.5.0-jdxbjft cmake-3.20.2-gcc-10.2.0-wmku2nn

  • ./configure --with-mpi-dir=$CUDA_DIR/../comm_libs/mpi/ --with-cuda-dir=$CUDA_DIR/11.0 --download-f2cblaslapack=1

  • To install Kokkos (with --download-kokkos --download-kokkos-kernels), set CUDA_ROOT before running configure, i.e. export CUDA_ROOT=$CUDA_DIR/11.0

  • Log into interactive compute nodes with qsub -I -t TimeInMinutes -n 1 -A AProjectName (for example, gpu_hack) (-q single-gpu will give you access to one GPU, and is often much quicker; otherwise you get access to all eight GPUs on a node)

  • Run executables with $CUDA_DIR/../comm_libs/mpi/bin/mpirun

  • It’s also possible to build PETSc on the compute nodes. For this, one can use qsub --attrs=pubnet to obtain a compute node with network access enabled (for the build), as an alternative to setting up http_proxy/https_proxy

OLCF - Oak Ridge National Laboratory - Summit machine - NVIDIA GPUs and IBM Power PC processors

  • Project ID: CSC314

  • PI: Barry Smith

  • Apply at: https://docs.olcf.ornl.gov/accounts/accounts_and_projects.html#applying-for-a-user-account

  • Notes on usage:

    • Getting Started

    • Log into summit.olcf.ornl.gov

      > module load cmake hdf5 cuda
      > module load pgi
      > module load essl netlib-lapack xl
      > module load gcc
      
    • Use config/examples/arch-olcf-opt.py as a template for running configure

    • You configure PETSc and build examples in your home directory, but launch them from your “work” directory.

    • Use the bsub command to submit jobs to the queue. See the “Batch Scripts” section of the running jobs documentation.

    • Tools for profiling: -log_view, which adds GPU communication and computation to the summary table, and nvprof and nvvp from the CUDA toolkit.

Installing PETSc on an iOS or Android platform

For iOS see $PETSC_DIR/systems/Apple/iOS/bin/makeall. A thorough discussion of the installation procedure is given here.

For Android, you must have your standalone toolchain’s bin folder in your $PATH so that the compilers are visible.
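
For example (a sketch with a placeholder path to the standalone toolchain):

> export PATH=/path/to/android-standalone-toolchain/bin:$PATH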

Check config/examples/arch-arm64-opt.py (iOS) and config/examples/arch-armv7-opt.py (Android) for example usage.