.. _ch_ksp:

KSP: Linear System Solvers
--------------------------

The ``KSP`` object is the heart of PETSc, because it provides uniform and
efficient access to all of the package’s linear system solvers, including
parallel and sequential, direct and iterative. ``KSP`` is intended for
solving systems of the form

.. math::
   :label: eq_axeqb

   A x = b,

where :math:`A` denotes the matrix representation of a linear operator,
:math:`b` is the right-hand-side vector, and :math:`x` is the solution
vector. ``KSP`` uses the same calling sequence for both direct and
iterative solution of a linear system. In addition, particular solution
techniques and their associated options can be selected at runtime.

The combination of a Krylov subspace method and a preconditioner is at the
center of most modern numerical codes for the iterative solution of linear
systems. Many textbooks (e.g. :cite:`fgn`, :cite:`VanDerVorst2003`, or
:cite:`saad2003`) provide an overview of the theory of such methods. The
``KSP`` package, discussed in :any:`sec_ksp`, provides many popular Krylov
subspace iterative methods; the ``PC`` module, described in
:any:`sec_pc`, includes a variety of preconditioners.

.. _sec_usingksp:

Using KSP
~~~~~~~~~

To solve a linear system with ``KSP``, one must first create a solver
context with the command

.. code-block::

   KSPCreate(MPI_Comm comm,KSP *ksp);

Here ``comm`` is the MPI communicator and ``ksp`` is the newly formed
solver context. Before actually solving a linear system with ``KSP``, the
user must call the following routine to set the matrices associated with
the linear system:

.. code-block::

   KSPSetOperators(KSP ksp,Mat Amat,Mat Pmat);

The argument ``Amat``, representing the matrix that defines the linear
system, is a symbolic placeholder for any kind of matrix or operator. In
particular, ``KSP`` *does* support matrix-free methods. The routine
``MatCreateShell()`` in :any:`sec_matrixfree` provides further information
regarding matrix-free methods. Typically, the matrix from which the
preconditioner is to be constructed, ``Pmat``, is the same as the matrix
that defines the linear system, ``Amat``; however, occasionally these
matrices differ (for instance, when a preconditioning matrix is obtained
from a lower order method than that employed to form the linear system
matrix).

Much of the power of ``KSP`` can be accessed through the single routine

.. code-block::

   KSPSetFromOptions(KSP ksp);

This routine accepts the option ``-help`` as well as any of the ``KSP``
and ``PC`` options discussed below. To solve a linear system, one sets the
right-hand side and solution vectors using the command

.. code-block::

   KSPSolve(KSP ksp,Vec b,Vec x);

where ``b`` and ``x`` respectively denote the right-hand-side and solution
vectors. On return, the iteration number at which the iterative process
stopped can be obtained using

.. code-block::

   KSPGetIterationNumber(KSP ksp,PetscInt *its);

Note that this does not state that the method converged at this iteration:
it can also have reached the maximum number of iterations, or have
diverged. :any:`sec_convergencetests` gives more details regarding
convergence testing. Note that multiple linear solves can be performed by
the same ``KSP`` context. Once the ``KSP`` context is no longer needed, it
should be destroyed with the command

.. code-block::

   KSPDestroy(KSP *ksp);

The above procedure is sufficient for general use of the ``KSP`` package.
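For orientation, the calls above can be combined as follows. This is a
minimal sketch, not a complete program: it assumes the matrix ``A`` and
the vectors ``b`` and ``x`` have already been created and assembled, and
it uses the conventional ``PetscCall()`` error checking.

.. code-block::

   /* Sketch: solve A x = b with solver options selectable at runtime. */
   KSP      ksp;
   PetscInt its;

   PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
   PetscCall(KSPSetOperators(ksp, A, A)); /* same matrix defines the system and the preconditioner */
   PetscCall(KSPSetFromOptions(ksp));     /* pick up -ksp_* and -pc_* options from the database */
   PetscCall(KSPSolve(ksp, b, x));
   PetscCall(KSPGetIterationNumber(ksp, &its));
   PetscCall(KSPDestroy(&ksp));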
One additional step is required for users who wish to customize certain
preconditioners (e.g., see :any:`sec_bjacobi`) or to log certain
performance data using the PETSc profiling facilities (as discussed in
:any:`ch_profiling`). In this case, the user can optionally explicitly
call

.. code-block::

   KSPSetUp(KSP ksp);

before calling ``KSPSolve()`` to perform any setup required for the linear
solvers. The explicit call of this routine enables the separate monitoring
of any computations performed during the setup phase, such as incomplete
factorization for the ILU preconditioner.

The default solver within ``KSP`` is restarted GMRES, preconditioned for
the uniprocess case with ILU(0), and for the multiprocess case with the
block Jacobi method (with one block per process, each of which is solved
with ILU(0)). A variety of other solvers and options are also available.
To allow application programmers to set any of the preconditioner or
Krylov subspace options directly within the code, we provide routines that
extract the ``PC`` and ``KSP`` contexts,

.. code-block::

   KSPGetPC(KSP ksp,PC *pc);

The application programmer can then directly call any of the ``PC`` or
``KSP`` routines to modify the corresponding default options.

To solve a linear system with a direct solver (currently supported by
PETSc for sequential matrices, and by several external solvers through
PETSc interfaces, see :any:`sec_externalsol`) one may use the options
``-ksp_type preonly`` (or the equivalent ``-ksp_type none``) and
``-pc_type lu`` (see below).

By default, if a direct solver is used, the factorization is *not* done
in-place. This approach spares the user the unexpected surprise of a
corrupted matrix after a linear solve. The routine
``PCFactorSetUseInPlace()``, discussed below, causes factorization to be
done in-place.

Solving Successive Linear Systems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When solving multiple linear systems of the same size with the same
method, several options are available. To solve successive linear systems
having the *same* preconditioner matrix (i.e., the same data structure
with exactly the same matrix elements) but different right-hand-side
vectors, the user should simply call ``KSPSolve()`` multiple times. The
preconditioner setup operations (e.g., factorization for ILU) will be done
during the first call to ``KSPSolve()`` only; such operations will *not*
be repeated for successive solves.

To solve successive linear systems that have *different* preconditioner
matrices (i.e., the matrix elements and/or the matrix data structure
change), the user *must* call ``KSPSetOperators()`` and ``KSPSolve()`` for
each solve.

.. _sec_ksp:

Krylov Methods
~~~~~~~~~~~~~~

The Krylov subspace methods accept a number of options, many of which are
discussed below. First, to set the Krylov subspace method that is to be
used, one calls the command

.. code-block::

   KSPSetType(KSP ksp,KSPType method);

The type can be one of ``KSPRICHARDSON``, ``KSPCHEBYSHEV``, ``KSPCG``,
``KSPGMRES``, ``KSPTCQMR``, ``KSPBCGS``, ``KSPCGS``, ``KSPTFQMR``,
``KSPCR``, ``KSPLSQR``, ``KSPBICG``, ``KSPPREONLY`` (or the equivalent
``KSPNONE``), or others; see :any:`tab-kspdefaults` or the ``KSPType`` man
page for more.
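For instance, combining ``KSPGetPC()`` with ``KSPSetType()`` and
``PCSetType()``, the direct solve mentioned above can be selected entirely
in code. A short sketch, assuming ``ksp`` has been created as described
earlier:

.. code-block::

   /* Sketch: select a direct (LU) solve programmatically. */
   PC pc;

   PetscCall(KSPGetPC(ksp, &pc));
   PetscCall(KSPSetType(ksp, KSPPREONLY)); /* apply the "preconditioner" exactly once, no Krylov iteration */
   PetscCall(PCSetType(pc, PCLU));         /* the preconditioner is a full LU factorization */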
The ``KSP`` method can also be set with the options database command
``-ksp_type``, followed by one of the options ``richardson``,
``chebyshev``, ``cg``, ``gmres``, ``tcqmr``, ``bcgs``, ``cgs``, ``tfqmr``,
``cr``, ``lsqr``, ``bicg``, ``preonly`` (or the equivalent ``none``), or
others (see :any:`tab-kspdefaults` or the ``KSPType`` man page).

There are method-specific options. For instance, for the Richardson,
Chebyshev, and GMRES methods:

.. code-block::

   KSPRichardsonSetScale(KSP ksp,PetscReal scale);
   KSPChebyshevSetEigenvalues(KSP ksp,PetscReal emax,PetscReal emin);
   KSPGMRESSetRestart(KSP ksp,PetscInt max_steps);

The default parameter values are ``scale=1.0``, ``emax=0.01``,
``emin=100.0``, and ``max_steps=30``. The GMRES restart and Richardson
damping factor can also be set with the options ``-ksp_gmres_restart
<restart>`` and ``-ksp_richardson_scale <factor>``.

The default technique for orthogonalization of the Krylov vectors in GMRES
is the unmodified (classical) Gram-Schmidt method, which can be set with

.. code-block::

   KSPGMRESSetOrthogonalization(KSP ksp,KSPGMRESClassicalGramSchmidtOrthogonalization);

or the options database command ``-ksp_gmres_classicalgramschmidt``. By
default this will *not* use iterative refinement to improve the stability
of the orthogonalization. This can be changed with the option

.. code-block::

   KSPGMRESSetCGSRefinementType(KSP ksp,KSPGMRESCGSRefinementType type)

or via the options database with

::

   -ksp_gmres_cgs_refinement_type

The values for ``KSPGMRESCGSRefinementType`` are
``KSP_GMRES_CGS_REFINE_NEVER``, ``KSP_GMRES_CGS_REFINE_IFNEEDED``, and
``KSP_GMRES_CGS_REFINE_ALWAYS``.

One can also use modified Gram-Schmidt, by using the orthogonalization
routine ``KSPGMRESModifiedGramSchmidtOrthogonalization()`` or by using the
command line option ``-ksp_gmres_modifiedgramschmidt``.

For the conjugate gradient method with complex numbers, there are two
slightly different algorithms depending on whether the matrix is Hermitian
symmetric or truly symmetric (the default is to assume that it is
Hermitian symmetric). To indicate that it is symmetric, one uses the
command

.. code-block::

   KSPCGSetType(ksp,KSP_CG_SYMMETRIC);

Note that this option is not valid for all matrices.

Some ``KSP`` types do not support preconditioning. For instance, the CGLS
algorithm does not involve a preconditioner; any preconditioner set to
work with the ``KSP`` object is ignored if ``KSPCGLS`` was selected.

By default, ``KSP`` assumes an initial guess of zero by zeroing the
initial value for the solution vector that is given; this zeroing is done
at the call to ``KSPSolve()``. To use a nonzero initial guess, the user
*must* call

.. code-block::

   KSPSetInitialGuessNonzero(KSP ksp,PetscBool flg);
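As an illustration, several of the calls above can be combined to tune
GMRES from code rather than the options database. A small sketch, assuming
``ksp`` has already been created and its operators set:

.. code-block::

   /* Sketch: configure GMRES programmatically. */
   PetscCall(KSPSetType(ksp, KSPGMRES));
   PetscCall(KSPGMRESSetRestart(ksp, 50)); /* restart length 50 instead of the default 30 */
   PetscCall(KSPGMRESSetOrthogonalization(ksp, KSPGMRESModifiedGramSchmidtOrthogonalization));
   PetscCall(KSPSetInitialGuessNonzero(ksp, PETSC_TRUE)); /* x passed to KSPSolve() holds a guess */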
.. _sec_ksppc:

Preconditioning within KSP
^^^^^^^^^^^^^^^^^^^^^^^^^^

Since the rate of convergence of Krylov projection methods for a
particular linear system is strongly dependent on its spectrum,
preconditioning is typically used to alter the spectrum and hence
accelerate the convergence rate of iterative techniques. Preconditioning
can be applied to the system :eq:`eq_axeqb` by

.. math::
   :label: eq_prec

   (M_L^{-1} A M_R^{-1}) \, (M_R x) = M_L^{-1} b,

where :math:`M_L` and :math:`M_R` indicate preconditioning matrices (or,
matrices from which the preconditioner is to be constructed). If
:math:`M_L = I` in :eq:`eq_prec`, right preconditioning results, and the
residual of :eq:`eq_axeqb`,

.. math::

   r \equiv b - Ax = b - A M_R^{-1} \, M_R x,

is preserved. In contrast, the residual is altered for left
(:math:`M_R = I`) and symmetric preconditioning, as given by

.. math::

   r_L \equiv M_L^{-1} b - M_L^{-1} A x = M_L^{-1} r.

By default, most KSP implementations use left preconditioning. Some more
naturally use other options, though. For instance, ``KSPQCG`` defaults to
use symmetric preconditioning and ``KSPFGMRES`` uses right preconditioning
by default. Right preconditioning can be activated for some methods by
using the options database command ``-ksp_pc_side right`` or calling the
routine

.. code-block::

   KSPSetPCSide(ksp,PC_RIGHT);

Attempting to use right preconditioning for a method that does not
currently support it results in an error message of the form

.. code-block:: none

   KSPSetUp_Richardson:No right preconditioning for KSPRICHARDSON

We summarize the defaults for the residuals used in KSP convergence
monitoring within :any:`tab-kspdefaults`. Details regarding specific
convergence tests and monitoring routines are presented in the following
sections. The preconditioned residual is used by default for convergence
testing of all left-preconditioned ``KSP`` methods. For the conjugate
gradient, Richardson, and Chebyshev methods the true residual can be used
by the options database command ``-ksp_norm_type unpreconditioned`` or by
calling the routine

.. code-block::

   KSPSetNormType(ksp,KSP_NORM_UNPRECONDITIONED);

.. list-table:: KSP Objects
   :name: tab-kspdefaults
   :header-rows: 1

   * - Method
     - KSPType
     - Options Database Name
   * - Richardson
     - ``KSPRICHARDSON``
     - ``richardson``
   * - Chebyshev
     - ``KSPCHEBYSHEV``
     - ``chebyshev``
   * - Conjugate Gradient :cite:`hs:52`
     - ``KSPCG``
     - ``cg``
   * - Pipelined Conjugate Gradients :cite:`GhyselsVanroose2014`
     - ``KSPPIPECG``
     - ``pipecg``
   * - Pipelined Conjugate Gradients (Gropp)
     - ``KSPGROPPCG``
     - ``groppcg``
   * - Pipelined Conjugate Gradients with Residual Replacement
     - ``KSPPIPECGRR``
     - ``pipecgrr``
   * - Conjugate Gradients for the Normal Equations
     - ``KSPCGNE``
     - ``cgne``
   * - Flexible Conjugate Gradients :cite:`flexibleCG`
     - ``KSPFCG``
     - ``fcg``
   * - Pipelined, Flexible Conjugate Gradients :cite:`SananSchneppMay2016`
     - ``KSPPIPEFCG``
     - ``pipefcg``
   * - Conjugate Gradients for Least Squares
     - ``KSPCGLS``
     - ``cgls``
   * - Conjugate Gradients with Constraint (1)
     - ``KSPNASH``
     - ``nash``
   * - Conjugate Gradients with Constraint (2)
     - ``KSPSTCG``
     - ``stcg``
   * - Conjugate Gradients with Constraint (3)
     - ``KSPGLTR``
     - ``gltr``
   * - Conjugate Gradients with Constraint (4)
     - ``KSPQCG``
     - ``qcg``
   * - BiConjugate Gradient
     - ``KSPBICG``
     - ``bicg``
   * - BiCGSTAB :cite:`v:92`
     - ``KSPBCGS``
     - ``bcgs``
   * - Improved BiCGSTAB
     - ``KSPIBCGS``
     - ``ibcgs``
   * - QMRCGSTAB :cite:`chan1994qmrcgs`
     - ``KSPQMRCGS``
     - ``qmrcgs``
   * - Flexible BiCGSTAB
     - ``KSPFBCGS``
     - ``fbcgs``
   * - Flexible BiCGSTAB (variant)
     - ``KSPFBCGSR``
     - ``fbcgsr``
   * - Enhanced BiCGSTAB(L)
     - ``KSPBCGSL``
     - ``bcgsl``
   * - Minimal Residual Method :cite:`PaigeSaunders1975`
     - ``KSPMINRES``
     - ``minres``
   * - Generalized Minimal Residual :cite:`ss:86`
     - ``KSPGMRES``
     - ``gmres``
   * - Flexible Generalized Minimal Residual :cite:`Saad1993`
     - ``KSPFGMRES``
     - ``fgmres``
   * - Deflated Generalized Minimal Residual
     - ``KSPDGMRES``
     - ``dgmres``
   * - Pipelined Generalized Minimal Residual :cite:`GhyselsAshbyMeerbergenVanroose2013`
     - ``KSPPGMRES``
     - ``pgmres``
   * - Pipelined, Flexible Generalized Minimal Residual :cite:`SananSchneppMay2016`
     - ``KSPPIPEFGMRES``
     - ``pipefgmres``
   * - Generalized Minimal Residual with Accelerated Restart
     - ``KSPLGMRES``
     - ``lgmres``
   * - Conjugate Residual :cite:`eisenstat1983variational`
     - ``KSPCR``
     - ``cr``
   * - Generalized Conjugate Residual
     - ``KSPGCR``
     - ``gcr``
   * - Pipelined Conjugate Residual
     - ``KSPPIPECR``
     - ``pipecr``
   * - Pipelined, Flexible Conjugate Residual :cite:`SananSchneppMay2016`
     - ``KSPPIPEGCR``
     - ``pipegcr``
   * - FETI-DP
     - ``KSPFETIDP``
     - ``fetidp``
   * - Conjugate Gradient Squared :cite:`so:89`
     - ``KSPCGS``
     - ``cgs``
   * - Transpose-Free Quasi-Minimal Residual (1) :cite:`f:93`
     - ``KSPTFQMR``
     - ``tfqmr``
   * - Transpose-Free Quasi-Minimal Residual (2)
     - ``KSPTCQMR``
     - ``tcqmr``
   * - Least Squares Method
     - ``KSPLSQR``
     - ``lsqr``
   * - Symmetric LQ Method :cite:`PaigeSaunders1975`
     - ``KSPSYMMLQ``
     - ``symmlq``
   * - TSIRM
     - ``KSPTSIRM``
     - ``tsirm``
   * - Python Shell
     - ``KSPPYTHON``
     - ``python``
   * - Shell for no ``KSP`` method
     - ``KSPPREONLY`` (or ``KSPNONE``)
     - ``preonly`` (or ``none``)

Note: the bi-conjugate gradient method requires application of both the
matrix and its transpose plus the preconditioner and its transpose.
Currently not all matrices and preconditioners provide this support and
thus ``KSPBICG`` cannot always be used.

Note: PETSc implements the FETI-DP (Finite Element Tearing and
Interconnecting Dual-Primal) method as an implementation of ``KSP`` since
it recasts the original problem into a constrained minimization one with
Lagrange multipliers. The only matrix type supported is ``MATIS``. Support
for saddle point problems is provided. See the man page for ``KSPFETIDP``
for further details.

.. _sec_convergencetests:

Convergence Tests
^^^^^^^^^^^^^^^^^

The default convergence test, ``KSPConvergedDefault()``, is based on the
:math:`l_2`-norm of the residual. Convergence (or divergence) is decided
by three quantities: the decrease of the residual norm relative to the
norm of the right-hand side, ``rtol``, the absolute size of the residual
norm, ``atol``, and the relative increase in the residual, ``dtol``.
Convergence is detected at iteration :math:`k` if

.. math::

   \| r_k \|_2 < \max ( \text{rtol} \cdot \| b \|_2, \text{atol}),

where :math:`r_k = b - A x_k`. Divergence is detected if

.. math::

   \| r_k \|_2 > \text{dtol} \cdot \| b \|_2.

These parameters, as well as the maximum number of allowable iterations,
can be set with the routine

.. code-block::

   KSPSetTolerances(KSP ksp,PetscReal rtol,PetscReal atol,PetscReal dtol,PetscInt maxits);

The user can retain the default value of any of these parameters by
specifying ``PETSC_DEFAULT`` as the corresponding tolerance; the defaults
are ``rtol=1e-5``, ``atol=1e-50``, ``dtol=1e5``, and ``maxits=1e4``. These
parameters can also be set from the options database with the commands
``-ksp_rtol <rtol>``, ``-ksp_atol <atol>``, ``-ksp_divtol <dtol>``, and
``-ksp_max_it <its>``.

In addition to providing an interface to a simple convergence test,
``KSP`` allows the application programmer the flexibility to provide
customized convergence-testing routines. The user can specify a customized
routine with the command

.. code-block::

   KSPSetConvergenceTest(KSP ksp,PetscErrorCode (*test)(KSP ksp,PetscInt it,PetscReal rnorm,
                         KSPConvergedReason *reason,void *ctx),void *ctx,PetscErrorCode (*destroy)(void *ctx));

The final routine argument, ``ctx``, is an optional context for private
data for the user-defined convergence routine, ``test``. Other ``test``
routine arguments are the iteration number, ``it``, and the residual’s
:math:`l_2` norm, ``rnorm``. The routine for detecting convergence,
``test``, should set ``reason`` to positive for convergence, 0 for no
convergence, and negative for failure to converge.
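For illustration, a user-defined test might accept once the residual norm
falls below a fixed threshold. A minimal sketch, in which the routine
name, context structure, and threshold are all hypothetical:

.. code-block::

   /* Hypothetical convergence test: accept below a fixed threshold,
      give up after 500 iterations, otherwise keep iterating. */
   typedef struct {
     PetscReal threshold;
   } MyTestCtx;

   PetscErrorCode MyConvergenceTest(KSP ksp, PetscInt it, PetscReal rnorm, KSPConvergedReason *reason, void *ctx)
   {
     MyTestCtx *user = (MyTestCtx *)ctx;

     PetscFunctionBeginUser;
     if (rnorm < user->threshold) *reason = KSP_CONVERGED_RTOL;      /* converged (positive) */
     else if (it > 500)           *reason = KSP_DIVERGED_ITS;        /* failed (negative)    */
     else                         *reason = KSP_CONVERGED_ITERATING; /* continue (zero)      */
     PetscFunctionReturn(PETSC_SUCCESS);
   }

It would be registered with, e.g., ``KSPSetConvergenceTest(ksp,
MyConvergenceTest, &tctx, NULL)``, where ``tctx`` is a user-managed
``MyTestCtx``, so no destroy routine is supplied.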
A full list of possible values is given in the ``KSPConvergedReason``
manual page. You can use ``KSPGetConvergedReason()`` after ``KSPSolve()``
to see why convergence/divergence was detected.

.. _sec_kspmonitor:

Convergence Monitoring
^^^^^^^^^^^^^^^^^^^^^^

By default, the Krylov solvers run silently without displaying information
about the iterations. The user can indicate that the norms of the
residuals should be displayed by using ``-ksp_monitor`` within the options
database. To display the residual norms in a graphical window (running
under X Windows), one should use ``-ksp_monitor draw::draw_lg``.
Application programmers can also provide their own routines to perform the
monitoring by using the command

.. code-block::

   KSPMonitorSet(KSP ksp,PetscErrorCode (*mon)(KSP ksp,PetscInt it,PetscReal rnorm,void *ctx),
                 void *ctx,PetscErrorCode (*mondestroy)(void**));

The final routine argument, ``ctx``, is an optional context for private
data for the user-defined monitoring routine, ``mon``. Other ``mon``
routine arguments are the iteration number (``it``) and the residual’s
:math:`l_2` norm (``rnorm``). A helpful routine within user-defined
monitors is ``PetscObjectGetComm((PetscObject)ksp,MPI_Comm *comm)``, which
returns in ``comm`` the MPI communicator for the ``KSP`` context. See
:any:`sec_writing` for more discussion of the use of MPI communicators
within PETSc.

Several monitoring routines are supplied with PETSc, including

.. code-block::

   KSPMonitorResidual(KSP,PetscInt,PetscReal,void *);
   KSPMonitorSingularValue(KSP,PetscInt,PetscReal,void *);
   KSPMonitorTrueResidual(KSP,PetscInt,PetscReal,void *);

The default monitor simply prints an estimate of the :math:`l_2`-norm of
the residual at each iteration. The routine ``KSPMonitorSingularValue()``
is appropriate only for use with the conjugate gradient method or GMRES,
since it prints estimates of the extreme singular values of the
preconditioned operator at each iteration. Since
``KSPMonitorTrueResidual()`` prints the true residual at each iteration by
actually computing the residual using the formula :math:`r = b - Ax`, the
routine is slow and should be used only for testing or convergence
studies, not for timing. These monitors may be accessed with the command
line options ``-ksp_monitor``, ``-ksp_monitor_singular_value``, and
``-ksp_monitor_true_residual``. To employ the default graphical monitor,
one should use the command ``-ksp_monitor draw::draw_lg``. One can cancel
hardwired monitoring routines for KSP at runtime with
``-ksp_monitor_cancel``.

Unless the Krylov method converges so that the residual norm is small, say
:math:`10^{-10}`, many of the final digits printed with the
``-ksp_monitor`` option are meaningless. Worse, they are different on
different machines, due to different round-off rules used by, say, the IBM
RS6000 and the Sun SPARC. This makes testing between different machines
difficult. The option ``-ksp_monitor_short`` causes PETSc to print fewer
of the digits of the residual norm as it gets smaller; thus on most of the
machines it will always print the same numbers, making cross-system
testing easier.
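As an example, a custom monitor might report the residual norm only every
tenth iteration. A minimal sketch, in which the routine name is
hypothetical:

.. code-block::

   /* Hypothetical monitor: print the residual norm every 10th iteration. */
   PetscErrorCode MyMonitor(KSP ksp, PetscInt it, PetscReal rnorm, void *ctx)
   {
     MPI_Comm comm;

     PetscFunctionBeginUser;
     PetscCall(PetscObjectGetComm((PetscObject)ksp, &comm));
     if (it % 10 == 0) PetscCall(PetscPrintf(comm, "iteration %" PetscInt_FMT ": residual norm %g\n", it, (double)rnorm));
     PetscFunctionReturn(PETSC_SUCCESS);
   }

It would be registered with ``KSPMonitorSet(ksp, MyMonitor, NULL, NULL)``;
here no private context or destroy routine is needed.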
Understanding the Operator’s Spectrum
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since the convergence of Krylov subspace methods depends strongly on the
spectrum (eigenvalues) of the preconditioned operator, PETSc has specific
routines for eigenvalue approximation via the Arnoldi or Lanczos
iteration. First, before the linear solve one must call

.. code-block::

   KSPSetComputeEigenvalues(ksp,PETSC_TRUE);

Then after the ``KSP`` solve one calls

.. code-block::

   KSPComputeEigenvalues(KSP ksp,PetscInt n,PetscReal *realpart,PetscReal *complexpart,PetscInt *neig);

Here, ``n`` is the size of the two arrays and the eigenvalues are inserted
into those two arrays. ``neig`` is the number of eigenvalues computed;
this number depends on the size of the Krylov space generated during the
linear system solution; for GMRES it is never larger than the restart
parameter. There is an additional routine

.. code-block::

   KSPComputeEigenvaluesExplicitly(KSP ksp,PetscInt n,PetscReal *realpart,PetscReal *complexpart);

that is useful only for very small problems. It explicitly computes the
full representation of the preconditioned operator and calls LAPACK to
compute its eigenvalues. It should be used only for matrices of size up to
a couple hundred. The ``PetscDrawSP*()`` routines are very useful for
drawing scatter plots of the eigenvalues.

The eigenvalues may also be computed and displayed graphically with the
options database commands ``-ksp_view_eigenvalues draw`` and
``-ksp_view_eigenvalues_explicit draw``. Or they can be dumped to the
screen in ASCII text via ``-ksp_view_eigenvalues`` and
``-ksp_view_eigenvalues_explicit``.

.. _sec_flexibleksp:

Flexible Krylov Methods
^^^^^^^^^^^^^^^^^^^^^^^

Standard Krylov methods require that the preconditioner be a linear
operator; thus, for example, a standard ``KSP`` method cannot use a
``KSP`` in its preconditioner, as is common in the block Jacobi method
``PCBJACOBI``. Flexible Krylov methods are a subset of methods that allow
(with modest additional requirements on memory) the preconditioner to be
nonlinear. For example, they can be used with the ``PCKSP``
preconditioner. The flexible ``KSP`` methods have the label "Flexible" in
:any:`tab-kspdefaults`. One can use ``KSPMonitorDynamicTolerance()`` to
control the tolerances used by inner ``KSP`` solvers in ``PCKSP``,
``PCBJACOBI``, and ``PCDEFLATION``.

In addition to supporting ``PCKSP``, the flexible methods support
``KSP*SetModifyPC()``, for example, ``KSPFGMRESSetModifyPC()``. These
functions allow the user to provide a callback function that changes the
preconditioner at each Krylov iteration. Its calling sequence is as
follows.

.. code-block::

   PetscErrorCode f(KSP ksp,PetscInt total_its,PetscInt its_since_restart,PetscReal res_norm,void *ctx);

.. _sec_pipelineksp:

Pipelined Krylov Methods
^^^^^^^^^^^^^^^^^^^^^^^^

Standard Krylov methods have one or more global reductions resulting from
the computations of inner products or norms in each iteration. These
reductions need to block until all MPI ranks have received the results.
For a large number of MPI ranks (this number is machine dependent but can
be above 10,000) this synchronization is very time consuming and can
significantly slow the computation. Pipelined Krylov methods overlap the
reduction operations with local computations (generally the application of
the matrix-vector products and preconditioners), thus effectively "hiding"
the time of the reductions. In addition, they may reduce the number of
global synchronizations by rearranging the computations in a way that some
of them can be collapsed, e.g., two or more calls to ``MPI_Allreduce()``
may be combined into one call. The pipelined ``KSP`` methods have the
label "Pipeline" in :any:`tab-kspdefaults`.
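Because the method is selected at runtime, an existing application can try
a pipelined variant without any code changes; for example (the executable
name ``./app`` is hypothetical):

.. code-block:: none

   mpiexec -n 64 ./app -ksp_type pipecg -ksp_monitor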
Special configuration of MPI may be necessary for reductions to make
asynchronous progress, which is important for performance of pipelined
methods. See :any:`doc_faq_pipelined` for details.

Other KSP Options
^^^^^^^^^^^^^^^^^

To obtain the solution vector and right-hand side from a ``KSP`` context,
one uses

.. code-block::

   KSPGetSolution(KSP ksp,Vec *x);
   KSPGetRhs(KSP ksp,Vec *rhs);

During the iterative process the solution may not yet have been calculated
or it may be stored in a different location. To access the approximate
solution during the iterative process, one uses the command

.. code-block::

   KSPBuildSolution(KSP ksp,Vec w,Vec *v);

where the solution is returned in ``v``. The user can optionally provide a
vector in ``w`` as the location to store the vector; however, if ``w`` is
``NULL``, space allocated by PETSc in the ``KSP`` context is used. One
should not destroy this vector. For certain ``KSP`` methods (e.g., GMRES),
the construction of the solution is expensive, while for many others it
doesn’t even require a vector copy.

Access to the residual is done in a similar way with the command

.. code-block::

   KSPBuildResidual(KSP ksp,Vec t,Vec w,Vec *v);

Again, for GMRES and certain other methods this is an expensive operation.

.. _sec_pc:

Preconditioners
~~~~~~~~~~~~~~~

As discussed in :any:`sec_ksppc`, Krylov subspace methods are typically
used in conjunction with a preconditioner. To employ a particular
preconditioning method, the user can either select it from the options
database using input of the form ``-pc_type <methodname>`` or set the
method with the command

.. code-block::

   PCSetType(PC pc,PCType method);

In :any:`tab-pcdefaults` we summarize the basic preconditioning methods
supported in PETSc. See the ``PCType`` manual page for a complete list.
The ``PCSHELL`` preconditioner uses a specific, application-provided
preconditioner. The direct preconditioner, ``PCLU``, is, in fact, a direct
solver for the linear system that uses LU factorization. ``PCLU`` is
included as a preconditioner so that PETSc has a consistent interface
among direct and iterative linear solvers.

.. list-table:: PETSc Preconditioners (partial list)
   :name: tab-pcdefaults
   :header-rows: 1

   * - Method
     - PCType
     - Options Database Name
   * - Jacobi
     - ``PCJACOBI``
     - ``jacobi``
   * - Block Jacobi
     - ``PCBJACOBI``
     - ``bjacobi``
   * - SOR (and SSOR)
     - ``PCSOR``
     - ``sor``
   * - SOR with Eisenstat trick
     - ``PCEISENSTAT``
     - ``eisenstat``
   * - Incomplete Cholesky
     - ``PCICC``
     - ``icc``
   * - Incomplete LU
     - ``PCILU``
     - ``ilu``
   * - Additive Schwarz
     - ``PCASM``
     - ``asm``
   * - Generalized Additive Schwarz
     - ``PCGASM``
     - ``gasm``
   * - Algebraic Multigrid
     - ``PCGAMG``
     - ``gamg``
   * - Balancing Domain Decomposition by Constraints
     - ``PCBDDC``
     - ``bddc``
   * - Linear solver
     - ``PCKSP``
     - ``ksp``
   * - Combination of preconditioners
     - ``PCCOMPOSITE``
     - ``composite``
   * - LU
     - ``PCLU``
     - ``lu``
   * - Cholesky
     - ``PCCHOLESKY``
     - ``cholesky``
   * - No preconditioning
     - ``PCNONE``
     - ``none``
   * - Shell for user-defined ``PC``
     - ``PCSHELL``
     - ``shell``

Each preconditioner may have associated with it a set of options, which
can be set with routines and options database commands provided for this
purpose. Such routine names and commands are all of the form ``PC