#include "petscksp.h" PetscErrorCode KSPSolve(KSP ksp,Vec b,Vec x)Collective on KSP
ksp - iterative context obtained from KSPCreate()
b - the right hand side vector
x - the solution (this may be the same vector as b, in which case b will be overwritten with the answer)
-ksp_compute_eigenvalues - compute the eigenvalues of the preconditioned operator
-ksp_plot_eigenvalues - plot the computed eigenvalues in an X-window
-ksp_plot_eigencontours - plot the computed eigenvalues in an X-window with contours
-ksp_compute_eigenvalues_explicitly - compute the eigenvalues by forming the dense operator and using LAPACK
-ksp_plot_eigenvalues_explicitly - plot the explicitly computed eigenvalues
-ksp_view_mat binary - save the matrix to the default binary viewer
-ksp_view_pmat binary - save the matrix used to build the preconditioner to the default binary viewer
-ksp_view_rhs binary - save the right hand side vector to the default binary viewer
-ksp_view_solution binary - save the computed solution vector to the default binary viewer (can be read later with src/ksp/examples/tutorials/ex10.c for testing solvers)
-ksp_view_mat_explicit - for matrix-free operators, compute the matrix entries and view them
-ksp_view_preconditioned_operator_explicit - compute the product of the preconditioner and the matrix as an explicit matrix and view it
-ksp_converged_reason - print the reason for convergence or divergence, along with the number of iterations
-ksp_final_residual - print the 2-norm of the true linear system residual at the end of the solution process
-ksp_view - print the KSP data structure at the end of the system solution
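These options are read from the PETSc options database. The following is a minimal sketch of a typical call sequence, assuming the Mat A and the Vecs b and x have already been created and assembled elsewhere (error handling uses the usual ierr/CHKERRQ convention):

  KSP            ksp;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);   /* operator and matrix used to build the preconditioner */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);     /* picks up -ksp_xxx options such as those listed above */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);

Running the program with, for example, -ksp_converged_reason -ksp_view then reports the convergence outcome and the solver configuration.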
If one uses KSPSetDM() then x or b need not be passed. Use KSPGetSolution() to access the solution in this case.
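For illustration, a sketch of this DM-based usage follows; ComputeRHS and ComputeMatrix are hypothetical user-supplied callbacks registered with KSPSetComputeRHS() and KSPSetComputeOperators():

  ierr = KSPSetDM(ksp,dm);CHKERRQ(ierr);                               /* dm describes the discretization */
  ierr = KSPSetComputeRHS(ksp,ComputeRHS,NULL);CHKERRQ(ierr);          /* callback that assembles b */
  ierr = KSPSetComputeOperators(ksp,ComputeMatrix,NULL);CHKERRQ(ierr); /* callback that assembles the operator */
  ierr = KSPSolve(ksp,NULL,NULL);CHKERRQ(ierr);                        /* neither b nor x is passed */
  ierr = KSPGetSolution(ksp,&x);CHKERRQ(ierr);                         /* access the internally held solution */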
The operator is specified with KSPSetOperators().
Call KSPGetConvergedReason() to determine if the solver converged or failed and why. The number of iterations can be obtained from KSPGetIterationNumber().
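For example, a short sketch of checking the outcome after KSPSolve() returns (continuing the call sequence above):

  KSPConvergedReason reason;
  PetscInt           its;

  ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr);
  if (reason < 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD,"Solve diverged (reason %d) after %D iterations\n",(int)reason,its);CHKERRQ(ierr);
  } else {
    ierr = PetscPrintf(PETSC_COMM_WORLD,"Solve converged in %D iterations\n",its);CHKERRQ(ierr);
  }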
If you have attached null spaces to the matrix with MatSetNullSpace() and MatSetTransposeNullSpace(), KSPSolve() will use that information to solve singular systems in the least squares sense with a norm-minimizing solution.
Consider A x = b, where b = b_p + b_t and b_t is not in the range of A (and hence, by the fundamental theorem of linear algebra, lies in nullspace(A'); see MatSetNullSpace()).
KSP first removes b_t, producing the linear system A x = b_p (which has multiple solutions), and solves this to find the ||x||-minimizing solution (that is, the solution x orthogonal to nullspace(A)).
The algorithm is simple: in each iteration of the Krylov method the nullspace(A) component is removed from the search direction, so the solution, being a linear combination of the search directions, has no component in nullspace(A).
We recommend always using GMRES for such singular systems.
If nullspace(A) = nullspace(A') (note that symmetric matrices always satisfy this property), then both left and right preconditioning will work.
If nullspace(A) != nullspace(A'), then left preconditioning will work but right preconditioning may or may not work.
Developer Note: The reason we cannot always solve nullspace(A) != nullspace(A') systems with right preconditioning is that we would need to remove, at each iteration, the nullspace(AB) from the search direction. While we know nullspace(A), nullspace(AB) equals B^-1 times nullspace(A); except for trivial preconditioners such as diagonal scaling, we cannot apply the inverse of the preconditioner to a vector and thus cannot compute nullspace(AB).
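As an illustration, a sketch of attaching the null space before the solve, under the assumption that nullspace(A) consists of the constant vectors (e.g., a pure Neumann problem) and that A is symmetric, so nullspace(A) = nullspace(A'):

  MatNullSpace nullsp;

  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nullsp);CHKERRQ(ierr); /* null space of constant vectors */
  ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);
  ierr = MatSetTransposeNullSpace(A,nullsp);CHKERRQ(ierr); /* valid here because A is assumed symmetric */
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
  ierr = KSPSetType(ksp,KSPGMRES);CHKERRQ(ierr);           /* GMRES is recommended for such singular systems */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);                  /* null-space components are removed as described above */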
If using a direct method (e.g., via the KSP solver KSPPREONLY and a preconditioner such as PCLU/PCILU), then the reported number of iterations is its = 1. See KSPSetTolerances() and KSPConvergedDefault() for more details.
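A sketch of such a direct solve, selecting KSPPREONLY with an LU preconditioner (assuming ksp, b, and x exist as above):

  PC       pc;
  PetscInt its;

  ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);      /* apply the preconditioner exactly once */
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);              /* complete LU factorization */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr); /* reports its = 1 */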