In the case of a linear system, the two main classes are the **stationary iterative methods** and the more general **Krylov subspace methods**.


Stationary iterative methods solve a linear system with an operator approximating the original one; based on a measurement of the error (the residual), they form a correction equation, and this process is repeated. While these methods are simple to derive, implement, and analyse, convergence is only guaranteed for a limited class of matrices.
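As a minimal sketch of a stationary method, the classical Jacobi iteration approximates the operator by its diagonal and repeatedly corrects the current iterate from the residual. The function name and the small test system below are illustrative, not from the source:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: split A into its diagonal D and remainder R,
    then repeatedly solve D x_new = b - R x.  Convergence is only
    guaranteed for a limited class of matrices, e.g. strictly
    diagonally dominant ones."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)                  # diagonal part of A (as a vector)
    R = A - np.diagflat(D)          # off-diagonal remainder
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # correction step from the residual
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system, so the iteration converges.
A = [[4.0, 1.0], [2.0, 5.0]]
b = [1.0, 2.0]
x = jacobi(A, b)
```

Here the "approximating operator" is just the diagonal of `A`, which is trivial to invert; richer splittings (Gauss–Seidel, SOR) follow the same pattern with a different approximation.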

Krylov subspace methods form an orthogonal basis of the sequence of successive matrix powers times the initial residual (the **Krylov sequence**). Approximations to the solution are then obtained by minimizing the residual over this subspace. The prototypical method in this class is the conjugate gradient method.
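A minimal numerical sketch of this idea, on a made-up symmetric positive definite test system: build the Krylov sequence from the initial residual, orthogonalize it, and minimize the residual norm over the resulting subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)      # well-conditioned SPD test matrix
b = rng.standard_normal(N)

# Krylov sequence with x0 = 0, so the initial residual r0 equals b:
# {r0, A r0, A^2 r0, ...}
K = np.empty((N, N))
v = b.copy()
for k in range(N):
    K[:, k] = v
    v = A @ v

# The raw sequence is often ill-conditioned, so orthogonalize it first,
# then minimize the residual ||A x - b|| over the resulting subspace.
Q, _ = np.linalg.qr(K)
y, *_ = np.linalg.lstsq(A @ Q, b, rcond=None)
x = Q @ y
```

Practical Krylov methods (CG, GMRES) build the orthogonal basis incrementally and never form the matrix powers explicitly; the dense least-squares solve here is only for illustration.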

Since these methods form a basis, the method evidently converges in at most *N* iterations, where *N* is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice *N* can be very large, and the iterative process reaches sufficient accuracy much earlier. The analysis of these methods is hard, depending in a complicated way on the spectrum of the operator.
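The finite-termination property can be checked numerically with a textbook conjugate gradient sketch (function name and test data here are illustrative assumptions): on a small symmetric positive definite system, good accuracy is reached within *N* iterations up to rounding:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12, max_iter=None):
    """Textbook CG for symmetric positive definite A.  In exact
    arithmetic it terminates in at most N steps; rounding errors can
    delay this, but sufficient accuracy usually comes much sooner."""
    N = len(b)
    max_iter = N if max_iter is None else max_iter
    x = np.zeros(N)
    r = b - A @ x                  # initial residual
    p = r.copy()                   # first search direction
    rs = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(1)
N = 8
M = rng.standard_normal((N, N))
A = M @ M.T + N * np.eye(N)        # symmetric positive definite
b = rng.standard_normal(N)
x, iters = conjugate_gradient(A, b)
```

On well-conditioned or favourably clustered spectra, `iters` is typically far below *N*, which is the practical appeal of the class.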

The term **iterative method** also refers to a class of software development processes, for example Barry Boehm's 1980s spiral model, which influenced extreme programming practices in the 2000s. See iterative development for details.