Numerical methods that are based on or utilize the idea of iteration. Such methods are widely used in the solution of many different types of problem, ranging from linear and nonlinear optimization to discretized systems of partial differential equations. Starting from an initial estimate x0 of the solution x*, the methods generate a sequence of approximations x0, x1, x2,…. The main objectives are to design methods that will converge from poor initial estimates and also converge rapidly in the vicinity of x*. Different ideas may be employed in these two phases. Newton’s method, together with its variants, is of fundamental importance for all types of nonlinear equations.
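As a concrete illustration, the following is a minimal sketch of Newton's method for a single nonlinear equation f(x) = 0; the function name newton, the tolerance, and the example equation x^2 − 2 = 0 are illustrative choices rather than part of this entry.

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:      # stop once the update is negligible
                return x
        return x                     # may not have converged from a poor x0

    # Example: solve x^2 - 2 = 0 starting from x0 = 1
    root = newton(lambda x: x*x - 2, lambda x: 2*x, x0=1.0)
    print(root)                      # approximately 1.4142135623731

Close to x* the convergence of Newton's method is quadratic, but from a poor initial estimate the iteration may fail, which is why different ideas are employed in the two phases mentioned above.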
For the linear system Ax = b where A is large and perhaps sparse (see sparse matrix), or has some other special structure, an important class of iterative methods is obtained by ‘splitting’ A into the form A = M − N. The splitting is such that systems of the form Mz = d are ‘easy’ to solve, e.g. M could be lower triangular. The iteration then takes the form
    Mx_{k+1} = Nx_k + b,   k = 0, 1, 2,…,

where x0 is an approximation to the solution. Convergence for any x0 is guaranteed if all the eigenvalues (see eigenvalue problems) of M⁻¹N have modulus less than one. The objective is to choose splittings for which each step is efficient and the convergence is rapid.
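As a sketch of such a splitting iteration, the fragment below takes M to be the lower triangular part of A (the Gauss–Seidel choice), so that Mz = d is solved by forward substitution; the function name splitting_iteration and the small test system are illustrative assumptions, not part of this entry.

    import numpy as np

    def splitting_iteration(A, b, x0, num_iters=100, tol=1e-10):
        """Iterate M x_{k+1} = N x_k + b for the splitting A = M - N,
        with M the lower triangular part of A (the Gauss-Seidel choice)."""
        M = np.tril(A)       # triangular, so Mz = d is 'easy' to solve
        N = M - A            # ensures A = M - N
        x = x0.astype(float)
        for _ in range(num_iters):
            x_new = np.linalg.solve(M, N @ x + b)   # forward substitution in practice
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])          # diagonally dominant example
    b = np.array([1.0, 2.0])
    # spectral radius of M^{-1}N is below one here, so convergence is guaranteed
    M, N = np.tril(A), np.tril(A) - A
    print(max(abs(np.linalg.eigvals(np.linalg.solve(M, N)))))
    print(splitting_iteration(A, b, np.zeros(2)))

Choosing M = D (the diagonal of A) instead gives the Jacobi iteration; the trade-off is between the cost of solving Mz = d at each step and the size of the spectral radius of M⁻¹N.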
In partial differential equations, linear systems arise for which the method of successive over-relaxation is particularly suitable. This is given by
    (D + ωL)x_{k+1} = ((1 − ω)D − ωU)x_k + ωb,   k = 0, 1, 2,…,

where A = D + L + U, D consists of the diagonal elements of A, and L, U are respectively the strictly lower and strictly upper triangular parts. The scalar ω is a free parameter, chosen to maximize the rate of convergence. For special problems in partial differential equations, optimal values of ω can be computed. More recently, successive over-relaxation has also become an important technique in the multigrid method.
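A minimal componentwise sketch of successive over-relaxation is given below, assuming a dense numpy array; the function name sor, the test system, and the value of ω are illustrative assumptions (ω = 1 reduces to the Gauss–Seidel iteration).

    import numpy as np

    def sor(A, b, x0, omega=1.5, num_iters=200, tol=1e-10):
        """One sweep applies (D + wL) x_{k+1} = ((1 - w)D - wU) x_k + w b,
        updating the components of x in place."""
        n = len(b)
        x = x0.astype(float)
        for _ in range(num_iters):
            x_old = x.copy()
            for i in range(n):
                # components j < i are already updated, j > i still hold old values
                sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                break
        return x

    # Small symmetric positive definite system; SOR converges for 0 < omega < 2
    A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
    b = np.array([2.0, 4.0, 10.0])
    print(sor(A, b, np.zeros(3), omega=1.2))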