**Numerical analysis** is the branch of applied mathematics that studies methods and algorithms for finding (approximate) numerical solutions to various mathematical problems, using a finite sequence of arithmetic and logical operations. Most numerical methods build on the theory of linear algebra.


The problems considered by numerical analysis include:

- Computing values of functions:
- Evaluating polynomials using Horner's rule
- Handling round-off error estimation
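
For instance, Horner's rule evaluates a degree-*n* polynomial with only *n* multiplications and *n* additions. A minimal sketch (the function name is illustrative):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x, coefficients given in decreasing-degree order."""
    result = 0.0
    for c in coeffs:
        # Each step folds one coefficient in: result = result*x + c
        result = result * x + c
    return result

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # → 5.0
```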

- Solving linear equations and systems of linear equations, for example by
- Gauss-Jordan elimination
- LU-factorization
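
As a rough illustration of elimination methods (a simplified sketch, not a production solver), Gaussian elimination with partial pivoting can be written as:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of row lists, b a list of right-hand sides; inputs are copied.
    """
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting: bring the largest entry in column k to the diagonal
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Eliminate column k below the diagonal
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution on the resulting upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # → [1.0, 3.0]
```

The pivoting step is what keeps the elimination numerically stable in practice.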

- Approximating solutions to nonlinear problems, often by linearization, in particular using Newton's method
- Regression
- Least squares method
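
In the simplest least-squares setting, fitting a straight line y = a + bx to data points, the coefficients follow from the normal equations. A small sketch (function name illustrative):

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Closed-form solution of the 2x2 normal equations
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Points lying exactly on y = 1 + 2x are recovered exactly.
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))  # → (1.0, 2.0)
```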

- Evaluating integrals numerically (numerical integration, also known as quadrature)
- Solving differential equations and partial differential equations, or approximating their solutions
- Interpolation using
- Linear interpolation
- Polynomial interpolation, while keeping an eye on Runge's phenomenon
- Splines (linear, quadratic, cubic)
- Wavelets

- Extrapolation
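
Linear interpolation, the simplest of the interpolation schemes above, can be sketched as follows (a minimal illustration over sorted sample points):

```python
import bisect

def interp(xs, ys, x):
    """Piecewise-linear interpolation through sorted sample points (xs, ys)."""
    i = bisect.bisect_right(xs, x)   # index of the first sample greater than x
    i = min(max(i, 1), len(xs) - 1)  # clamp so (i-1, i) is a valid segment
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Halfway between the samples (1, 10) and (2, 40):
print(interp([0.0, 1.0, 2.0], [0.0, 10.0, 40.0], 1.5))  # → 25.0
```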

A good numerical method is judged by several criteria:

- *Stability* - the errors in the approximation do not grow in an uncontrolled manner as the computation increases in size and scope.
- *Accuracy* - the numerical approximation should be as accurate as possible, and it should be possible to estimate the error on some scale.
- *Efficiency* - the faster the computation, the better the method. There are, however, trade-offs between accuracy and efficiency; fast convergence of solutions is nevertheless to be sought.

Computers are an essential tool in numerical analysis, but the field predates computers by many centuries; indeed, computers were to a large extent invented in order to solve numerical problems, not the other way around. Taylor approximation, a product of the seventeenth and eighteenth centuries, is still very important. The logarithm tables of the seventeenth century are no longer vital to numerical analysis, but the associated, even prehistoric, notion of interpolation continues to solve problems for us.

Floating point number representations are used extensively in modern computers. For many problems their behavior can be unexpected, unless care is taken, using numerical analysis, to ensure that they do not misbehave.
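A classic illustration: in binary floating point the decimal fraction 0.1 has no exact representation, so naive equality tests on decimal arithmetic fail:

```python
import math

# 0.1 and 0.2 are both rounded to nearby binary fractions, and the
# rounding errors do not cancel:
print(0.1 + 0.2 == 0.3)              # → False
print(0.1 + 0.2)                     # → 0.30000000000000004

# The usual remedy is to compare with a tolerance instead:
print(math.isclose(0.1 + 0.2, 0.3))  # → True
```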

A well-conditioned mathematical problem is, roughly speaking, one whose solution changes by only a small amount if the problem data are changed by a small amount. The analogous concept for the numerical algorithm for solving the problem is that of numerical stability: an algorithm for solving a well-conditioned problem is numerically stable if the result of the algorithm changes only a small amount if the data change a little.

An algorithm that solves a well-conditioned problem may or may not be numerically stable. An art of numerical analysis is to find a stable algorithm for solving a mathematical problem.

The study of the generation and propagation of round-off errors in the course of a computation is an important part of numerical analysis. Subtraction of two nearly equal numbers is an ill-conditioned operation, producing catastrophic loss of significance.
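A standard example of such cancellation: for small x, computing 1 - cos(x) directly subtracts two nearly equal numbers, while the algebraically equivalent form 2 sin²(x/2) avoids the subtraction entirely:

```python
import math

x = 1e-8
naive = 1.0 - math.cos(x)            # subtracts two nearly equal numbers
stable = 2.0 * math.sin(x / 2) ** 2  # algebraically equal, no cancellation

print(naive)   # → 0.0 (all significant digits lost: cos(x) rounds to 1.0)
print(stable)  # close to the true value x**2/2 = 5e-17
```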

The effect of round-off error is partly quantified in the condition number of an operator.

One fundamental problem is the determination of zeros of a given function. Various algorithms have been developed. If the function is differentiable and the derivative is known, then Newton's method is a popular choice.
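A minimal sketch of Newton's method, which repeatedly replaces the function by its tangent line and solves for the tangent's zero (names and tolerances here are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a zero of f near x0 using Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)  # Newton update: x_{n+1} = x_n - f(x_n)/f'(x_n)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Solve x**2 - 2 = 0, i.e. approximate sqrt(2); convergence is quadratic.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # close to 1.4142135623730951
```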

Numerical analysis is also concerned with computing, in an approximate way, the solutions of partial differential equations. This is done by first discretizing the equation, bringing it into a finite-dimensional subspace, and then solving the resulting linear system in this finite-dimensional space. The first stage is carried out by the finite element method, finite difference methods, or (particularly in engineering) finite volume methods. The theoretical justification of these methods often involves theorems from functional analysis.

The linear systems that come from discretized partial differential equations can then be solved by a variant of Gauss-Jordan elimination, by an iterative method such as conjugate gradients, or by multigrid methods.
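As an illustrative sketch of the whole pipeline (using Jacobi iteration, a simpler relative of the iterative methods named above), the 1D Poisson problem -u'' = f with zero boundary values, discretized by central finite differences, gives a tridiagonal system:

```python
def jacobi_poisson_1d(f, n, sweeps=1000):
    """Solve -u'' = f on (0,1) with u(0) = u(1) = 0 by central finite
    differences on n interior points, then Jacobi iteration."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                     # interior points plus boundary zeros
    rhs = [f(i * h) for i in range(n + 2)]  # right-hand side sampled on the grid
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n + 1):
            # Jacobi update for the stencil (-u[i-1] + 2u[i] - u[i+1])/h^2 = f[i]
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * rhs[i])
        u = new
    return u

# -u'' = 2 with zero boundary values has exact solution u(x) = x(1 - x).
u = jacobi_poisson_1d(lambda x: 2.0, 7)
print(round(u[4], 6))  # → 0.25, the exact value at the midpoint x = 0.5
```

Jacobi converges slowly on fine grids, which is precisely the weakness that conjugate gradient and multigrid methods address.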

For very large problems, the partial differential equation can be split into smaller subproblems and solved in parallel, as in domain decomposition methods.