
**Methods**

Numerical integration is performed for three main reasons:

- The function is known only at certain discrete points, for example when it has been obtained by sampling. Many embedded systems and other computer applications need numerical integration for this reason.
- We know the function, but calculating the integral analytically is impossible, because a primitive function (antiderivative), which is needed for the integration, cannot be obtained in closed form. (One such function is the probability density function of the normal distribution.)
- We know the function, but it is *too hard* to integrate analytically, and we want to fall back on approximation.

The most primitive numerical integration method is suggested by the Riemann integral:

- ∫₀¹ *f*(*x*) d*x* ≈ (1/*n*) [ *f*(1/*n*) + *f*(2/*n*) + ⋯ + *f*(*n*/*n*) ]   (*)

The disadvantage of this algorithm is that it has extremely poor convergence properties: to get any precision at all, one needs *n* to be extremely large, even for very well-behaved functions.

We note at this point that the step function in (*) was built by approximating *f*(*x*) over each interval by the value of *f* at *x*=(*k*+1)/*n*, that is, the right-hand side of the interval. Had we chosen the middle of the interval instead, for a large class of functions (at least all twice differentiable functions) the error term of the algorithm would have been at least one order of magnitude better.
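To make the comparison concrete, here is a small Python sketch (the integrand sin and the names `riemann_right` and `riemann_mid` are illustrative choices, not from the text):

```python
import math

def riemann_right(f, n):
    """Step-function approximation of the integral of f over [0, 1],
    sampling f at the right-hand side of each of the n subintervals."""
    return sum(f((k + 1) / n) for k in range(n)) / n

def riemann_mid(f, n):
    """The same, but sampling f at the middle of each subinterval."""
    return sum(f((k + 0.5) / n) for k in range(n)) / n

exact = 1 - math.cos(1)  # integral of sin(x) over [0, 1]
for n in (10, 100, 1000):
    print(n, abs(riemann_right(math.sin, n) - exact),
             abs(riemann_mid(math.sin, n) - exact))
```

In this run the right-endpoint error shrinks roughly like 1/*n*, while the midpoint error shrinks like 1/*n*², consistent with the claim above.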

Instead of using linear polynomials, one can use polynomials of any degree; such algorithms are called *quadrature* algorithms.

Interpolation with polynomials on equally spaced intervals, i.e. where the integration span is divided into equidistant subintervals, corresponds to the Newton-Cotes formulas. In this group you will find the *midpoint rule*, which uses an interpolating polynomial of degree 0; the *trapezoid rule*, of degree 1; and, at degree 2, a quadratic polynomial interpolating three points at a time. Using quadratic polynomials leads to Simpson's rule.
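A minimal Python sketch of the first two composite Newton-Cotes rules (assuming the standard composite forms; the function names and the test integrand are mine):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule (degree-1 Newton-Cotes) on n subintervals."""
    h = (b - a) / n
    return h * ((f(a) + f(b)) / 2 + sum(f(a + k * h) for k in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson's rule (degree-2 Newton-Cotes); n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))
    return s * h / 3

exact = 1 - math.cos(1)  # integral of sin(x) over [0, 1]
print(abs(trapezoid(math.sin, 0, 1, 10) - exact))
print(abs(simpson(math.sin, 0, 1, 10) - exact))
```

With the same ten subintervals, the trapezoid error behaves like *h*² and the Simpson error like *h*⁴, so the higher-degree rule is dramatically more accurate here.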

If we drop the requirement of equally spaced intervals, we find another group of quadrature formulas, the best of which is known as Gaussian quadrature. (Another such quadrature formula is obtained by interpolating with Chebyshev polynomials.) The error of such a quadrature can be shown to be very small if the function *f* is very smooth (that is, if it has many derivatives).
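For illustration, the two-point Gauss-Legendre rule places its nodes at ±1/√3 on [−1, 1] with equal weights, and is already exact for all polynomials up to degree 3; the sketch below assumes nothing beyond those standard nodes:

```python
import math

def gauss2(f, a, b):
    """Two-point Gauss-Legendre quadrature over [a, b]; exact for
    polynomials up to degree 3 despite using only two evaluations."""
    mid, half = (a + b) / 2, (b - a) / 2
    dx = half / math.sqrt(3)  # nodes sit at mid +/- half/sqrt(3)
    return half * (f(mid - dx) + f(mid + dx))

print(gauss2(lambda t: t ** 3, 0, 1))  # integral of t^3 over [0, 1] is 1/4
```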

```
// This algorithm calculates the definite integral of a function
// from 0 to 1, adaptively, by choosing smaller steps near
// problematic points.
// Set initial_h to the initial step size.
x := 0
h := initial_h
accumulator := 0
WHILE x < 1.0 DO
  IF x + h > 1.0 THEN
    h := 1.0 - x
  END IF
  IF error from quadrature over [x, x+h] for f is too large THEN
    Make h smaller
  ELSE
    accumulator := accumulator + quadrature of f over [x, x+h]
    x := x + h
    IF error from quadrature over [x, x+h] is very small THEN
      Make h larger
    END IF
  END IF
END WHILE
RETURN accumulator
```

Some details of the algorithm require careful thought. For many cases, estimating the error from quadrature over an interval for a function *f* is not obvious. One popular solution is to use two different rules of quadrature and take their difference as an estimate of the error from quadrature. The other problem is deciding what "too large" or "very small" signify. A possible criterion for "too large" is that the quadrature error should not be larger than *t* · *h*, where *t*, a real number, is the tolerance we wish to set for the global error.

Of course, more sophisticated heuristics are possible.
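One way to flesh the pseudocode out in Python, using the difference between a trapezoid and a Simpson estimate as the local error estimate (the `tol * h` criterion and the halve/double step adjustments are illustrative choices, not prescribed by the text):

```python
import math

def adaptive_quadrature(f, tol=1e-8, initial_h=0.1):
    """Integrate f over [0, 1] adaptively: halve the step where the local
    error estimate is too large, double it where the estimate is very small."""
    x, h, acc = 0.0, initial_h, 0.0
    while x < 1.0:
        if x + h > 1.0:
            h = 1.0 - x
        trap = h * (f(x) + f(x + h)) / 2
        simp = h * (f(x) + 4 * f(x + h / 2) + f(x + h)) / 6
        err = abs(simp - trap)           # estimate of the local error
        if err > tol * h and h > 1e-12:  # "too large": make h smaller
            h /= 2
        else:
            acc += simp
            x += h
            if err < tol * h / 10:       # "very small": make h larger
                h *= 2
    return acc

print(adaptive_quadrature(lambda x: 4 / (1 + x * x)), math.pi)
```

The integrand 4/(1 + *x*²) is chosen arbitrarily because its integral over [0, 1] is known to be π, which makes the result easy to check.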

Let *f* have a bounded first derivative over [*a*, *b*]. The mean value theorem for *f*, where *a* ≤ *x* ≤ *b*, gives (*x* − *a*) *f*′(*v*) = *f*(*x*) − *f*(*a*) for some *v* in [*a*, *x*] depending on *x*. Integrating from *a* to *b* and taking absolute values gives

- |∫ₐᵇ *f*(*x*) d*x* − (*b* − *a*) *f*(*a*)| ≤ ((*b* − *a*)²/2) · sup over [*a*, *b*] of |*f*′|   (**)
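A quick numerical check of the a-priori bound |∫ₐᵇ *f*(*x*) d*x* − (*b* − *a*) *f*(*a*)| ≤ ((*b* − *a*)²/2) · sup |*f*′|, for the arbitrarily chosen *f* = sin on [0, 1]:

```python
import math

a, b = 0.0, 1.0
exact = 1 - math.cos(1)            # integral of sin(x) over [0, 1]
one_step = (b - a) * math.sin(a)   # one-step rule using the left endpoint
bound = (b - a) ** 2 / 2 * 1.0     # sup of |cos x| on [0, 1] is 1
print(abs(exact - one_step), "<=", bound)
```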

This integration method can be combined with interval arithmetic to produce computer proofs and *verified* calculations.

**See also**: Runge-Kutta method, Numerical partial differential equations, Finite element method, Monte Carlo simulation.