Monte Carlo methods were originally practiced under more generic names such as "statistical sampling". The "Monte Carlo" designation is a reference to the famous casino in Monaco, and was popularized by early pioneers in the field such as Stanislaw Marcin Ulam, Enrico Fermi, John von Neumann and Nick Metropolis.

Random methods of computation have their roots in the pre-electronic computing era. Perhaps the most famous early use was by Fermi in the 1930s, when he used a random method to calculate the properties of the newly discovered neutron. Monte Carlo methods were also central to the simulations required for the Manhattan Project. However, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth.


Deterministic methods of numerical integration operate by taking a number of evenly spaced samples from a function. In general, this works very well for functions of one variable. However, for functions of many variables, deterministic quadrature methods can be very inefficient. To numerically integrate a function of a two-dimensional vector, we would need equally spaced grid points over a two-dimensional surface: a 10x10 grid, for example, requires 100 points. If the vector had 100 dimensions, the same grid spacing would require 10^{100} points, which is far too many to compute. 100 dimensions is by no means unreasonable, since in many physical problems a "dimension" is equivalent to a degree of freedom, and in a three-dimensional simulation there are at least three degrees of freedom per particle.
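The cost of grid-based quadrature can be seen in a minimal sketch of the two-dimensional case. This uses the midpoint rule on a hypothetical integrand f(x, y) = x·y over the unit square (exact integral 0.25); with n points per axis the cost is n², and in d dimensions it would be nᵈ.

```python
# Midpoint-rule quadrature on a regular grid over [0,1] x [0,1].
# With n points per axis, the inner loop runs n**2 times; in d
# dimensions the same spacing would cost n**d evaluations.

def grid_integrate_2d(f, n=10):
    """Midpoint-rule estimate of the integral of f over [0,1]x[0,1]."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + 0.5) * h          # midpoint of cell i along x
            y = (j + 0.5) * h          # midpoint of cell j along y
            total += f(x, y)
    return total * h * h               # each cell has area h*h

# Hypothetical integrand: f(x, y) = x*y, whose exact integral is 0.25.
estimate = grid_integrate_2d(lambda x, y: x * y)
```

The 10x10 grid here is exactly the 100-point example above; the same function with a 100-dimensional argument would be hopeless on a grid.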

Monte Carlo methods provide a way out of this exponential growth in cost. As long as the function in question is reasonably well-behaved, we can estimate the integral by randomly selecting points in 100-dimensional space and taking some kind of "average" of the function values at those points. By the central limit theorem, we would expect this method to display "1/√*N* convergence", i.e. quadrupling the number of sampled points will halve the error, regardless of the number of dimensions.
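A minimal sketch of plain Monte Carlo integration, using the same hypothetical integrand f(x, y) = x·y over the unit square (exact value 0.25). The estimate is simply the average of f at N uniformly random points, and the statistical error shrinks like 1/√N:

```python
import random

def mc_integrate_2d(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0,1]x[0,1]."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    total = 0.0
    for _ in range(n):
        # sample a uniform random point in the unit square
        total += f(rng.random(), rng.random())
    # the average times the domain volume (here 1) estimates the integral
    return total / n

f = lambda x, y: x * y                 # exact integral over the square: 0.25
rough = mc_integrate_2d(f, 100)
fine = mc_integrate_2d(f, 100_000)
# `fine` should typically lie much closer to 0.25 than `rough` does.
```

Unlike the grid method, exactly the same loop works in 100 dimensions: only the number of random coordinates per sample changes, not the number of samples needed for a given accuracy.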

A refinement of this method is to choose the points randomly, but in such a way that they are more likely to come from regions of high contribution to the integral than from regions of low contribution. In other words, the points should be drawn from a distribution similar in form to the integrand. Unfortunately, doing this precisely is just as hard as solving the integral in the first place, but there are approximate methods available: from simply making up an integrable function thought to be similar, to one of the adaptive routines discussed in the topics listed below.

- Direct sampling methods
- Importance sampling
- Stratified sampling
- Recursive stratified sampling
- VEGAS algorithm

- Random walk Monte Carlo including Markov chains
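As an illustration of the first of these topics, here is a sketch of importance sampling for the one-dimensional integral of e^(−x)·cos(x) over [0, ∞), whose exact value is 1/2. Drawing x from the density p(x) = e^(−x), which is similar in form to the integrand, and averaging f(x)/p(x) gives a far lower-variance estimate than uniform sampling would. The choice of integrand and density here is illustrative, not from the original text.

```python
import math
import random

def importance_sample(n, seed=0):
    """Estimate the integral of exp(-x)*cos(x) on [0, inf); exact value is 0.5."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(1.0)       # draw x with density p(x) = exp(-x)
        # weight = integrand / density = exp(-x)*cos(x) / exp(-x) = cos(x)
        total += math.cos(x)
    return total / n

estimate = importance_sample(100_000)  # should land close to 0.5
```

Because the exponential factor is absorbed into the sampling density, each sample contributes only the well-behaved cos(x) term, so the variance of the estimate is small.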

Another powerful and very popular application for random numbers in numerical simulation is numerical optimisation. Here we have a function of a high-dimensional vector, and we wish to find its minimum. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the sequence of, say, 10 moves which produces the best evaluation function at the end. The traveling salesman problem is another optimisation problem.

Most Monte Carlo optimisation methods are based on random walks. Essentially, the program moves a marker around in multi-dimensional space, tending to move in directions which lead to a lower function value, but sometimes moving against the gradient.
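The random-walk idea can be sketched with a minimal Metropolis-style minimiser. Downhill moves are always accepted; uphill moves are accepted with a probability that shrinks with the size of the increase, which is what lets the walker escape shallow local minima. The step size, temperature, and test function below are illustrative assumptions, not from the original text.

```python
import math
import random

def random_walk_minimise(f, x0, steps=20_000, step_size=0.5,
                         temperature=0.1, seed=0):
    """Random-walk minimisation of f, starting from the point x0."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    best, fbest = list(x), fx
    for _ in range(steps):
        # propose a random step in every coordinate
        trial = [xi + rng.uniform(-step_size, step_size) for xi in x]
        ft = f(trial)
        # always accept downhill moves; accept uphill moves with a
        # Boltzmann-like probability exp(-increase / temperature)
        if ft < fx or rng.random() < math.exp((fx - ft) / temperature):
            x, fx = trial, ft
            if fx < fbest:             # remember the best point seen so far
                best, fbest = list(x), fx
    return best, fbest

# Hypothetical test function: a paraboloid with its minimum 0 at (1, 2).
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
best, fbest = random_walk_minimise(f, [5.0, -5.0])
```

Lowering the temperature over time turns this sketch into simulated annealing, one of the standard random-walk optimisation methods.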

- Diffusion and quantum Monte Carlo
- Semiconductor charge transport and the like
- Quasi-random numbers and self avoiding walks
- Assorted random models, e.g. self-organised criticality

- P. Kevin MacKeown, *Stochastic Simulation in Physics*, 1997, ISBN 981-3083-26-3