Probabilistic algorithm

This is not about the probabilistic method, a nonconstructive method of proof in mathematics, nor about Monte Carlo methods, i.e., simulations relying on pseudo-randomness.

In mathematics, a probabilistic algorithm is an algorithm that gives a correct answer with very high probability, but not with certainty. Such an algorithm may run much faster than one that is sure to give the right answer in every case, though it may also take longer than such an algorithm.

One such algorithm, the Miller-Rabin primality test, relies on a binary relation between two positive integers k and n that can be expressed by saying that k "is a witness to the compositeness of" n. It can be shown that if n is composite (i.e., n is not prime), then at least three-quarters of the numbers less than n are witnesses to its compositeness, and that if n is prime, then no number is a witness to its compositeness.

If one randomly chooses 100 numbers less than a composite number n, then the probability of failing to find such a "witness" is at most (1/4)^100, so that for most practical purposes this is a good primality test. If n is big, there may be no other test that is practical. The probability of error can be reduced to an arbitrary degree by performing enough independent tests.
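The repeated-testing scheme described above can be sketched in Python as follows. The function names and the default of 100 trials are illustrative choices, not part of any standard library; the witness check itself is the standard Miller-Rabin test.

```python
import random

def is_witness(a, n):
    """Return True if a is a Miller-Rabin witness to the compositeness of n."""
    # Write n - 1 as 2^r * d with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)  # modular exponentiation: a^d mod n
    if x == 1 or x == n - 1:
        return False  # a is not a witness
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False  # a is not a witness
    return True  # a witnesses that n is composite

def probably_prime(n, trials=100):
    """Probabilistic primality test; error probability is at most (1/4)**trials."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if is_witness(a, n):
            return False  # a witness was found: n is certainly composite
    return True  # no witness found: n is prime with very high probability
```

Note the asymmetry: a "composite" answer is always correct, since an explicit witness was found; only the "prime" answer carries the small probability of error, which shrinks geometrically with the number of trials.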

If, using such a method, the probability of error is 2^(-1000), the philosophical question arises: is this a proof? After all, the probability of error is distinctly smaller than the probability of an error in the reader's computer, or of the reader themselves making an error in reading a proof - what does it mean in real terms to consider this small a probability?

If that does not seem extreme enough to be perplexing, consider a proof with an error probability of 2^(-1000000): the user only has to leave the computer running the probabilistic algorithm a little longer. At this level, the odds against error are not only astronomically, but also cosmologically vast.

See also: RP.