In mathematics, a **probabilistic algorithm** is an algorithm that gives a correct answer with very high probability, but not with certainty. Such an algorithm may run much faster than one that is guaranteed to give the right answer in every case, though it may also run longer than such an algorithm.

One such algorithm, the Miller-Rabin primality test, relies on a binary relation between two positive integers *k* and *n* that can be expressed by saying that *k* "is a witness to the compositeness of" *n*. It can be shown that

- If there is a witness to the compositeness of *n*, then *n* is composite,
- If *n* is composite, then at least three-fourths of the natural numbers less than *n* are witnesses to its compositeness, and
- There is a fast algorithm that, given *k* and *n*, ascertains whether *k* is a witness to the compositeness of *n*.
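The witness check in the last item can be sketched as follows. This is a minimal illustration, not a production implementation; the function names `is_witness` and `probably_prime` are chosen here for clarity and are not part of any standard library.

```python
import random

def is_witness(k, n):
    """Return True if k is a Miller-Rabin witness to the compositeness
    of the odd integer n > 2; False means k reveals nothing (n may
    still be composite)."""
    # Write n - 1 as 2**s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(k, d, n)          # k**d mod n, computed fast
    if x == 1 or x == n - 1:
        return False          # k is not a witness
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False      # k is not a witness
    return True               # k proves that n is composite

def probably_prime(n, rounds=40):
    """Miller-Rabin test: if n is composite, each round catches it
    with probability at least 3/4, so the error probability is at
    most (1/4)**rounds."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        k = random.randrange(2, n - 1)
        if is_witness(k, n):
            return False      # definitely composite
    return True               # prime with high probability
```

For example, with *n* = 221 = 13 · 17, the base *k* = 2 is a witness, while *k* = 174 is one of the relatively few non-witnesses.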

If, using such a method, the probability of error is 2^{−1000}, a philosophical question arises: is this a proof? After all, the probability of error is distinctly smaller than the probability of a hardware error in the reader's computer, or of the reader making a mistake while checking a conventional proof. What does it mean in real terms to consider a probability this small?

If that does not seem extreme enough to be perplexing, consider a proof with an error probability of 2^{−1000000}: the user only has to leave the probabilistic algorithm running a little longer. At this level, the odds against error are not merely astronomically, but *cosmologically* vast.
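Concretely, since each round of the test errs with probability at most 1/4 = 2^{−2}, the number of independent rounds needed for a target error bound grows only linearly in the number of bits of certainty. A small sketch of this arithmetic (the helper name `rounds_needed` is chosen here for illustration):

```python
import math

def rounds_needed(bits):
    """Smallest number of Miller-Rabin rounds r such that the error
    probability (1/4)**r = 2**(-2r) is at most 2**(-bits)."""
    return math.ceil(bits / 2)

# Error at most 2**-1000 takes only 500 rounds,
# and 2**-1000000 takes 500000 rounds.
print(rounds_needed(1000))
print(rounds_needed(1000000))
```

So the move from "astronomically" to "cosmologically" small error costs only a thousandfold increase in running time, not an exponential one.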

See also: RP.