Statistical regularity has motivated the development of the relative frequency concept of probability. Most of the procedures commonly used to make statistical estimates or tests were developed by statisticians who used this concept exclusively. They are usually called **frequentists**, and their position is called **frequentism**. This school is often associated with the names of Jerzy Neyman and Egon Pearson who described the logic of statistical hypothesis testing. Other influential figures of the frequentist school include John Venn, R.A. Fisher, and Richard von Mises.

Since the 18th century, there has been a debate among statisticians between the frequentists and the Bayesians. The former insisted that statistical procedures make sense only when one uses the relative frequency concept; the Bayesians supported the use of degrees of belief as a basis for statistical practice.

The frequentist position is the one you have probably heard at school: perform an experiment many times and measure the proportion of trials in which you get a positive result. This proportion, as the number of trials grows, is the probability.
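This idea of statistical regularity can be illustrated with a small simulation. The sketch below (an illustrative example, not from the original text; the function name and the success probability of 0.5 are assumptions) repeatedly runs a random "experiment" and shows how the observed proportion of positive results stabilizes as the number of trials grows:

```python
import random

def relative_frequency(trials, p=0.5, seed=42):
    """Estimate a probability by relative frequency, in the frequentist spirit.

    Runs `trials` independent experiments, each succeeding with true
    probability `p`, and returns the observed proportion of successes.
    The seed is fixed only to make the illustration reproducible.
    """
    rng = random.Random(seed)
    successes = sum(1 for _ in range(trials) if rng.random() < p)
    return successes / trials

# As the number of trials grows, the proportion settles near p:
for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))
```

With few trials the proportion fluctuates noticeably; with many it hovers close to the true value, which is exactly the regularity the frequentist definition relies on.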

The problem arises in cases where we have not yet performed an experiment, or where there is no possible way an experiment could be performed; in these cases, frequentism cannot help us. There is also the *category problem*, usually illustrated by questions such as: what is the probability that the Sun will rise tomorrow? Is it:

- undefined, because we have never tested the Sun to see if it will rise tomorrow?
- 1, because every time we have tested to see whether the Sun will rise, it has risen?
- 1 - e, where e is the proportion of observable stars per day that go supernova?

See also: Statistics -- statistical regularity -- probability axioms -- personal probability -- eclectic probability -- Probability -- Games of chance