We can think of a **random variable** as the numeric result of operating a non-deterministic mechanism or performing a non-deterministic experiment to generate a random result. For example, rolling a die and recording the outcome yields a random variable with range {1,2,3,4,5,6}. Picking a random person and measuring their height yields another random variable.

Mathematically, a random variable is defined as a measurable function from a probability space to some measurable space. This measurable space is the space of possible values of the variable, and it is usually taken to be the real numbers with the Borel σ-algebra, and we will always assume this in this encyclopedia, unless otherwise specified.
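The measurable-function picture can be made concrete with a small simulation. The sketch below (names and setup are our own, purely illustrative) treats the die roll as a two-step process: the mechanism picks an outcome ω from the sample space, and the random variable *X* is the function assigning a number to ω:

```python
import random

# The sample space Omega for one die roll: the six possible outcomes.
sample_space = [1, 2, 3, 4, 5, 6]

def roll_die(rng):
    """One run of the non-deterministic experiment: pick an outcome omega."""
    return rng.choice(sample_space)

def X(omega):
    """The random variable: the numeric value attached to outcome omega.
    For a die roll it is simply the identity map."""
    return omega

rng = random.Random(0)
values = [X(roll_die(rng)) for _ in range(10)]
```

For the height example, `X` would instead map each person ω to their height in centimetres; the sample space changes but the structure is the same.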


Recording the probabilities of all output ranges of a real-valued random variable *X* yields the probability distribution of *X*. The probability distribution "forgets" about the particular probability space used to define *X* and only records the probabilities of various values of *X*. Such a probability distribution can always be captured by its cumulative distribution function

*F*_{X}(*x*) = P(*X*≤*x*)
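The cumulative distribution function can be estimated directly from repeated runs of the experiment by counting how often the observed value falls at or below *x*. A minimal sketch (the function name and sample sizes are our own):

```python
import random

def empirical_cdf(samples, x):
    """Estimate F(x) = P(X <= x) as the fraction of observed samples <= x."""
    return sum(1 for s in samples if s <= x) / len(samples)

rng = random.Random(42)
die_rolls = [rng.randint(1, 6) for _ in range(100_000)]

# For a fair die, F(3) = P(X <= 3) = 3/6 = 1/2, so the estimate
# should be close to 0.5.
est = empirical_cdf(die_rolls, 3)
```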

If we have a random variable *X* on Ω and a measurable function *f*:**R**->**R**, then *Y*=*f*(*X*) will also be a random variable on Ω, since the composition of measurable functions is measurable. The same procedure that allowed one to go from a probability space (Ω,P) to (**R**,dF_{X}) can be used to obtain the probability distribution of *Y*.
The cumulative distribution function of *Y* is

- F_{Y}(*y*) = Prob(*f*(*X*)≤*y*).

Let *X* be a real-valued random variable and let *Y* = *X*^{2}. Then,

- F_{Y}(*y*) = Prob(*X*^{2}≤*y*).

- F_{Y}(*y*) = 0 if *y* < 0.

- F_{Y}(*y*) = F_{X}(√*y*) - F_{X}(-√*y*) if *y* ≥ 0.
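The identity F_{Y}(*y*) = F_{X}(√*y*) - F_{X}(-√*y*) can be checked numerically. The sketch below (our own illustration) takes *X* standard normal, so that F_{X} is the normal CDF Φ, and compares the formula against a direct Monte Carlo estimate of P(*X*^{2} ≤ *y*):

```python
import math
import random

def Phi(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(1)
xs = [rng.gauss(0.0, 1.0) for _ in range(200_000)]

y = 1.5
# Empirical F_Y(y) = P(X^2 <= y), estimated by counting ...
empirical = sum(1 for x in xs if x * x <= y) / len(xs)
# ... versus the closed-form F_X(sqrt(y)) - F_X(-sqrt(y)).
formula = Phi(math.sqrt(y)) - Phi(-math.sqrt(y))
```

The two numbers should agree to within Monte Carlo error for any *y* ≥ 0.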

The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know its "average value". This is captured by the mathematical concept of the expected value of a random variable, denoted E[*X*]. Note that in general, E[*f*(*X*)] is **not** the same as *f*(E[*X*]). Once the "average value" is known, one can ask how far from it the values of *X* typically lie, a question answered by the variance and standard deviation of a random variable.
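Both points can be verified exactly for a fair die. The sketch below (our own worked example) uses exact rational arithmetic to show that E[*f*(*X*)] ≠ *f*(E[*X*]) for *f*(*x*) = *x*^{2}, and computes the variance from the identity Var(*X*) = E[*X*^{2}] - E[*X*]^{2}:

```python
from fractions import Fraction

# A fair die: each face has probability 1/6.
faces = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

E_X = sum(p * x for x in faces)        # E[X] = 7/2
E_X2 = sum(p * x * x for x in faces)   # E[X^2] = 91/6
f_of_E = E_X ** 2                      # f(E[X]) = 49/4, not equal to E[X^2]
var = E_X2 - E_X ** 2                  # Var(X) = 35/12
```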

Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables *X*, find a collection {*f*_{i}} of functions such that the expectation values E[*f*_{i}(*X*)] fully characterise the distribution of *X*.

Much of mathematical statistics consists in proving convergence results for certain sequences of random variables; see for instance the law of large numbers and the central limit theorem.

There are various senses in which a sequence (*X*_{n}) of random variables can converge to a random variable *X*. These are explained in the article on convergence of random variables.
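One such convergence result, the law of large numbers, is easy to watch happen numerically. The sketch below (our own illustration) computes sample means of fair die rolls for increasing sample sizes; by the law of large numbers they converge to E[*X*] = 3.5:

```python
import random

rng = random.Random(0)

def sample_mean(n):
    """Mean of n independent fair die rolls."""
    return sum(rng.randint(1, 6) for _ in range(n)) / n

# The sequence of sample means X_bar_n converges to E[X] = 3.5 as n grows.
means = {n: sample_mean(n) for n in (10, 1_000, 100_000)}
```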

See also: discrete random variable, continuous random variable, probability distribution, randomness, random vector, random function, generating function