Similarly, when we assert that two random variables are independent, we intuitively mean that knowing something about the value of one of them does not yield any information about the value of the other. For instance, the height of a person and their IQ are independent random variables.
Another typical example of two independent variables is given by repeating an experiment: roll a die twice, let *X* be the number you get the first time, and *Y* the number you get the second time. These two variables are independent.
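The die-rolling example can be checked by exhaustive enumeration. The sketch below (a minimal illustration, not part of the original text) lists all 36 equally likely outcomes and verifies that every joint probability *P*(*X*=*a*, *Y*=*b*) factors into the product of the marginals:

```python
from fractions import Fraction
from itertools import product

# Model two die rolls by enumerating all 36 equally likely outcomes (x, y).
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event (a predicate on an outcome) under the uniform measure."""
    hits = [o for o in outcomes if event(o)]
    return Fraction(len(hits), len(outcomes))

# For every pair (a, b), the joint probability equals the product of marginals.
for a, b in product(range(1, 7), repeat=2):
    joint = prob(lambda o: o == (a, b))
    marginal_x = prob(lambda o: o[0] == a)
    marginal_y = prob(lambda o: o[1] == b)
    assert joint == marginal_x * marginal_y  # 1/36 == 1/6 * 1/6
```

Exact rational arithmetic avoids the rounding noise a floating-point or simulation-based check would introduce.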

We define two events *E*_{1} and *E*_{2} of a probability space to be *independent* iff

*P*(*E*_{1}∩*E*_{2}) =*P*(*E*_{1}) ·*P*(*E*_{2}).

If *P*(*E*_{2}) ≠ 0, then the independence of *E*_{1} and *E*_{2} can also be expressed with conditional probabilities:

*P*(*E*_{1}|*E*_{2}) =*P*(*E*_{1}).
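A small worked example (chosen here for illustration, not taken from the text): on a single fair die, let *E*_{1} be "the roll is even" and *E*_{2} be "the roll is at most 4". These events turn out to be independent, and the conditional-probability formulation agrees:

```python
from fractions import Fraction

# A single fair die; events are subsets of the sample space {1, ..., 6}.
omega = set(range(1, 7))
E1 = {2, 4, 6}      # "the roll is even"
E2 = {1, 2, 3, 4}   # "the roll is at most 4"

def P(event):
    return Fraction(len(event), len(omega))

assert P(E1 & E2) == P(E1) * P(E2)   # 1/3 == 1/2 * 2/3, so independent
assert P(E1 & E2) / P(E2) == P(E1)   # equivalently, P(E1|E2) = P(E1)
```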

If we have more than two events, then pairwise independence is insufficient to capture the intuitive sense of independence.
So a set *S* of events is said to be independent if every finite nonempty subset { *E*_{1}, ..., *E*_{n} } of *S* satisfies

*P*(*E*_{1}∩ ... ∩*E*_{n}) =*P*(*E*_{1}) · ... ·*P*(*E*_{n}).
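To see why pairwise independence is not enough, consider the classic counterexample (supplied here as an illustration, not from the original text): flip two fair coins, and let *A* = "first flip is heads", *B* = "second flip is heads", *C* = "the two flips agree". Each pair of events is independent, but the three together are not:

```python
from fractions import Fraction
from itertools import product

# Sample space: all four equally likely outcomes of two fair coin flips.
omega = list(product("HT", repeat=2))

def P(event):
    return Fraction(sum(event(o) for o in omega), len(omega))

A = lambda o: o[0] == "H"     # first flip is heads
B = lambda o: o[1] == "H"     # second flip is heads
C = lambda o: o[0] == o[1]    # the two flips agree

def both(e1, e2):
    return lambda o: e1(o) and e2(o)

# Every pair is independent ...
assert P(both(A, B)) == P(A) * P(B)
assert P(both(A, C)) == P(A) * P(C)
assert P(both(B, C)) == P(B) * P(C)

# ... but the triple fails: P(A∩B∩C) = 1/4, while the product is 1/8.
assert P(both(both(A, B), C)) != P(A) * P(B) * P(C)
```

Knowing any two of the three events determines the third, which is exactly the dependence the full product condition detects.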

We define random variables *X* and *Y* to be independent if, for all *x* and *y*, the events { *X* ≤ *x* } and { *Y* ≤ *y* } are independent, i.e.

*P*(*X* ≤ *x*, *Y* ≤ *y*) =*P*(*X* ≤ *x*) ·*P*(*Y* ≤ *y*).

If *X* and *Y* are independent, then the expectation operator has the nice property

E[*X*·*Y*] = E[*X*] · E[*Y*],

and, as a consequence, the variance is additive:

Var(*X*+*Y*) = Var(*X*) + Var(*Y*).
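Both identities can be verified exactly for the two-die example above. The sketch below computes expectations by summing over the 36 equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

# Two independent die rolls; each outcome (x, y) has probability 1/36.
outcomes = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(outcomes))

def E(g):
    """Expectation of g(x, y) under the uniform joint distribution."""
    return sum(p * g(x, y) for x, y in outcomes)

def Var(g):
    return E(lambda x, y: g(x, y) ** 2) - E(g) ** 2

EX = E(lambda x, y: x)
EY = E(lambda x, y: y)

assert E(lambda x, y: x * y) == EX * EY                                      # E[XY] = E[X]E[Y]
assert Var(lambda x, y: x + y) == Var(lambda x, y: x) + Var(lambda x, y: y)  # Var adds
```

Note that neither identity holds for arbitrary dependent variables; independence (or, for the variance, at least uncorrelatedness) is what makes the factorization work.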

For continuous random variables with joint density *f*_{XY} and marginal densities *f*_{X} and *f*_{Y}, independence is equivalent to the factorization

*f*_{XY}(*x*,*y*) d*x* d*y* =*f*_{X}(*x*) d*x* ·*f*_{Y}(*y*) d*y*,

i.e. *f*_{XY}(*x*,*y*) =*f*_{X}(*x*) ·*f*_{Y}(*y*) for all *x* and *y*.

More generally, mirroring the definition for sets of events above, a set of random variables is independent if every finite nonempty subset satisfies the analogous factorization of its joint distribution into the product of the marginals.