
Correlation

In probability theory and statistics, the correlation, also called the correlation coefficient, between two random variables is their covariance divided by the product of their standard deviations. (It is defined only if both standard deviations are finite and nonzero.) It is a corollary of the Cauchy-Schwarz inequality that the correlation cannot exceed 1 in absolute value.
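
In symbols, writing cov(X, Y) for the covariance, σ_X and σ_Y for the standard deviations, and μ_X and μ_Y for the means, the definition above can be written as:

```latex
\rho_{X,Y} \;=\; \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y}
          \;=\; \frac{\mathrm{E}\!\left[(X-\mu_X)(Y-\mu_Y)\right]}{\sigma_X \sigma_Y}.
```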

The correlation is 1 in the case of an increasing linear relationship, -1 in the case of a decreasing linear relationship, and some value strictly between -1 and 1 in all other cases, indicating the degree of linear dependence between the variables.

If the variables are independent, then the correlation is 0, but the converse is not true, because the correlation coefficient detects only linear dependencies between two variables. Here is an example: suppose the random variable X is uniformly distributed on the interval from -1 to 1, and Y = X². Then Y is completely determined by X, so that X and Y are as far from being independent as two random variables can be, yet their correlation is zero; they are uncorrelated.
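
A minimal numerical illustration of this example, as a sketch assuming NumPy is available (the sample size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# X uniformly distributed on [-1, 1]; Y = X**2 is completely determined by X.
x = rng.uniform(-1.0, 1.0, size=100_000)
y = x ** 2

# The sample correlation is close to 0 even though Y is a function of X,
# because the dependence is not linear.
print(f"sample correlation of X and X^2: {np.corrcoef(x, y)[0, 1]:.4f}")
```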

If a series of paired values of X and Y has been measured, then the Pearson product-moment correlation coefficient can be used to estimate the correlation of X and Y. The coefficient is especially informative if X and Y are both normally distributed and follow the linear regression model.
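
As a sketch of how such an estimate might be computed in practice (assuming SciPy; the data values below are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements of X and Y (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])

# Pearson product-moment correlation estimated from the sample, with a
# p-value for the null hypothesis that the true correlation is zero.
r, p = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f}, p-value = {p:.3g}")
```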

Pearson's correlation coefficient is a parametric statistic, and it may be less useful if the underlying assumption of normality is violated. Less powerful non-parametric correlation methods, such as Spearman's ρ, may be useful when the distributions are not normal.
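
A small sketch contrasting the two measures (again assuming SciPy; the data are hypothetical and chosen to be monotone but nonlinear):

```python
import numpy as np
from scipy import stats

# Hypothetical data with a strictly increasing but nonlinear relationship.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.exp(x)

rho, _ = stats.spearmanr(x, y)   # rank-based, so exactly 1 here
r, _ = stats.pearsonr(x, y)      # below 1, since the relation is not linear
print(f"Spearman rho = {rho:.3f}, Pearson r = {r:.3f}")
```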

To obtain a measure of more general, possibly nonlinear, dependencies in the data, it is better to use the so-called correlation ratio, which is able to detect almost any functional dependency, or the mutual information, which detects even more general dependencies.
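
As an illustration, here is a minimal sketch of a sample correlation ratio obtained by discretizing X into equal-width bins; the function name correlation_ratio and the bin count are illustrative choices, not a standard library routine. Applied to the Y = X² example above, it is close to 1 even though Pearson's coefficient is close to 0:

```python
import numpy as np

def correlation_ratio(x, y, bins=20):
    """Sample correlation ratio (eta) of y on x, with x split into equal-width bins.

    eta^2 = (weighted variance of the per-bin means of y) / (total variance of y).
    """
    edges = np.linspace(x.min(), x.max(), bins + 1)
    labels = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    grand_mean = y.mean()
    between = 0.0
    for b in range(bins):
        group = y[labels == b]
        if group.size:
            between += group.size * (group.mean() - grand_mean) ** 2
    total = ((y - grand_mean) ** 2).sum()
    return np.sqrt(between / total)

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100_000)
y = x ** 2

# Pearson correlation is near 0, but the correlation ratio is close to 1,
# reflecting the exact functional dependence of Y on X.
print(f"Pearson r         : {np.corrcoef(x, y)[0, 1]:.3f}")
print(f"correlation ratio : {correlation_ratio(x, y):.3f}")
```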