The notion of an *independent variable* often (but not always) implies the ability to choose the levels of the independent variable and that the dependent variable will respond naturally as in the stimulus-response model. The independent variable `x` may be a scalar or a vector. In the former case we may write one of the simplest linear-regression models as follows:

`y_i = α + β x_i + ε_i,  i = 1, ..., n,`

where α and β are unknown parameters and ε_i is a random "error".

Historically, in applications to measurements in astronomy, the "error" was actually a random measurement error, but in many applications ε is merely the amount by which the individual `y`-value differs from the average `y`-value among individuals having the same `x`-value. The average value of the random "error" is zero. In linear regression problems statisticians often rely on the Gauss-Markov assumptions:

- The random errors have expected value 0.
- The random errors are uncorrelated (this is weaker than an assumption of probabilistic independence).
- The random errors are "homoscedastic", i.e., they all have the same variance.

Sometimes stronger assumptions are relied on:

- The random errors have expected value 0.
- They are independent.
- They are normally distributed.
- They all have the same variance.

It is often erroneously thought that the reason the technique is called "linear regression" is that the graph of `y = α + β x` is a line. But in fact, if the model is, for example, `y_i = α + β x_i^2 + ε_i`, the problem is still one of *linear* regression: the model remains linear in the unknown parameters α and β, even though its graph is a parabola rather than a straight line.
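As a sketch of this point, the quadratic model above can be fitted by ordinary least squares simply by regressing on `z = x^2`; the data values below are invented for illustration.

```python
# Sketch: a model linear in the parameters need not have a straight-line graph.
# Fitting y = a + b*x**2 by least squares is still "linear" regression because
# the estimates are obtained from equations linear in a and b.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.9, 9.1, 19.2, 32.8, 51.0]   # roughly y = 1 + 2*x**2 (made-up data)

n = len(xs)
zs = [x * x for x in xs]            # regress on z = x**2 instead of x
z_bar = sum(zs) / n
y_bar = sum(ys) / n

# Ordinary least-squares slope and intercept, exactly as in the linear case
b = sum((z - z_bar) * (y - y_bar) for z, y in zip(zs, ys)) / \
    sum((z - z_bar) ** 2 for z in zs)
a = y_bar - b * z_bar
```

The recovered estimates are close to the generating values a ≈ 1 and b ≈ 2, even though the fitted curve is not a line.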

A statistician will usually **estimate** the unobservable values of the parameters α and β by the **method of least squares**, which consists of finding the values of `a` and `b` that minimize the sum of squares of the **residuals** `e_i = y_i - (a + b x_i)`, i.e., that minimize

`Σ_{i=1}^{n} e_i^2 = Σ_{i=1}^{n} (y_i - a - b x_i)^2.`

Notice that, whereas the errors are independent, the residuals cannot be independent, because the use of least-squares estimates implies that the sum of the residuals must be 0 and the dot-product of the vector of residuals with the vector of `x`-values must be 0, i.e., we must have

`Σ_{i=1}^{n} e_i = 0` and `Σ_{i=1}^{n} x_i e_i = 0.`
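These two constraints can be checked numerically; the following sketch uses made-up data and verifies that the least-squares residuals satisfy both identities.

```python
# Sketch: verify that least-squares residuals sum to 0 and are orthogonal
# to the x-values. Data are invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Least-squares estimates of the slope and intercept
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = y_bar - b * x_bar

residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
sum_res = sum(residuals)                               # ~0
dot_res_x = sum(e * x for e, x in zip(residuals, xs))  # ~0
```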

These facts make it possible to use Student's t-distribution with `n - 2` degrees of freedom (so named in honor of the pseudonymous "Student") to find confidence intervals for α and β.

Denote by capital `Y` the column vector whose `i`th entry is `y_i`, and by capital `X` the `n × 2` matrix whose `i`th row is `(1, x_i)` (the "design matrix").

Then it can be shown that the vector of residuals is

`(I_n - X (X' X)^{-1} X') Y.`

The matrix `I_n - X (X' X)^{-1} X'` that appears above is a symmetric idempotent matrix of rank `n - 2`.

*Note: A useful alternative to least-squares linear regression is robust regression, in which, for example, the mean absolute error is minimized instead of the mean squared error. Robust regression is computationally much more intensive than least-squares regression and is somewhat more difficult to implement as well.*


We use the summary statistics above to calculate `b`, the estimate of β:

`b = (n Σ x_i y_i - Σ x_i Σ y_i) / (n Σ x_i^2 - (Σ x_i)^2)`

We use the estimate of β and the other statistics to estimate α by:

`a = ȳ - b x̄ = (Σ y_i - b Σ x_i) / n`
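The two estimates can be computed directly from the summary statistics `Σx`, `Σy`, `Σx²`, and `Σxy`; the data values below are invented for illustration.

```python
# Sketch: compute the slope estimate b and intercept estimate a from the
# summary statistics S_x = Σx, S_y = Σy, S_xx = Σx², S_xy = Σxy.
xs = [2.0, 4.0, 6.0, 8.0]   # made-up data
ys = [3.0, 7.0, 5.0, 10.0]
n = len(xs)

S_x = sum(xs)
S_y = sum(ys)
S_xx = sum(x * x for x in xs)
S_xy = sum(x * y for x, y in zip(xs, ys))

b = (n * S_xy - S_x * S_y) / (n * S_xx - S_x * S_x)
a = (S_y - b * S_x) / n     # equivalently y_bar - b * x_bar
```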

The first method of displaying the residuals uses a histogram or cumulative distribution to depict the similarity (or lack thereof) to a normal distribution. Non-normality suggests that the model may not be a good summary description of the data.

We plot the residuals against the independent variable, `x`. There should be no discernible trend or pattern if the model is satisfactory for these data. Some of the possible problems are:

- Residuals increase (or decrease) as the independent variable increases -- indicates mistakes in the calculations -- find the mistakes and correct them.
- Residuals first rise and then fall (or first fall and then rise) -- indicates that the appropriate model is (at least) quadratic. See polynomial regression.
- One residual is much larger than the others and opposite in sign -- suggests that there is one unusual observation which is distorting the fit. Either:
  - Verify its value before publishing, *or*
  - Eliminate it, document your decision to do so, and recalculate the statistics.

The sum of squared deviations can be partitioned as in ANOVA to indicate what part of the dispersion of the dependent variable is explained by the independent variable.
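This ANOVA-style partition (total sum of squares = regression sum of squares + residual sum of squares) can be verified numerically; the data below are invented for illustration.

```python
# Sketch: check the partition SST = SSR + SSE for a least-squares fit.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # made-up data
ys = [2.0, 2.9, 4.1, 4.9, 6.2]
n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = y_bar - b * x_bar
fitted = [a + b * x for x in xs]

SST = sum((y - y_bar) ** 2 for y in ys)              # total
SSR = sum((f - y_bar) ** 2 for f in fitted)          # explained by regression
SSE = sum((y - f) ** 2 for y, f in zip(ys, fitted))  # residual
```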

The **correlation coefficient**, `r`, can be calculated by

`r = Σ (x_i - x̄)(y_i - ȳ) / sqrt( Σ (x_i - x̄)^2 · Σ (y_i - ȳ)^2 )`
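A short sketch of the calculation, using the same invented data as in the partition example above:

```python
import math

# Sketch: compute the correlation coefficient r from its definition.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # made-up data
ys = [2.0, 2.9, 4.1, 4.9, 6.2]
n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

S_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
S_xx = sum((x - x_bar) ** 2 for x in xs)
S_yy = sum((y - y_bar) ** 2 for y in ys)

r = S_xy / math.sqrt(S_xx * S_yy)   # always lies in [-1, 1]
```

For these strongly linear data, `r` comes out close to 1.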