There are two types of estimators: point estimators and interval estimators.

For a point estimator **θ** of parameter θ:

- The *bias* of **θ** is defined as B(**θ**) = E[**θ**] − θ
- **θ** is an *unbiased estimator* of θ iff B(**θ**) = 0 for all θ
- The *mean square error* of **θ** is defined as MSE(**θ**) = E[(**θ** − θ)^{2}]
- MSE(**θ**) = V(**θ**) + (B(**θ**))^{2}
- The standard deviation of **θ** is also called the *standard error* of **θ**.
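These definitions can be checked numerically. The sketch below (a Monte Carlo illustration, not part of the original text; the parameter names and the choice of estimating a normal variance are assumptions for the example) compares the biased maximum-likelihood variance estimator (dividing by n) with the unbiased one (dividing by n − 1), and verifies the decomposition MSE(**θ**) = V(**θ**) + (B(**θ**))^{2}:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0        # true variance: the parameter theta being estimated
n = 10              # sample size per replication
reps = 200_000      # Monte Carlo replications

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

# Two point estimators of sigma2:
# ddof=0 divides by n (the biased MLE); ddof=1 divides by n-1 (unbiased).
mle = samples.var(axis=1, ddof=0)
unbiased = samples.var(axis=1, ddof=1)

def summary(est):
    """Empirical bias, variance, and mean square error of an estimator."""
    bias = est.mean() - sigma2
    var = est.var()                       # V(theta-hat)
    mse = np.mean((est - sigma2) ** 2)    # E[(theta-hat - theta)^2]
    return bias, var, mse

for name, est in [("MLE (ddof=0)", mle), ("unbiased (ddof=1)", unbiased)]:
    b, v, m = summary(est)
    # The decomposition MSE = V + B^2 holds exactly for the sample moments.
    print(f"{name}: bias={b:+.3f}  var={v:.3f}  mse={m:.3f}  var+bias^2={v + b*b:.3f}")
```

The theoretical bias of the MLE here is −σ²/n = −0.4, which the simulation reproduces, while the ddof=1 estimator shows bias near zero; in both cases the printed MSE matches V + B².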

Among unbiased estimators, one often chooses the one with the lowest variance. Sometimes, however, it is preferable not to restrict attention to unbiased estimators; see Bias (statistics). Concerning such "best unbiased estimators", see also the Gauss-Markov theorem, the Lehmann-Scheffé theorem, and the Rao-Blackwell theorem.

See also Maximum likelihood.