A statistic used to estimate a parameter. The realized value of an estimator for a particular sample of data is called the estimate (or point estimate).
If the expected value of the statistic is equal to the parameter, then it is described as an unbiased estimator and its realized value is referred to as an unbiased estimate. If T is an estimator of the parameter θ and the expected value of T is θ + b, where b ≠ 0, then b is called the bias and T is a biased estimator. If the bias tends to 0 as the sample size increases, then the estimator is described as asymptotically unbiased.
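As an illustration (not part of the original entry), the sample variance with divisor n is a biased but asymptotically unbiased estimator of the population variance σ², with bias b = −σ²/n, whereas the divisor n − 1 gives an unbiased estimator. A minimal Monte Carlo sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0                  # true parameter: population variance
n, reps = 10, 100_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
biased = samples.var(axis=1, ddof=0)      # divisor n: biased estimator
unbiased = samples.var(axis=1, ddof=1)    # divisor n - 1: unbiased estimator

print(biased.mean())    # ~ sigma2 * (n - 1) / n = 3.6, so b ~ -0.4
print(unbiased.mean())  # ~ 4.0
```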
The efficiency of an unbiased estimator is the ratio of its variance to the Cramér–Rao lower bound (see Fisher information). For an efficient estimator the ratio is 1. The relative efficiency of two unbiased estimators T and T′ is given by the inverse ratio of their variances.
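For example (an illustrative case, not from the entry itself): for a sample of size n from a normal population, the sample mean attains the Cramér–Rao bound σ²/n, while the sample median has approximate variance πσ²/(2n), so the relative efficiency of the median to the mean is about 2/π ≈ 0.64. A simulation sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 101, 100_000
samples = rng.normal(0.0, 1.0, size=(reps, n))

var_mean = samples.mean(axis=1).var()          # ~ 1/n (Cramér-Rao bound)
var_median = np.median(samples, axis=1).var()  # ~ pi/(2n)

# Relative efficiency: inverse ratio of variances, close to 2/pi ~ 0.64.
print(var_mean / var_median)
```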
Comparisons involving biased estimators are often based on the mean squared error (MSE), defined by MSE(T) = E{(T − θ)²} = Var(T) + b², where E(T) and Var(T) are, respectively, the expected value and variance of T, and b = E(T) − θ is the bias. The root mean square error (RMSE) is the square root of the mean squared error and has the same units as the original data.
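The decomposition MSE(T) = Var(T) + b² can be checked by simulation; a sketch (assuming NumPy, using the biased variance estimator from above):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2, n, reps = 4.0, 10, 100_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

T = samples.var(axis=1, ddof=0)                   # biased estimator of sigma2
mse = np.mean((T - sigma2) ** 2)                  # direct Monte Carlo MSE
decomposed = T.var() + (T.mean() - sigma2) ** 2   # Var(T) + b^2

print(mse, decomposed)   # the two agree up to simulation error
print(np.sqrt(mse))      # RMSE, in the same units as the data
```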
An estimator T is said to be a consistent estimator if, for all positive c, P(|T − θ| > c) → 0 as n → ∞, where n is the sample size.
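For instance (a standard example, not from the entry): the sample mean of independent observations with finite variance is a consistent estimator of the population mean, by the weak law of large numbers. A sketch showing P(|T − θ| > c) shrinking with n, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, c, reps = 0.0, 0.1, 10_000

for n in (25, 100, 400, 1600):
    means = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
    print(n, np.mean(np.abs(means - theta) > c))   # -> 0 as n grows
```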
A sufficient estimator or sufficient statistic is a statistic that encapsulates all the information concerning the value of the unknown parameter that is contained in the data.
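A standard illustration (not part of the original entry): for n independent Bernoulli(p) observations, the likelihood depends on the data only through the total t = Σxᵢ, so, by the factorization theorem, the sample total is sufficient for p:

```latex
% Bernoulli(p) likelihood: it depends on x_1, ..., x_n only through
% t = x_1 + ... + x_n, so T = sum of the X_i is sufficient for p.
\[
  L(p; x_1, \dots, x_n) = \prod_{i=1}^{n} p^{x_i} (1 - p)^{1 - x_i}
  = p^{t} (1 - p)^{n - t}, \qquad t = \sum_{i=1}^{n} x_i .
\]
```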
The terms ‘biased’ and ‘unbiased’ appear in an 1897 text by Bowley. The terms ‘efficiency’, ‘estimate’, ‘estimation’, and ‘sufficiency’ were introduced by Sir Ronald Fisher in 1922. The term ‘estimator’ was introduced in a specialized sense by Pitman in 1939.