Efficiency (statistics)


In statistics, efficiency is a measure of quality of an estimator, of an experimental design, or of a hypothesis testing procedure. Essentially, a more efficient estimator needs fewer input data or observations than a less efficient one to achieve the Cramér–Rao bound.
An efficient estimator is characterized by having the smallest possible variance, indicating that there is a small deviation between the estimated value and the "true" value in the L2 norm sense.
The relative efficiency of two procedures is the ratio of their efficiencies, although often this concept is used where the comparison is made between a given procedure and a notional "best possible" procedure. The efficiencies and the relative efficiency of two procedures theoretically depend on the sample size available for the given procedure, but it is often possible to use the asymptotic relative efficiency as the principal comparison measure.

Estimators

The efficiency of an unbiased estimator, T, of a parameter θ is defined as

$$e(T) = \frac{1/\mathcal{I}(\theta)}{\operatorname{var}(T)}$$

where $\mathcal{I}(\theta)$ is the Fisher information of the sample. Thus $e(T)$ is the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao bound can be used to prove that $e(T) \le 1$.
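As a quick numerical illustration (not from the original text), the sketch below estimates $e(T)$ by Monte Carlo for a deliberately wasteful unbiased estimator, the first observation of a normal sample; the parameter values are arbitrary.

```python
import numpy as np

# Monte Carlo sketch (illustrative setup): estimate the efficiency e(T) of the
# "first observation" estimator T = X_1 for the mean of N(theta, sigma^2).
# The Fisher information of an i.i.d. normal sample is n / sigma^2, so the
# Cramér–Rao bound is sigma^2 / n, while var(X_1) = sigma^2, giving e(T) = 1/n.
rng = np.random.default_rng(0)
theta, sigma, n, reps = 5.0, 2.0, 10, 200_000

samples = rng.normal(theta, sigma, size=(reps, n))
T = samples[:, 0]                      # unbiased but wasteful estimator
crb = sigma**2 / n                     # 1 / (sample Fisher information)
efficiency = crb / T.var()             # should be close to 1/n = 0.1
print(f"estimated efficiency of X_1: {efficiency:.3f}")
```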

Efficient estimators

An efficient estimator is an estimator that estimates the quantity of interest in some “best possible” manner. The notion of “best possible” relies upon the choice of a particular loss function — the function which quantifies the relative degree of undesirability of estimation errors of different magnitudes. The most common choice of the loss function is quadratic, resulting in the mean squared error criterion of optimality.
In general, the spread of an estimator around the parameter θ is a measure of estimator efficiency and performance. This performance can be calculated by finding the mean squared error. More formally, let T be an estimator for the parameter θ. The mean squared error of T is the value

$$\operatorname{MSE}(T) = \operatorname{E}\!\left[(T - \theta)^2\right],$$

which can be decomposed as a sum of its variance and squared bias:

$$\operatorname{MSE}(T) = \operatorname{var}(T) + \left(\operatorname{E}[T] - \theta\right)^2.$$

An estimator T1 performs better than an estimator T2 if $\operatorname{MSE}(T_1) < \operatorname{MSE}(T_2)$. For a more specific case, if T1 and T2 are two unbiased estimators for the same parameter θ, then the variance can be compared to determine performance. In this case, T2 is more efficient than T1 if the variance of T2 is smaller than the variance of T1, i.e. $\operatorname{var}(T_2) < \operatorname{var}(T_1)$ for all values of θ. This relationship follows by simplifying the more general mean-squared-error decomposition above: since the expected value of an unbiased estimator is equal to the parameter value, $\operatorname{E}[T] = \theta$, the squared-bias term is equal to 0 and drops out, and therefore $\operatorname{MSE}(T) = \operatorname{var}(T)$ for an unbiased estimator.
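The decomposition can be checked numerically. The following sketch uses an arbitrarily chosen biased estimator, $T = 0.8\,\overline{X}$, and illustrative parameter values, and compares the directly simulated mean squared error with the sum of simulated variance and squared bias.

```python
import numpy as np

# Sketch (assumed setup): check MSE(T) = var(T) + bias(T)^2 by Monte Carlo
# for a deliberately biased estimator T = 0.8 * sample mean of N(theta, 1) data.
rng = np.random.default_rng(1)
theta, n, reps = 3.0, 20, 500_000

xbar = rng.normal(theta, 1.0, size=(reps, n)).mean(axis=1)
T = 0.8 * xbar

mse = np.mean((T - theta) ** 2)
decomposed = T.var() + (T.mean() - theta) ** 2
print(f"MSE directly: {mse:.5f}")
print(f"var + bias^2: {decomposed:.5f}")   # the two agree up to Monte Carlo error
```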
If an unbiased estimator of a parameter θ attains $e(T) = 1$ for all values of the parameter, then the estimator is called efficient.
Equivalently, the estimator achieves equality in the Cramér–Rao inequality for all θ. The Cramér–Rao lower bound is a lower bound of the variance of an unbiased estimator, representing the "best" an unbiased estimator can be.
An efficient estimator is also the minimum variance unbiased estimator (MVUE). This is because an efficient estimator maintains equality in the Cramér–Rao inequality for all parameter values, which means it attains the minimum variance for all parameters. The MVUE, even if it exists, is not necessarily efficient, because "minimum" does not mean that equality holds in the Cramér–Rao inequality.
Thus an efficient estimator need not exist, but if it does, it is the MVUE.

Finite-sample efficiency

Suppose $\{P_\theta \mid \theta \in \Theta\}$ is a parametric model and $X_1, \ldots, X_n$ are the data sampled from this model. Let $T = T(X_1, \ldots, X_n)$ be an estimator for the parameter θ. If this estimator is unbiased, then the Cramér–Rao inequality states the variance of this estimator is bounded from below:

$$\operatorname{var}[T] \ge \mathcal{I}(\theta)^{-1},$$

where $\mathcal{I}(\theta)$ is the Fisher information matrix of the model at point θ. Generally, the variance measures the degree of dispersion of a random variable around its mean. Thus estimators with small variances are more concentrated: they estimate the parameters more precisely. We say that the estimator is a finite-sample efficient estimator if it reaches the lower bound in the Cramér–Rao inequality above for all $\theta \in \Theta$. Efficient estimators are always minimum variance unbiased estimators. However the converse is false: there exist point-estimation problems for which the minimum-variance mean-unbiased estimator is inefficient.
Historically, finite-sample efficiency was an early optimality criterion. However this criterion has some limitations:
  • Finite-sample efficient estimators are extremely rare. In fact, it was proved that efficient estimation is possible only in an exponential family, and only for the natural parameters of that family.
  • This notion of efficiency is sometimes restricted to the class of unbiased estimators. Since there are no good theoretical reasons to require that estimators are unbiased, this restriction is inconvenient. In fact, if we use mean squared error as a selection criterion, many biased estimators will slightly outperform the "best" unbiased ones. For example, in multivariate statistics for dimension three or more, the mean-unbiased estimator, the sample mean, is inadmissible: regardless of the true parameter value, its expected performance is worse than that of, for example, the James–Stein estimator (see the simulation sketch after this list).
  • Finite-sample efficiency is based on the variance, as a criterion according to which the estimators are judged. A more general approach is to use loss functions other than quadratic ones, in which case the finite-sample efficiency can no longer be formulated.
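As a rough illustration of the inadmissibility point above, the following sketch simulates a single observation $X \sim N_p(\theta, I_p)$ with $p = 5$ and compares the total mean squared error of $X$ itself with that of the James–Stein shrinkage estimator; the true mean vector is an arbitrary choice.

```python
import numpy as np

# Sketch (assumed setup): for X ~ N_p(theta, I_p) with p >= 3, the James–Stein
# estimator (1 - (p - 2) / ||X||^2) X has smaller total MSE than X itself,
# illustrating that the mean-unbiased estimator X is inadmissible.
rng = np.random.default_rng(2)
p, reps = 5, 200_000
theta = np.full(p, 1.0)            # arbitrary true mean vector

X = rng.normal(theta, 1.0, size=(reps, p))
shrink = 1.0 - (p - 2) / np.sum(X**2, axis=1, keepdims=True)
JS = shrink * X

mse_mle = np.mean(np.sum((X - theta) ** 2, axis=1))
mse_js = np.mean(np.sum((JS - theta) ** 2, axis=1))
print(f"MSE of X (unbiased): {mse_mle:.3f}")   # close to p = 5
print(f"MSE of James–Stein:  {mse_js:.3f}")    # strictly smaller
```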
As an example, among the models encountered in practice, efficient estimators exist for: the mean μ of the normal distribution, the parameter λ of the Poisson distribution, and the probability p in the binomial or multinomial distribution.
Consider the model of a normal distribution with unknown mean but known variance: $\{P_\theta = N(\theta, \sigma^2) \mid \theta \in \mathbb{R}\}$. The data consists of n independent and identically distributed observations from this model: $X = (x_1, \ldots, x_n)$. We estimate the parameter θ using the sample mean of all observations:

$$T(X) = \frac{1}{n}\sum_{i=1}^{n} x_i.$$

This estimator has mean θ and variance $\sigma^2/n$, which is equal to the reciprocal of the Fisher information from the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution.
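A small simulation can confirm this. The sketch below (with arbitrary values of θ, σ and n) compares the simulated variance of the sample mean with the Cramér–Rao bound $\sigma^2/n$.

```python
import numpy as np

# Sketch (assumed values): for N(theta, sigma^2) with sigma known, the sample
# Fisher information is n / sigma^2, so the Cramér–Rao bound sigma^2 / n should
# coincide with the simulated variance of the sample mean.
rng = np.random.default_rng(8)
theta, sigma, n, reps = 2.0, 3.0, 40, 200_000

xbar = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
print(f"var(sample mean):   {xbar.var():.5f}")
print(f"CR bound sigma^2/n: {sigma**2 / n:.5f}")
```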

Asymptotic efficiency

Asymptotic efficiency requires consistency, asymptotic normality of the estimator, and an asymptotic variance–covariance matrix no worse than that of any other estimator.

Example: Median

Consider a sample of size $N$ drawn from a normal distribution of mean $\mu$ and unit variance, i.e., $X_n \sim \mathcal{N}(\mu, 1)$.
The sample mean, $\overline{X}$, of the sample $X_1, X_2, \ldots, X_N$ is defined as

$$\overline{X} = \frac{1}{N}\sum_{n=1}^{N} X_n.$$

The variance of the mean, $1/N$, is equal to the reciprocal of the Fisher information from the sample and thus, by the Cramér–Rao inequality, the sample mean is efficient in the sense that its efficiency is unity.
Now consider the sample median, $\widetilde{X}$. This is an unbiased and consistent estimator for $\mu$. For large $N$ the sample median is approximately normally distributed with mean $\mu$ and variance

$$\operatorname{var}\left(\widetilde{X}\right) = \frac{\pi}{2N}.$$

The efficiency of the median for large $N$ is thus

$$e\left(\widetilde{X}\right) = \frac{1/N}{\pi/(2N)} = \frac{2}{\pi} \approx 0.64.$$

In other words, the relative variance of the median will be $\pi/2 \approx 1.57$, or 57% greater than the variance of the mean – the standard error of the median will be about 25% greater than that of the mean.
Note that this is the asymptotic efficiency, that is, the efficiency in the limit as the sample size $N$ tends to infinity. For finite values of $N$ the efficiency is higher than this.
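The figures above can be reproduced approximately by simulation. The sketch below (with an arbitrary sample size $N = 101$) compares the simulated variances of the sample mean and the sample median with $1/N$ and $\pi/(2N)$.

```python
import numpy as np

# Sketch: simulate the variance of the sample mean and sample median for
# N(mu, 1) samples and compare with the values 1/N and pi/(2N) quoted above.
rng = np.random.default_rng(7)
mu, N, reps = 0.0, 101, 200_000

x = rng.normal(mu, 1.0, size=(reps, N))
var_mean = x.mean(axis=1).var()
var_median = np.median(x, axis=1).var()

print(f"var(mean)   = {var_mean:.5f}   (1/N     = {1 / N:.5f})")
print(f"var(median) = {var_median:.5f}   (pi/(2N) = {np.pi / (2 * N):.5f})")
print(f"efficiency of median ~ {var_mean / var_median:.3f}  (2/pi ~ {2 / np.pi:.3f})")
```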
The sample mean is thus more efficient than the sample median in this example. However, there may be measures by which the median performs better. For example, the median is far more robust to outliers, so that if the Gaussian model is questionable or only approximate, there may be advantages to using the median.

Dominant estimators

If $T_1$ and $T_2$ are estimators for the parameter $\theta$, then $T_1$ is said to dominate $T_2$ if:
  1. its mean squared error (MSE) is smaller for at least some value of $\theta$
  2. the MSE does not exceed that of $T_2$ for any value of θ.
Formally, $T_1$ dominates $T_2$ if

$$\operatorname{E}\!\left[(T_1 - \theta)^2\right] \le \operatorname{E}\!\left[(T_2 - \theta)^2\right]$$

holds for all $\theta$, with strict inequality holding somewhere.
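As a simple hypothetical illustration, in the sketch below $T_1$ is the sample mean of n normal observations and $T_2$ is the first observation alone; $T_1$ dominates $T_2$, since its MSE is smaller for every value of θ checked (and, in fact, for all θ).

```python
import numpy as np

# Sketch (hypothetical pair of estimators): for the mean of N(theta, 1), the
# sample mean T1 dominates the "first observation" estimator T2 = X_1, because
# its MSE (about 1/n) is below that of T2 (about 1) for every theta.
rng = np.random.default_rng(6)
n, reps = 10, 100_000

for theta in (-2.0, 0.0, 3.0):
    x = rng.normal(theta, 1.0, size=(reps, n))
    mse_t1 = np.mean((x.mean(axis=1) - theta) ** 2)
    mse_t2 = np.mean((x[:, 0] - theta) ** 2)
    print(f"theta={theta:+.1f}: MSE(T1)={mse_t1:.3f}, MSE(T2)={mse_t2:.3f}")
```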

Relative efficiency

The relative efficiency of two unbiased estimators $T_1$ and $T_2$ is defined as

$$e(T_1, T_2) = \frac{\operatorname{E}\!\left[(T_2 - \theta)^2\right]}{\operatorname{E}\!\left[(T_1 - \theta)^2\right]} = \frac{\operatorname{var}(T_2)}{\operatorname{var}(T_1)}.$$

Although $e(T_1, T_2)$ is in general a function of $\theta$, in many cases the dependence drops out; if this is so, $e$ being greater than one would indicate that $T_1$ is preferable, regardless of the true value of $\theta$.
An alternative to relative efficiency for comparing estimators is the Pitman closeness criterion. This replaces the comparison of mean squared errors with a comparison of how often one estimator produces estimates closer to the true value than another estimator.
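The sketch below illustrates the idea with an assumed setup: it estimates how often the sample mean lands closer to the true value than the sample median for normal samples.

```python
import numpy as np

# Sketch (illustrative setup): Pitman closeness compares two estimators by the
# probability that one lands closer to the true value than the other, here the
# sample mean versus the sample median for N(theta, 1) samples.
rng = np.random.default_rng(5)
theta, n, reps = 0.0, 25, 200_000

x = rng.normal(theta, 1.0, size=(reps, n))
mean_closer = np.abs(x.mean(axis=1) - theta) < np.abs(np.median(x, axis=1) - theta)
print(f"P(mean closer than median): {mean_closer.mean():.3f}")
```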

Estimators of the mean of u.i.d. variables

In estimating the mean of uncorrelated, identically distributed variables we can take advantage of the fact that the variance of the sum is the sum of the variances. In this case efficiency can be defined as the square of the coefficient of variation, i.e.,

$$e \equiv \left(\frac{s}{\mu}\right)^2,$$

where $s$ is the standard error of the estimator of the mean $\mu$.
Relative efficiency of two such estimators can thus be interpreted as the relative sample size of one required to achieve the certainty of the other. Proof:

$$\frac{e_1}{e_2} = \frac{s_1^2}{s_2^2}.$$

Now because $s_1^2 = \sigma^2/n_1$ and $s_2^2 = \sigma^2/n_2$, we have $e_1/e_2 = n_2/n_1$, so the relative efficiency expresses the relative sample size of the first estimator needed to match the variance of the second.
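The sample-size interpretation can be illustrated numerically. In the sketch below (sample sizes chosen arbitrarily, using i.i.d. normal data as a convenient special case), the ratio of the variances of two sample means comes out as $n_2/n_1$, as claimed above.

```python
import numpy as np

# Sketch (assumed distribution): for i.i.d. data with common variance sigma^2,
# the sample mean of n observations has variance sigma^2 / n, so the ratio of
# the variances of two such means is n2 / n1 -- the "relative sample size"
# reading of relative efficiency given above.
rng = np.random.default_rng(3)
sigma, n1, n2, reps = 2.0, 25, 100, 200_000

mean_n1 = rng.normal(0.0, sigma, size=(reps, n1)).mean(axis=1)
mean_n2 = rng.normal(0.0, sigma, size=(reps, n2)).mean(axis=1)

ratio = mean_n1.var() / mean_n2.var()
print(f"variance ratio: {ratio:.2f}  (theory: n2/n1 = {n2 / n1:.2f})")
```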

Robustness

Efficiency of an estimator may change significantly if the distribution changes, often dropping. This is one of the motivations of robust statistics – an estimator such as the sample mean is an efficient estimator of the population mean of a normal distribution, for example, but can be an inefficient estimator of a mixture distribution of two normal distributions with the same mean and different variances. For example, if a distribution is a combination of 98% N(μ, σ) and 2% N(μ, 10σ), the presence of extreme values from the latter distribution significantly reduces the efficiency of the sample mean as an estimator of μ. By contrast, the trimmed mean is less efficient for a normal distribution, but is more robust to changes in the distribution, and thus may be more efficient for a mixture distribution. Similarly, the shape of a distribution, such as skewness or heavy tails, can significantly reduce the efficiency of estimators that assume a symmetric distribution or thin tails.
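A simulation along these lines (an illustrative sketch; the contamination fraction matches the example above, the other parameter values are assumed) compares the sample mean with a 10% trimmed mean on clean and contaminated data.

```python
import numpy as np

# Illustrative sketch: compare the sample mean with a 10% trimmed mean on clean
# N(0, 1) data and on the 98% N(0, 1) + 2% N(0, 10^2) mixture described above.
# The mean has the smaller variance on clean data, while the trimmed mean wins
# once the contaminating component is present.
rng = np.random.default_rng(4)
n, reps, trim = 50, 100_000, 0.10

def trimmed_mean(x, prop):
    """Mean after discarding the lowest and highest `prop` fraction of each row."""
    k = int(prop * x.shape[1])
    xs = np.sort(x, axis=1)
    return xs[:, k:x.shape[1] - k].mean(axis=1)

clean = rng.normal(0.0, 1.0, size=(reps, n))
scale = np.where(rng.random(size=(reps, n)) < 0.02, 10.0, 1.0)  # 2% contamination
mixed = rng.normal(0.0, 1.0, size=(reps, n)) * scale

for name, data in [("clean", clean), ("contaminated", mixed)]:
    print(f"{name:12s} var(mean)={data.mean(axis=1).var():.4f}  "
          f"var(trimmed)={trimmed_mean(data, trim).var():.4f}")
```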