Maximum likelihood estimation
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance.
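To illustrate this equivalence concretely, the sketch below (assuming NumPy and SciPy are available; the simulated data and names such as `neg_log_likelihood` are illustrative only) compares the closed-form least-squares solution with a direct numerical maximization of the Gaussian log-likelihood; the two coefficient estimates coincide up to optimizer tolerance.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated linear-regression data: y = X @ beta + normal noise
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=1.5, size=n)

# Ordinary least squares (closed form)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gaussian log-likelihood in (beta, log sigma); maximizing it over beta
# is equivalent to minimizing the sum of squared residuals.
def neg_log_likelihood(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)            # keeps sigma > 0
    resid = y - X @ beta
    return 0.5 * n * np.log(2 * np.pi * sigma**2) + 0.5 * np.sum(resid**2) / sigma**2

res = minimize(neg_log_likelihood, x0=np.zeros(k + 1), method="BFGS")
beta_mle = res.x[:-1]

print(np.allclose(beta_ols, beta_mle, atol=1e-4))  # True: the estimates coincide
```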
From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori estimation with a prior distribution that is uniform in the region of interest. In frequentist inference, MLE is a special case of an extremum estimator, with the objective function being the likelihood.
Principles
We model a set of observations as a random sample from an unknown joint probability distribution which is expressed in terms of a set of parameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vector $\theta = [\theta_1, \theta_2, \ldots, \theta_k]^{\mathsf T}$ so that this distribution falls within a parametric family $\{f(\cdot\,;\theta) \mid \theta \in \Theta\}$, where $\Theta$ is called the parameter space, a finite-dimensional subset of Euclidean space. Evaluating the joint density at the observed data sample $\mathbf{y} = (y_1, y_2, \ldots, y_n)$ gives a real-valued function,

$$\mathcal{L}_n(\theta) = \mathcal{L}_n(\theta; \mathbf{y}) = f_n(\mathbf{y}; \theta),$$

which is called the likelihood function. For independent random variables, $f_n(\mathbf{y}; \theta)$ will be the product of univariate density functions:

$$f_n(\mathbf{y}; \theta) = \prod_{k=1}^{n} f_k(y_k; \theta).$$
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space, that is:

$$\hat{\theta} = \underset{\theta \in \Theta}{\operatorname{arg\,max}} \; \mathcal{L}_n(\theta; \mathbf{y}).$$
Intuitively, this selects the parameter values that make the observed data most probable. The specific value $\hat{\theta} = \hat{\theta}_n(\mathbf{y}) \in \Theta$ that maximizes the likelihood function $\mathcal{L}_n$ is called the maximum likelihood estimate. Further, if the function $\hat{\theta}_n : \mathbb{R}^n \to \Theta$ so defined is measurable, then it is called the maximum likelihood estimator. It is generally a function defined over the sample space, i.e. taking a given sample as its argument. A sufficient but not necessary condition for its existence is for the likelihood function to be continuous over a parameter space $\Theta$ that is compact. For an open $\Theta$ the likelihood function may increase without ever reaching a supremum value.
In practice, it is often convenient to work with the natural logarithm of the likelihood function, called the log-likelihood:

$$\ell(\theta; \mathbf{y}) = \ln \mathcal{L}_n(\theta; \mathbf{y}).$$
Since the logarithm is a monotonic function, the maximum of $\ell(\theta; \mathbf{y})$ occurs at the same value of $\theta$ as does the maximum of $\mathcal{L}_n$. If $\ell(\theta; \mathbf{y})$ is differentiable in $\Theta$, necessary conditions for the occurrence of a maximum are

$$\frac{\partial \ell}{\partial \theta_1} = 0, \quad \frac{\partial \ell}{\partial \theta_2} = 0, \quad \ldots, \quad \frac{\partial \ell}{\partial \theta_k} = 0,$$
known as the likelihood equations. For some models, these equations can be explicitly solved for $\hat{\theta}$, but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found via numerical optimization. Another problem is that in finite samples, there may exist multiple roots for the likelihood equations. Whether the identified root $\hat{\theta}$ of the likelihood equations is indeed a maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-called Hessian matrix

$$\mathbf{H}(\hat{\theta}) = \begin{bmatrix} \dfrac{\partial^2 \ell}{\partial \theta_1^2} & \dfrac{\partial^2 \ell}{\partial \theta_1 \, \partial \theta_2} & \cdots & \dfrac{\partial^2 \ell}{\partial \theta_1 \, \partial \theta_k} \\ \dfrac{\partial^2 \ell}{\partial \theta_2 \, \partial \theta_1} & \dfrac{\partial^2 \ell}{\partial \theta_2^2} & \cdots & \dfrac{\partial^2 \ell}{\partial \theta_2 \, \partial \theta_k} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 \ell}{\partial \theta_k \, \partial \theta_1} & \dfrac{\partial^2 \ell}{\partial \theta_k \, \partial \theta_2} & \cdots & \dfrac{\partial^2 \ell}{\partial \theta_k^2} \end{bmatrix},$$
is negative semi-definite at $\hat{\theta}$, as this indicates local concavity. Conveniently, most common probability distributions – in particular the exponential family – are logarithmically concave.
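As a concrete sketch of the numerical route mentioned above (assuming SciPy; the gamma model is an arbitrary illustrative choice, since its shape parameter has no closed-form MLE), one can minimize the negative log-likelihood over an unconstrained reparameterization and then check local concavity of the log-likelihood at the optimum:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.gamma(shape=2.5, scale=1.2, size=500)   # sample with known parameters

# Negative log-likelihood of a gamma model, parameterized by logs
# so that the optimizer works on an unconstrained space.
def neg_log_likelihood(params):
    log_shape, log_scale = params
    return -np.sum(stats.gamma.logpdf(data, a=np.exp(log_shape), scale=np.exp(log_scale)))

res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), method="BFGS")
shape_hat, scale_hat = np.exp(res.x)
print(shape_hat, scale_hat)   # close to (2.5, 1.2) for a sample this large

# res.hess_inv approximates the inverse Hessian of the *negative* log-likelihood;
# its positive definiteness indicates the log-likelihood is locally concave,
# i.e. the identified root is indeed a (local) maximum.
print(np.all(np.linalg.eigvalsh(res.hess_inv) > 0))
```

Working with logarithms of the positive parameters is a common device to keep the optimization unconstrained; it does not move the maximum because the mapping is monotonic.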
Restricted parameter space
While the domain of the likelihood function (the parameter space) is generally a finite-dimensional subset of Euclidean space, additional restrictions sometimes need to be incorporated into the estimation process. The parameter space can be expressed as

$$\Theta = \{\theta : \theta \in \mathbb{R}^k, \; h(\theta) = 0\},$$

where $h(\theta) = [h_1(\theta), h_2(\theta), \ldots, h_r(\theta)]$ is a vector-valued function mapping $\mathbb{R}^k$ into $\mathbb{R}^r$. Estimating the true parameter $\theta$ belonging to $\Theta$ then, as a practical matter, means to find the maximum of the likelihood function subject to the constraint $h(\theta) = 0$.
Theoretically, the most natural approach to this constrained optimization problem is the method of substitution, that is "filling out" the restrictions $h_1, h_2, \ldots, h_r$ to a set $h_1, h_2, \ldots, h_r, h_{r+1}, \ldots, h_k$ in such a way that $h^{\ast} = [h_1, h_2, \ldots, h_k]$ is a one-to-one function from $\mathbb{R}^k$ to itself, and reparameterizing the likelihood function by setting $\phi_i = h_i(\theta_1, \theta_2, \ldots, \theta_k)$. Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also. For instance, in a multivariate normal distribution the covariance matrix $\Sigma$ must be positive-definite; this restriction can be imposed by replacing $\Sigma = \Gamma^{\mathsf T} \Gamma$, where $\Gamma$ is a real upper triangular matrix and $\Gamma^{\mathsf T}$ is its transpose.
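A minimal sketch of this reparameterization (assuming NumPy; the function name and the packing of the free parameters into the triangular factor are illustrative conventions, not a standard API):

```python
import numpy as np

def covariance_from_free_params(params, d):
    # Map an unconstrained parameter vector to a positive-definite d x d
    # covariance matrix via Sigma = Gamma^T Gamma, with Gamma upper triangular.
    gamma = np.zeros((d, d))
    gamma[np.triu_indices(d)] = params
    # Exponentiate the diagonal so it is strictly positive, which makes the
    # factorization unique and Sigma strictly positive definite.
    gamma[np.diag_indices(d)] = np.exp(np.diag(gamma))
    return gamma.T @ gamma

free = np.array([0.3, -0.5, 0.1])            # d = 2 needs d(d+1)/2 = 3 free parameters
sigma = covariance_from_free_params(free, d=2)
print(np.all(np.linalg.eigvalsh(sigma) > 0))  # True: Sigma is positive definite
```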
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to the restricted likelihood equations

$$\frac{\partial \ell}{\partial \theta} - \frac{\partial h(\theta)^{\mathsf T}}{\partial \theta} \lambda = 0$$

and

$$h(\theta) = 0,$$

where $\lambda = [\lambda_1, \lambda_2, \ldots, \lambda_r]^{\mathsf T}$ is a column-vector of Lagrange multipliers and $\dfrac{\partial h(\theta)^{\mathsf T}}{\partial \theta}$ is the $k \times r$ Jacobian matrix of partial derivatives. Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero. This in turn allows for a statistical test of the "validity" of the constraint, known as the Lagrange multiplier test.
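As a small worked illustration (a sketch assuming SciPy; the multinomial example is not from the text, and the SLSQP solver handles the equality constraint internally rather than exposing the Lagrange multipliers), constrained maximization of a multinomial log-likelihood under the restriction that the probabilities sum to one recovers the familiar sample proportions $\hat{p}_i = n_i / n$:

```python
import numpy as np
from scipy.optimize import minimize

counts = np.array([18.0, 7.0, 25.0])     # observed category counts n_i
n = counts.sum()

# Multinomial log-likelihood (up to an additive constant) in the category
# probabilities p, to be maximized subject to h(p) = sum(p) - 1 = 0.
def neg_log_likelihood(p):
    return -np.sum(counts * np.log(p))

constraint = {"type": "eq", "fun": lambda p: np.sum(p) - 1.0}
res = minimize(neg_log_likelihood, x0=np.full(3, 1/3), method="SLSQP",
               bounds=[(1e-9, 1.0)] * 3, constraints=[constraint])

print(res.x)        # approximately counts / n, the sample proportions
print(counts / n)
```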
Nonparametric maximum likelihood estimation
Nonparametric maximum likelihood estimation can be performed using the empirical likelihood.

Properties
A maximum likelihood estimator is an extremum estimator obtained by maximizing, as a function of $\theta$, the objective function $\widehat{\ell}(\theta; x)$. If the data are independent and identically distributed, then we have

$$\widehat{\ell}(\theta; x) = \frac{1}{n} \sum_{i=1}^{n} \ln f(x_i \mid \theta),$$

this being the sample analogue of the expected log-likelihood $\ell(\theta) = \operatorname{E}[\ln f(x_i \mid \theta)]$, where this expectation is taken with respect to the true density.
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that other estimators may have greater concentration around the true parameter value. However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: as the sample size increases to infinity, sequences of maximum likelihood estimators have the following properties:
- Consistency: the sequence of MLEs converges in probability to the value being estimated.
- Equivariance: If $\hat{\theta}$ is the maximum likelihood estimator for $\theta$, and if $g(\theta)$ is a bijective transform of $\theta$, then the maximum likelihood estimator for $\alpha = g(\theta)$ is $\hat{\alpha} = g(\hat{\theta})$. The equivariance property can be generalized to non-bijective transforms, although in that case it applies to the maximum of an induced likelihood function, which is not the true likelihood in general.
- Efficiency: it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE; it also means that the MLE is asymptotically normal.
- Second-order efficiency after correction for bias.
Consistency
Under the conditions outlined below, the maximum likelihood estimator is consistent: if the data were generated by $f(\cdot\,;\theta_0)$, the sequence of MLEs converges in probability to the true value $\theta_0$. Under slightly stronger conditions, the estimator converges almost surely (or strongly) to $\theta_0$:

$$\hat{\theta}_{\mathrm{mle}} \ \xrightarrow{\text{a.s.}}\ \theta_0.$$
In practical applications, data is never generated by $f(\cdot\,;\theta_0)$. Rather, $f(\cdot\,;\theta_0)$ is a model, often in idealized form, of the process that generated the data. It is a common aphorism in statistics that all models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.
To establish consistency, the following conditions are sufficient: identification of the model, compactness of the parameter space, continuity of the log-likelihood in $\theta$, and dominance (the existence of an integrable function that bounds the log-likelihood uniformly over $\Theta$).
The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case, uniform convergence in probability can be checked by showing that the sequence $\widehat{\ell}(\theta; x)$ is stochastically equicontinuous.
If one wants to demonstrate that the ML estimator $\hat{\theta}$ converges to $\theta_0$ almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:

$$\sup_{\theta \in \Theta} \left| \widehat{\ell}(\theta; x) - \ell(\theta) \right| \ \xrightarrow{\text{a.s.}}\ 0.$$
Additionally, if the data were generated by $f(\cdot\,;\theta_0)$, then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. Specifically,

$$\sqrt{n}\left(\hat{\theta}_{\mathrm{mle}} - \theta_0\right) \ \xrightarrow{d}\ \mathcal{N}\!\left(0, \, I^{-1}\right),$$

where $I$ is the Fisher information matrix.
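A brief Monte Carlo sketch of this limit (assuming NumPy; the exponential model with true rate $\lambda_0$ is an illustrative choice, for which $\hat{\lambda} = 1/\bar{x}$ and $I(\lambda_0) = 1/\lambda_0^2$):

```python
import numpy as np

rng = np.random.default_rng(2)
lam0, n, reps = 2.0, 400, 5000

# MLE of the exponential rate is 1 / sample mean; replicate it many times.
samples = rng.exponential(scale=1 / lam0, size=(reps, n))
lam_hat = 1.0 / samples.mean(axis=1)

# sqrt(n) * (lam_hat - lam0) should be approximately N(0, I(lam0)^{-1}) = N(0, lam0^2).
z = np.sqrt(n) * (lam_hat - lam0)
print(z.mean(), z.var())      # mean near 0, variance near lam0**2 = 4
```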
Functional invariance
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability. If the parameter consists of a number of components, then we define their separate maximum likelihood estimators as the corresponding components of the MLE of the complete parameter. Consistent with this, if $\hat{\theta}$ is the MLE for $\theta$, and if $g(\theta)$ is any transformation of $\theta$, then the MLE for $\alpha = g(\theta)$ is by definition

$$\hat{\alpha} = g(\hat{\theta}).$$

It maximizes the so-called profile likelihood:

$$\bar{L}(\alpha) = \sup_{\theta :\, \alpha = g(\theta)} L(\theta).$$
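A quick numerical check of this invariance (a sketch assuming SciPy; the exponential model and the function names are illustrative): maximizing the likelihood directly in the transformed parameter gives the same answer as transforming the MLE.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
data = rng.exponential(scale=0.5, size=1000)    # true rate lambda = 2

# MLE of the exponential rate lambda: closed form 1 / sample mean.
lam_hat = 1.0 / data.mean()

# Reparameterize in alpha = g(lambda) = 1/lambda (the mean) and maximize
# the induced likelihood numerically.
def neg_log_likelihood_mean(alpha):
    return -np.sum(-np.log(alpha) - data / alpha)

res = minimize_scalar(neg_log_likelihood_mean, bounds=(1e-6, 10.0), method="bounded")
alpha_hat = res.x

print(alpha_hat, 1.0 / lam_hat)   # the two values agree up to optimizer tolerance
```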
The MLE is also equivariant with respect to certain transformations of the data. If $y = g(x)$ where $g$ is one-to-one and does not depend on the parameters to be estimated, then the density functions satisfy

$$f_Y(y) = \frac{f_X(x)}{|g'(x)|},$$

and hence the likelihood functions for $X$ and $Y$ differ only by a factor that does not depend on the model parameters.
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case if $X \sim \mathcal{N}(0, 1)$, then $Y = g(X) = e^{\mu + \sigma X}$ follows a log-normal distribution. The density of $Y$ then follows from the formula above, with $f_X$ the standard normal density and $|g'(x)| = \sigma e^{\mu + \sigma x} = \sigma y$ for $y = e^{\mu + \sigma x}$.
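This equivalence can be checked numerically; the following sketch assumes SciPy (fitting the two-parameter log-normal with its location fixed at zero, which in SciPy's parameterization has shape $= \sigma$ and scale $= e^{\mu}$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.lognormal(mean=1.0, sigma=0.4, size=2000)

# Fit a normal distribution to the logarithm of the data (MLE: sample mean
# and uncorrected sample standard deviation of log(data)).
mu_hat, sigma_hat = stats.norm.fit(np.log(data))

# Fit a two-parameter log-normal directly (location fixed at zero);
# SciPy parameterizes it as shape = sigma, scale = exp(mu).
shape, loc, scale = stats.lognorm.fit(data, floc=0)

print(mu_hat, np.log(scale))    # the two mu estimates agree (up to tolerance)
print(sigma_hat, shape)         # and so do the two sigma estimates
```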
Efficiency
As assumed above, if the data were generated by $f(\cdot\,;\theta_0)$, then under certain conditions, it can also be shown that the maximum likelihood estimator converges in distribution to a normal distribution. It is $\sqrt{n}$-consistent and asymptotically efficient, meaning that it reaches the Cramér–Rao bound. Specifically,

$$\sqrt{n}\left(\hat{\theta}_{\mathrm{mle}} - \theta_0\right) \ \xrightarrow{d}\ \mathcal{N}\!\left(0, \, I^{-1}\right),$$

where $I$ is the Fisher information matrix:

$$I_{jk} = \operatorname{E}\left[ -\frac{\partial^2 \ln f_{\theta_0}(X_t)}{\partial \theta_j \, \partial \theta_k} \right].$$
In particular, it means that the bias of the maximum likelihood estimator is equal to zero up to the order $1/\sqrt{n}$.