Multivariate normal distribution


In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of correlated real-valued random variables, each of which clusters around a mean value.

Definitions

Notation and parametrization

The multivariate normal distribution of a k-dimensional random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathsf T}$ can be written in the following notation:
$$\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma),$$
or to make it explicitly known that $\mathbf{X}$ is k-dimensional,
$$\mathbf{X} \sim \mathcal{N}_k(\boldsymbol\mu, \boldsymbol\Sigma),$$
with k-dimensional mean vector
$$\boldsymbol\mu = \operatorname{E}[\mathbf{X}] = (\operatorname{E}[X_1], \ldots, \operatorname{E}[X_k])^{\mathsf T}$$
and $k \times k$ covariance matrix
$$\Sigma_{i,j} = \operatorname{E}[(X_i - \mu_i)(X_j - \mu_j)] = \operatorname{Cov}[X_i, X_j],$$
such that $1 \le i \le k$ and $1 \le j \le k$. The inverse of the covariance matrix is called the precision matrix, denoted by $\mathbf{Q} = \boldsymbol\Sigma^{-1}$.

Standard normal random vector

A real random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathsf T}$ is called a standard normal random vector if all of its components are independent and each is a zero-mean unit-variance normally distributed random variable, i.e. if $X_n \sim \mathcal{N}(0, 1)$ for all $n = 1, \ldots, k$.
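As a quick numerical sketch (assuming NumPy; the sample size and seed are arbitrary choices), drawing many such vectors shows each component with mean near 0 and variance near 1, and negligible cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 200_000

# Each row is one draw of a k-dimensional standard normal random vector:
# components are independent N(0, 1) variables.
Z = rng.standard_normal((n, k))

component_means = Z.mean(axis=0)          # close to the zero vector
component_vars = Z.var(axis=0)            # close to a vector of ones
empirical_cov = np.cov(Z, rowvar=False)   # close to the identity matrix
```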

Centered normal random vector

A real random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathsf T}$ is called a centered normal random vector if there exists a deterministic $k \times \ell$ matrix $\mathbf{A}$ such that $\mathbf{A}\mathbf{Z}$ has the same distribution as $\mathbf{X}$, where $\mathbf{Z}$ is a standard normal random vector with $\ell$ components.

Normal random vector

A real random vector $\mathbf{X} = (X_1, \ldots, X_k)^{\mathsf T}$ is called a normal random vector if there exists a random $\ell$-vector $\mathbf{Z}$, which is a standard normal random vector, a k-vector $\boldsymbol\mu$, and a $k \times \ell$ matrix $\mathbf{A}$, such that $\mathbf{X} = \mathbf{A}\mathbf{Z} + \boldsymbol\mu$.
Formally:
$$\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma) \iff \text{there exist } \boldsymbol\mu \in \mathbb{R}^k \text{ and } \mathbf{A} \in \mathbb{R}^{k \times \ell} \text{ such that } \mathbf{X} = \mathbf{A}\mathbf{Z} + \boldsymbol\mu \text{ with } Z_n \sim \mathcal{N}(0, 1) \text{ i.i.d.}$$
Here the covariance matrix is $\boldsymbol\Sigma = \mathbf{A}\mathbf{A}^{\mathsf T}$.
In the degenerate case where the covariance matrix is singular, the corresponding distribution has no density; see the degenerate case below for details. This case arises frequently in statistics; for example, in the distribution of the vector of residuals in ordinary least squares regression. The $X_i$ are in general not independent; they can be seen as the result of applying the matrix $\mathbf{A}$ to a collection of independent Gaussian variables $\mathbf{Z}$.
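The construction $\mathbf{X} = \mathbf{A}\mathbf{Z} + \boldsymbol\mu$ translates directly into code (a minimal sketch assuming NumPy; the particular $\mathbf{A}$ and $\boldsymbol\mu$ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary illustrative parameters: any real k x l matrix A and k-vector mu work.
A = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.5, -1.0]])              # k = 3, l = 2
mu = np.array([1.0, -2.0, 0.5])

n = 300_000
Z = rng.standard_normal((n, A.shape[1]))   # rows of iid standard normal components
X = Z @ A.T + mu                           # X = A Z + mu, one row per draw

Sigma = A @ A.T                            # theoretical covariance matrix
empirical_mean = X.mean(axis=0)
empirical_cov = np.cov(X, rowvar=False)
```

Note that since $\mathbf{A}$ here is $3 \times 2$, the covariance $\mathbf{A}\mathbf{A}^{\mathsf T}$ has rank 2, an instance of the singular case just mentioned; sampling via $\mathbf{A}\mathbf{Z} + \boldsymbol\mu$ works regardless.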

Equivalent definitions

The following definitions are equivalent to the definition given above. A random vector has a multivariate normal distribution if it satisfies one of the following equivalent conditions.
  • Every linear combination of its components is normally distributed. That is, for any constant vector $\mathbf{a} \in \mathbb{R}^k$, the random variable $Y = \mathbf{a}^{\mathsf T}\mathbf{X}$ has a univariate normal distribution, where a univariate normal distribution with zero variance is a point mass on its mean.
  • There is a k-vector $\boldsymbol\mu$ and a symmetric, positive semidefinite $k \times k$ matrix $\boldsymbol\Sigma$, such that the characteristic function of $\mathbf{X}$ is
$$\varphi_{\mathbf{X}}(\mathbf{u}) = \exp\!\Big(i\mathbf{u}^{\mathsf T}\boldsymbol\mu - \tfrac{1}{2}\mathbf{u}^{\mathsf T}\boldsymbol\Sigma\mathbf{u}\Big).$$
The spherical normal distribution can be characterised as the unique distribution where components are independent in any orthogonal coordinate system.

Density function

Non-degenerate case

The multivariate normal distribution is said to be "non-degenerate" when the symmetric covariance matrix $\boldsymbol\Sigma$ is positive definite. In this case the distribution has density
$$f_{\mathbf{X}}(x_1, \ldots, x_k) = \frac{\exp\!\big(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{x} - \boldsymbol\mu)\big)}{\sqrt{(2\pi)^k |\boldsymbol\Sigma|}},$$
where $\mathbf{x}$ is a real k-dimensional column vector and $|\boldsymbol\Sigma|$ is the determinant of $\boldsymbol\Sigma$, also known as the generalized variance. The equation above reduces to that of the univariate normal distribution if $\boldsymbol\Sigma$ is a $1 \times 1$ matrix (i.e. a single real number).
The circularly symmetric version of the complex normal distribution has a slightly different form.
Each iso-density locus — the locus of points in k-dimensional space each of which gives the same particular value of the density — is an ellipse or its higher-dimensional generalization; hence the multivariate normal is a special case of the elliptical distributions.
The quantity $\sqrt{(\mathbf{x} - \boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{x} - \boldsymbol\mu)}$ is known as the Mahalanobis distance, which represents the distance of the test point $\mathbf{x}$ from the mean $\boldsymbol\mu$.
The squared Mahalanobis distance $(\mathbf{x} - \boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{x} - \boldsymbol\mu)$ is decomposed into a sum of k terms, each term being a product of three meaningful components.
Note that in the case when $k = 1$, the distribution reduces to a univariate normal distribution and the Mahalanobis distance reduces to the absolute value of the standard score. See also the Interval section below.
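The density formula above can be written out directly in code (a minimal sketch assuming NumPy; `mvn_pdf` is a hypothetical helper name). The two evaluations below check the reduction to the univariate density at $k = 1$ and the value $1/(2\pi)$ at the mean of a standard bivariate normal:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Non-degenerate multivariate normal density, written straight from the formula."""
    k = len(mu)
    diff = x - mu
    maha_sq = diff @ np.linalg.inv(Sigma) @ diff      # squared Mahalanobis distance
    return np.exp(-0.5 * maha_sq) / np.sqrt((2 * np.pi) ** k * np.linalg.det(Sigma))

# k = 1: the formula reduces to the univariate normal density.
one_d = mvn_pdf(np.array([0.0]), np.array([0.0]), np.array([[1.0]]))

# k = 2: at the mean of a standard bivariate normal the density is 1/(2*pi).
two_d = mvn_pdf(np.zeros(2), np.zeros(2), np.eye(2))
```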

Bivariate case

In the 2-dimensional nonsingular case, the probability density function of a vector $(X, Y)^{\mathsf T}$ is:
$$f(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho^2}} \exp\!\left(-\frac{1}{2(1 - \rho^2)}\left[\left(\frac{x - \mu_X}{\sigma_X}\right)^2 - 2\rho\frac{(x - \mu_X)(y - \mu_Y)}{\sigma_X\sigma_Y} + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2\right]\right),$$
where $\rho$ is the correlation between $X$ and $Y$ and where $\sigma_X > 0$ and $\sigma_Y > 0$. In this case,
$$\boldsymbol\mu = \begin{pmatrix}\mu_X \\ \mu_Y\end{pmatrix}, \qquad \boldsymbol\Sigma = \begin{pmatrix}\sigma_X^2 & \rho\sigma_X\sigma_Y \\ \rho\sigma_X\sigma_Y & \sigma_Y^2\end{pmatrix}.$$
In the bivariate case, the first equivalent condition for multivariate normality can be made less restrictive: it is sufficient to verify that countably infinitely many distinct linear combinations of $X$ and $Y$ are normal in order to conclude that the vector $(X, Y)^{\mathsf T}$ is bivariate normal.
The bivariate iso-density loci plotted in the $(x, y)$-plane are ellipses, whose principal axes are defined by the eigenvectors of the covariance matrix $\boldsymbol\Sigma$.
As the absolute value of the correlation parameter $\rho$ increases, these loci are squeezed toward the following line:
$$y(x) = \operatorname{sgn}(\rho)\frac{\sigma_Y}{\sigma_X}(x - \mu_X) + \mu_Y.$$
This is because this expression, with $\operatorname{sgn}(\rho)$ replaced by $\rho$, is the best linear unbiased prediction of $Y$ given a value of $X$.
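The bivariate formula can be cross-checked against the general matrix form of the density (a NumPy sketch with arbitrary parameter values; `bivariate_pdf` is a hypothetical helper name):

```python
import numpy as np

def bivariate_pdf(x, y, mu_x, mu_y, sx, sy, rho):
    """Bivariate normal density written directly in terms of sigma_X, sigma_Y, rho."""
    zx = (x - mu_x) / sx
    zy = (y - mu_y) / sy
    q = (zx ** 2 - 2 * rho * zx * zy + zy ** 2) / (1 - rho ** 2)
    return np.exp(-0.5 * q) / (2 * np.pi * sx * sy * np.sqrt(1 - rho ** 2))

# The same density via the general matrix form, for a cross-check.
mu = np.array([1.0, -1.0])
sx, sy, rho = 2.0, 0.5, 0.6
Sigma = np.array([[sx ** 2, rho * sx * sy],
                  [rho * sx * sy, sy ** 2]])
pt = np.array([1.5, 0.0])
diff = pt - mu
general = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / np.sqrt(
    (2 * np.pi) ** 2 * np.linalg.det(Sigma))
direct = bivariate_pdf(pt[0], pt[1], mu[0], mu[1], sx, sy, rho)
```

Both expressions agree because $|\boldsymbol\Sigma| = \sigma_X^2\sigma_Y^2(1 - \rho^2)$ in the bivariate case.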

Degenerate case

If the covariance matrix $\boldsymbol\Sigma$ is not full rank, then the multivariate normal distribution is degenerate and does not have a density. More precisely, it does not have a density with respect to k-dimensional Lebesgue measure. Only random vectors whose distributions are absolutely continuous with respect to a measure are said to have densities (with respect to that measure). To talk about densities but avoid dealing with measure-theoretic complications it can be simpler to restrict attention to a subset of $\operatorname{rank}(\boldsymbol\Sigma)$ of the coordinates of $\mathbf{x}$ such that the covariance matrix for this subset is positive definite; then the other coordinates may be thought of as an affine function of these selected coordinates.
To talk about densities meaningfully in singular cases, then, we must select a different base measure. Using the disintegration theorem we can define a restriction of Lebesgue measure to the $\operatorname{rank}(\boldsymbol\Sigma)$-dimensional affine subspace of $\mathbb{R}^k$ where the Gaussian distribution is supported, i.e. $\{\boldsymbol\mu + \boldsymbol\Sigma^{1/2}\mathbf{v} : \mathbf{v} \in \mathbb{R}^k\}$. With respect to this measure the distribution has density
$$f(\mathbf{x}) = \frac{\exp\!\big(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{+}(\mathbf{x} - \boldsymbol\mu)\big)}{\sqrt{(2\pi)^{\operatorname{rank}(\boldsymbol\Sigma)}\det\nolimits^{*}\boldsymbol\Sigma}},$$
where $\boldsymbol\Sigma^{+}$ is the generalized inverse and $\det^{*}\boldsymbol\Sigma$ is the pseudo-determinant.
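The generalized inverse and pseudo-determinant are both easy to compute numerically (a NumPy sketch; the rank-1 covariance below is an arbitrary illustrative choice, and `degenerate_pdf` is a hypothetical helper name):

```python
import numpy as np

# Rank-deficient covariance: Sigma = A A^T with A of shape 2 x 1, so rank(Sigma) = 1.
A = np.array([[1.0],
              [2.0]])
Sigma = A @ A.T                     # [[1, 2], [2, 4]], eigenvalues 0 and 5
mu = np.zeros(2)

eigvals = np.linalg.eigvalsh(Sigma)
rank = int(np.sum(eigvals > 1e-12))
pseudo_det = float(np.prod(eigvals[eigvals > 1e-12]))  # product of nonzero eigenvalues
Sigma_plus = np.linalg.pinv(Sigma)                     # Moore-Penrose generalized inverse

def degenerate_pdf(x):
    """Density w.r.t. Lebesgue measure restricted to the affine support."""
    diff = x - mu
    return float(np.exp(-0.5 * diff @ Sigma_plus @ diff)
                 / np.sqrt((2 * np.pi) ** rank * pseudo_det))

value_at_mean = degenerate_pdf(mu)   # equals 1 / sqrt(2 * pi * 5) for this Sigma
```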

Cumulative distribution function

The notion of cumulative distribution function in dimension 1 can be extended in two ways to the multidimensional case, based on rectangular and ellipsoidal regions.
The first way is to define the cdf $F(\mathbf{x})$ of a random vector $\mathbf{X}$ as the probability that all components of $\mathbf{X}$ are less than or equal to the corresponding values in the vector $\mathbf{x}$:
$$F(\mathbf{x}) = \mathbb{P}(X_1 \le x_1, \ldots, X_k \le x_k).$$
Though there is no closed form for $F(\mathbf{x})$, there are a number of algorithms that estimate it numerically.
Another way is to define the cdf $F(r)$ as the probability that a sample lies inside the ellipsoid determined by its Mahalanobis distance $r$ from the Gaussian, a direct generalization of the standard deviation:
$$F(r) = \mathbb{P}\Big(\sqrt{(\mathbf{X} - \boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{X} - \boldsymbol\mu)} \le r\Big).$$
In order to compute the values of this function, closed analytic formulae exist: since the squared Mahalanobis distance of a k-variate normal vector follows a chi-squared distribution with k degrees of freedom, $F(r)$ equals the chi-squared cdf with k degrees of freedom evaluated at $r^2$.
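As a numerical cross-check for $k = 2$, where the squared Mahalanobis distance is chi-squared with two degrees of freedom (an exponential distribution with mean 2), the ellipsoidal cdf reduces to $1 - e^{-r^2/2}$ (a NumPy sketch; the covariance factor below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# For k = 2 the squared Mahalanobis distance is chi-squared with 2 degrees of
# freedom, i.e. exponential with mean 2, so this cdf has a closed form.
def ellipsoid_cdf_2d(r):
    return 1.0 - np.exp(-r ** 2 / 2.0)

# Monte Carlo cross-check with an arbitrary non-spherical 2-d Gaussian.
mu = np.array([3.0, -1.0])
A = np.array([[1.0, 0.0],
              [0.8, 0.6]])
Sigma = A @ A.T
n = 400_000
X = rng.standard_normal((n, 2)) @ A.T + mu
diff = X - mu
maha_sq = np.sum((diff @ np.linalg.inv(Sigma)) * diff, axis=1)

r = 1.5
mc_estimate = float(np.mean(maha_sq <= r ** 2))
closed_form = float(ellipsoid_cdf_2d(r))
```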

Interval

The interval for the multivariate normal distribution yields a region consisting of those vectors $\mathbf{x}$ satisfying
$$(\mathbf{x} - \boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{x} - \boldsymbol\mu) \le \chi_k^2(p).$$
Here $\mathbf{x}$ is a k-dimensional vector, $\boldsymbol\mu$ is the known k-dimensional mean vector, $\boldsymbol\Sigma$ is the known covariance matrix and $\chi_k^2(p)$ is the quantile function for probability $p$ of the chi-squared distribution with k degrees of freedom.
When $k = 2$, the expression defines the interior of an ellipse and the chi-squared distribution simplifies to an exponential distribution with mean equal to two.
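For $k = 2$ the exponential form gives the quantile in closed form, $\chi_2^2(p) = -2\ln(1 - p)$, so the region test is a few lines of code (a NumPy sketch; the covariance matrix and test points are arbitrary illustrative values):

```python
import numpy as np

# For k = 2, the chi-squared distribution with 2 degrees of freedom is an
# exponential distribution with mean 2, so its quantile function is closed-form.
def chi2_quantile_2dof(p):
    return -2.0 * np.log(1.0 - p)

# Arbitrary illustrative parameters.
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
p = 0.95
threshold = chi2_quantile_2dof(p)      # boundary value of the 95% ellipse

def in_region(x):
    """True if x lies inside the ellipse covering probability p."""
    diff = x - mu
    return bool(diff @ np.linalg.inv(Sigma) @ diff <= threshold)

inside = in_region(np.array([0.5, 0.5]))      # near the mean
outside = in_region(np.array([10.0, 10.0]))   # far from the mean
```

For general k one would obtain the threshold from a chi-squared quantile routine instead, e.g. `scipy.stats.chi2.ppf(p, k)`.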

Complementary cumulative distribution function (tail distribution)

The complementary cumulative distribution function (ccdf) or the tail distribution is defined as
$$\bar{F}(\mathbf{x}) = 1 - \mathbb{P}(X_1 \le x_1, \ldots, X_k \le x_k).$$
When $\mathbf{X} \sim \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$, the ccdf can be written as a probability involving the maximum of dependent Gaussian variables:
$$\bar{F}(\mathbf{x}) = \mathbb{P}\Big(\bigcup_i \{X_i > x_i\}\Big) = \mathbb{P}\big(\max_i Y_i > 0\big), \quad \text{where } Y_i = X_i - x_i.$$
While no simple closed formula exists for computing the ccdf, the maximum of dependent Gaussian variables can be estimated accurately via the Monte Carlo method.
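A plain Monte Carlo estimate of the ccdf along these lines can be sketched as follows (assuming NumPy; the covariance matrix, threshold vector, seed, and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Arbitrary illustrative parameters.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.5],
                  [0.2, 0.5, 1.0]])
x = np.array([1.0, 1.0, 1.0])

# ccdf(x) = P(at least one X_i > x_i) = P(max_i (X_i - x_i) > 0),
# estimated by plain Monte Carlo.
n = 500_000
L = np.linalg.cholesky(Sigma)
samples = rng.standard_normal((n, 3)) @ L.T + mu
ccdf_estimate = float(np.mean(np.max(samples - x, axis=1) > 0.0))
```

The estimate must lie between the largest single-coordinate tail probability ($1 - \Phi(1) \approx 0.159$ here) and the union bound ($3 \times 0.159 \approx 0.476$).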

Properties

Moments

The kth-order moments of $\mathbf{x}$ are given by
$$\mu_{1,\ldots,N}(\mathbf{x}) = \operatorname{E}\Big[\prod_{j=1}^{N} X_j^{r_j}\Big],$$
where $r_1 + r_2 + \cdots + r_N = k$.
The kth-order central moments are as follows:
  • If k is odd, $\mu_{1,\ldots,k}(\mathbf{x} - \boldsymbol\mu) = 0$.
  • If k is even, with $k = 2\lambda$, then
$$\mu_{1,\ldots,2\lambda}(\mathbf{x} - \boldsymbol\mu) = \sum \big(\sigma_{ij}\sigma_{k\ell}\cdots\sigma_{XZ}\big),$$
where the sum is taken over all allocations of the set $\{1, \ldots, 2\lambda\}$ into λ (unordered) pairs. That is, for a kth central moment, one sums the products of $\lambda = k/2$ covariances (the expected value is taken to be 0 in the interests of parsimony).
This yields $(k - 1)!! = \frac{k!}{2^{k/2}(k/2)!}$ terms in the sum, each being the product of $k/2$ covariances. For fourth-order moments there are three terms. For sixth-order moments there are $3 \times 5 = 15$ terms, and for eighth-order moments there are $3 \times 5 \times 7 = 105$ terms.
The covariances are then determined by replacing the terms of the list $[1, \ldots, 2\lambda]$ by the corresponding terms of the list consisting of $r_1$ ones, then $r_2$ twos, etc. To illustrate this, examine the following 4th-order central moment case:
$$\operatorname{E}[X_i X_j X_k X_n] = \sigma_{ij}\sigma_{kn} + \sigma_{ik}\sigma_{jn} + \sigma_{in}\sigma_{jk},$$
where $\sigma_{ij}$ is the covariance of $X_i$ and $X_j$. With the above method one first finds the general case for a kth moment with k different X variables, and then one simplifies this accordingly. For example, for $\operatorname{E}[X_i^2 X_k X_n]$, one lets $X_j = X_i$ and one uses the fact that $\sigma_{ii} = \sigma_i^2$.
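The pairing combinatorics (Isserlis' theorem) can be checked with a short enumeration in pure Python (`pair_partitions` and `central_moment` are hypothetical helper names):

```python
from math import prod

def pair_partitions(indices):
    """All allocations of an even-sized tuple of indices into unordered pairs."""
    if not indices:
        yield []
        return
    first, rest = indices[0], indices[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pair_partitions(remaining):
            yield [(first, partner)] + tail

def central_moment(indices, cov):
    """Central moment E[(X_i1 - mu_i1) ... (X_ik - mu_ik)] by summing the
    products of covariances over all pairings (Isserlis' theorem)."""
    if len(indices) % 2 == 1:
        return 0.0                      # odd central moments vanish
    return sum(prod(cov[i][j] for i, j in pairing)
               for pairing in pair_partitions(tuple(indices)))

# Counting the pairings reproduces the term counts quoted above: 3, 15, 105.
n4 = len(list(pair_partitions((0, 1, 2, 3))))
n6 = len(list(pair_partitions(tuple(range(6)))))
n8 = len(list(pair_partitions(tuple(range(8)))))
```

As a sanity check, `central_moment((0, 0, 0, 0), [[1.0]])` returns 3, the classical fourth moment of a unit-variance Gaussian.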

Functions of a normal vector

A quadratic form of a normal vector $\mathbf{x}$, $q(\mathbf{x}) = \mathbf{x}^{\mathsf T}\mathbf{Q}_2\mathbf{x} + \mathbf{q}_1^{\mathsf T}\mathbf{x} + q_0$ (where $\mathbf{Q}_2$ is a matrix, $\mathbf{q}_1$ is a vector, and $q_0$ is a scalar), is a generalized chi-squared variable. The direction of a normal vector follows a projected normal distribution.
If is a general scalar-valued function of a normal vector, its probability density function, cumulative distribution function, and inverse cumulative distribution function can be computed with the numerical method of ray-tracing.

Likelihood function

If the mean and covariance matrix are known, the log likelihood of an observed vector $\mathbf{x}$ is simply the log of the probability density function:
$$\ln L = -\frac{1}{2}\Big(\ln|\boldsymbol\Sigma| + (\mathbf{x} - \boldsymbol\mu)^{\mathsf T}\boldsymbol\Sigma^{-1}(\mathbf{x} - \boldsymbol\mu) + k\ln 2\pi\Big).$$
The circularly symmetric version of the noncentral complex case, where $\mathbf{z}$ is a vector of complex numbers, would be
$$\ln L = -\ln|\boldsymbol\Sigma| - (\mathbf{z} - \boldsymbol\mu)^{\dagger}\boldsymbol\Sigma^{-1}(\mathbf{z} - \boldsymbol\mu) - k\ln\pi,$$
i.e. with the conjugate transpose (indicated by $\dagger$) replacing the normal transpose. This is slightly different than in the real case, because the circularly symmetric version of the complex normal distribution has a slightly different form for the normalization constant.
A similar notation is used for multiple linear regression.
Since the log likelihood of a normal vector is a quadratic form of the normal vector, it is distributed as a generalized chi-squared variable.
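The real-case log likelihood can be written out directly and cross-checked against the density (a NumPy sketch with arbitrary parameter values; `mvn_log_likelihood` is a hypothetical helper name):

```python
import numpy as np

def mvn_log_likelihood(x, mu, Sigma):
    """Log likelihood of one observed vector x under a known mean and covariance."""
    k = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)   # numerically stable log-determinant
    return -0.5 * (logdet + diff @ np.linalg.inv(Sigma) @ diff + k * np.log(2 * np.pi))

# Arbitrary illustrative parameters.
mu = np.array([1.0, 2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
x = np.array([0.5, 2.5])

ll = mvn_log_likelihood(x, mu, Sigma)

# Cross-check: exponentiating the log likelihood recovers the density formula.
diff = x - mu
density = np.exp(-0.5 * diff @ np.linalg.inv(Sigma) @ diff) / np.sqrt(
    (2 * np.pi) ** 2 * np.linalg.det(Sigma))
```

Using `slogdet` rather than `log(det(...))` avoids overflow and underflow of the determinant in higher dimensions.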