Cholesky decomposition


In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices, and posthumously published in 1924.
When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.

Statement

The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form

    A = LL*,

where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix has a Cholesky decomposition, and the lower triangular matrix L is unique if we impose the diagonal to be strictly positive.
The converse holds trivially: if A can be written as LL* for some invertible L, lower triangular or otherwise, then A is Hermitian and positive definite.
When A is a real matrix (hence symmetric positive-definite), the factorization may be written

    A = LL^T,

where L is a real lower triangular matrix with positive diagonal entries.
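As a concrete illustration, here is a minimal NumPy sketch; the matrix entries are arbitrary example values chosen to be symmetric positive definite:

    import numpy as np

    # An arbitrary symmetric positive-definite matrix (illustrative values).
    A = np.array([[25.0, 15.0, -5.0],
                  [15.0, 18.0,  0.0],
                  [-5.0,  0.0, 11.0]])

    # NumPy returns the lower-triangular factor L with positive diagonal.
    L = np.linalg.cholesky(A)

    # Verify the defining property A = L L^T (L L* for real matrices).
    assert np.allclose(L @ L.T, A)
    print(L)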

Positive semidefinite matrices

If a Hermitian matrix A is only positive semidefinite, instead of positive definite, then it still has a decomposition of the form A = LL*, where the diagonal entries of L are allowed to be zero.
The decomposition need not be unique; for example,

    [ 0  0 ]   [   0       0    ] [ 0   cos θ ]
    [ 0  1 ] = [ cos θ   sin θ  ] [ 0   sin θ ]

for any angle θ. However, if the rank of A is r, then there is a unique lower triangular L with exactly r positive diagonal elements and n − r columns containing all zeroes.
Alternatively, the decomposition can be made unique when a pivoting choice is fixed. Formally, if A is an n × n positive semidefinite matrix of rank r, then there is at least one permutation matrix P such that P A P^T has a unique decomposition of the form P A P^T = LL* with

    L = [ L1  0 ]
        [ L2  0 ],

where L1 is an r × r lower triangular matrix with positive diagonal.
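The non-uniqueness in the semidefinite case is easy to check numerically; a small sketch, with arbitrary angle values:

    import numpy as np

    A = np.array([[0.0, 0.0],
                  [0.0, 1.0]])   # positive semidefinite, rank 1

    # Two different lower-triangular factors L(theta) with L L^T = A.
    for theta in (0.3, 1.2):
        L = np.array([[0.0, 0.0],
                      [np.cos(theta), np.sin(theta)]])
        assert np.allclose(L @ L.T, A)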

LDL decomposition

A closely related variant of the classical Cholesky decomposition is the LDL decomposition,

    A = LDL*,

where L is a lower unit triangular (unitriangular) matrix, and D is a diagonal matrix. That is, the diagonal elements of L are required to be 1, at the cost of introducing an additional diagonal matrix D in the decomposition. The main advantage is that the LDL decomposition can be computed and used with essentially the same algorithms, but avoids extracting square roots.
For this reason, the LDL decomposition is often called the square-root-free Cholesky decomposition. For real matrices, the factorization has the form A = LDL^T and is often referred to as the LDL^T decomposition. It is reminiscent of the eigendecomposition of real symmetric matrices, A = QΛQ^T, but is quite different in practice because Λ and D are not similar matrices.
The LDL decomposition is related to the classical Cholesky decomposition of the form A = CC* as follows:

    A = LDL* = L D^(1/2) (D^(1/2))* L* = (L D^(1/2)) (L D^(1/2))*,

so that one may take C = L D^(1/2). Conversely, given the classical Cholesky decomposition A = CC* of a positive definite matrix, if S is a diagonal matrix that contains the main diagonal of C, then A can be decomposed as LDL*, where

    L = C S^(-1),   D = S^2.

If A is positive definite then the diagonal elements of D are all positive.
For positive semidefinite A, an LDL* decomposition exists in which the number of non-zero elements on the diagonal of D is exactly the rank of A.
Some indefinite matrices for which no Cholesky decomposition exists have an LDL decomposition with negative entries in D: it suffices that the first n − 1 leading principal minors of A are non-singular.
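The conversion described above can be sketched in NumPy, assuming a symmetric positive-definite A; for the general (possibly indefinite) case, scipy.linalg.ldl provides a pivoted routine:

    import numpy as np

    A = np.array([[  4.0,  12.0, -16.0],
                  [ 12.0,  37.0, -43.0],
                  [-16.0, -43.0,  98.0]])

    # Classical Cholesky factor C, so that A = C C^T.
    C = np.linalg.cholesky(A)

    # S holds the main diagonal of C; rescaling each column of C by 1/diag
    # gives the unit lower-triangular L, and D = S^2.
    s = np.diag(C)
    L = C / s            # divides column j of C by s[j]
    D = np.diag(s**2)

    assert np.allclose(L @ D @ L.T, A)
    assert np.allclose(np.diag(L), 1.0)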

Example

Here is the Cholesky decomposition of a symmetric real matrix:

    [   4  12 -16 ]   [   2   0   0 ] [  2   6  -8 ]
    [  12  37 -43 ] = [   6   1   0 ] [  0   1   5 ]
    [ -16 -43  98 ]   [  -8   5   3 ] [  0   0   3 ]

And here is its LDL^T decomposition:

    [   4  12 -16 ]   [   1   0   0 ] [  4   0   0 ] [  1   3  -4 ]
    [  12  37 -43 ] = [   3   1   0 ] [  0   1   0 ] [  0   1   5 ]
    [ -16 -43  98 ]   [  -4   5   1 ] [  0   0   9 ] [  0   0   1 ]

Geometric interpretation

The Cholesky decomposition is equivalent to a particular choice of conjugate axes of an ellipsoid. In detail, let the ellipsoid be defined as E = {x : x^T A x = 1}; then by definition, a set of vectors v_1, ..., v_n are conjugate axes of the ellipsoid iff v_i^T A v_j = δ_ij. Then the ellipsoid is precisely V(S) = {Vx : x ∈ S}, where V is the linear map that sends the basis vector e_i to v_i, and S is the unit sphere in n dimensions. That is, the ellipsoid is a linear image of the unit sphere.
Define the matrix V = [v_1 | v_2 | ... | v_n]; then the condition v_i^T A v_j = δ_ij is equivalent to V^T A V = I, that is, A = V^{-T} V^{-1}. Different choices of the conjugate axes correspond to different such decompositions.
The Cholesky decomposition corresponds to choosing v_1 to be parallel to the first axis, v_2 to be within the plane spanned by the first two axes, and so on. This makes V an upper-triangular matrix. Then A = LL^T, where L = V^{-T} is lower-triangular.
Similarly, principal component analysis corresponds to choosing the v_i to be mutually perpendicular. Then, let σ_i = ||v_i||_2 and u_i = v_i / σ_i, and there is V = UΣ, where U = [u_1 | ... | u_n] is an orthogonal matrix and Σ = diag(σ_1, ..., σ_n). This then yields A = UΣ^{-2}U^T, the eigendecomposition of A.

Applications

Numerical solution of system of linear equations

The Cholesky decomposition is mainly used for the numerical solution of linear equations Ax = b. If A is symmetric and positive definite, then Ax = b can be solved by first computing the Cholesky decomposition A = LL*, then solving Ly = b for y by forward substitution, and finally solving L*x = y for x by back substitution.
An alternative way to eliminate the square roots in the LL* decomposition is to compute the LDL decomposition A = LDL*, then solve Ly = b for y, and finally solve DL*x = y for x.
For linear systems that can be put into symmetric form, the Cholesky decomposition is the method of choice, for superior efficiency and numerical stability. Compared to the LU decomposition, it is roughly twice as efficient.
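A sketch of the factor-then-substitute solve in Python, using SciPy's triangular solvers; the matrix and right-hand side are arbitrary illustrative values:

    import numpy as np
    from scipy.linalg import solve_triangular

    A = np.array([[25.0, 15.0, -5.0],
                  [15.0, 18.0,  0.0],
                  [-5.0,  0.0, 11.0]])
    b = np.array([1.0, 2.0, 3.0])

    L = np.linalg.cholesky(A)                     # A = L L^T
    y = solve_triangular(L, b, lower=True)        # forward substitution: L y = b
    x = solve_triangular(L.T, y, lower=False)     # back substitution:   L^T x = y

    assert np.allclose(A @ x, b)

SciPy's cho_factor and cho_solve wrap the same two substitution steps behind a single interface.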

Linear least squares

In a linear least squares problem one seeks a solution x of an over-determined system Ax = b such that the quadratic norm of the residual vector Ax − b is minimal. This may be accomplished by solving the normal equations A^T A x = A^T b by Cholesky decomposition, since the matrix A^T A is symmetric positive definite. A symmetric system matrix may also come from an energy functional, which must be positive from physical considerations; this happens frequently in the numerical solution of partial differential equations.
This method is economical and works well in many applications, but it fails when A^T A is nearly singular. This is best illustrated by the pathological case of a square A, where the determinant of A^T A is the square of that of the original system. It is then better to apply the SVD or a QR decomposition. Givens QR has the advantage that, as with the normal equations, there is no need to keep the whole matrix in memory: the Cholesky factor can be updated with consecutive rows of A.
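A minimal sketch of the normal-equations approach, with random illustrative data; it inherits the conditioning caveat discussed above, so the comparison against a QR/SVD-based solver only holds for well-conditioned A:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 4))     # over-determined system, illustrative data
    b = rng.standard_normal(100)

    # Solve the normal equations (A^T A) x = A^T b with a Cholesky factorization.
    c, low = cho_factor(A.T @ A)
    x = cho_solve((c, low), A.T @ b)

    # Agrees with a QR/SVD-based solver when A^T A is well conditioned.
    x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
    assert np.allclose(x, x_ref)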

Non-linear optimization

Non-linear least squares problems are a particular case of nonlinear optimization. Let f(x) = b be an over-determined system of equations, where f is a non-linear function returning a vector of results. The aim is to minimize the square norm of the residuals b − f(x). An approximate Newton's-method solution is obtained by expanding f into a curtailed Taylor series, f(x + δx) ≈ f(x) + J δx, where J is the Jacobian of f at x, yielding the linear least squares problem

    J δx ≈ b − f(x)

for the correction δx. Because higher Taylor terms are neglected, such a solution is only approximate, if it exists at all. One can then update the expansion point to x + δx and repeat the whole procedure, hoping that the iterations converge to a solution and that the solution is the one needed. Unfortunately, neither is guaranteed, and both must be verified.
Non-linear least squares may also be applied to the linear least squares problem by setting f(x) = Ax, so that J = A. This may be useful if the Cholesky decomposition yields an inaccurate inverse of the triangular matrix L, where A^T A = LL^T, because of rounding errors. Such a procedure is called a differential correction of the solution. As long as the iterations converge, by virtue of the Banach fixed-point theorem they yield the solution with a precision that is only limited by the precision of the calculated residuals; the precision is independent of the rounding errors in L. A poor L may restrict the region of initial guesses from which the iterations converge, or prevent convergence altogether. Usually convergence is slower, e.g. linear, so that ||x_{k+1} − x|| ≤ c ||x_k − x|| with a constant c < 1; such slow convergence may be sped up by the Aitken method. If the computation of L is very costly, it may be reused from previous iterations as long as convergence is maintained. Such a Cholesky procedure may work even for Hilbert matrices, which are notoriously difficult to invert.
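A minimal sketch of the iteration just described (a Gauss–Newton step with the normal equations solved by Cholesky at each iteration); the model f, its Jacobian, the synthetic data, and the starting point are all made up for illustration:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    # Illustrative non-linear model f(x) evaluated at fixed sample points t.
    t = np.linspace(0.0, 1.0, 20)
    def f(x):
        return x[0] * np.exp(x[1] * t)

    def jacobian(x):
        # Columns: df/dx0 and df/dx1 at each sample point.
        return np.column_stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)])

    b = f(np.array([2.0, -1.0]))          # synthetic noise-free "observations"
    x = np.array([1.0, 0.0])              # initial guess

    for _ in range(20):
        J = jacobian(x)
        r = b - f(x)                      # residuals at the current expansion point
        # Solve (J^T J) dx = J^T r by Cholesky and update the expansion point.
        c, low = cho_factor(J.T @ J)
        dx = cho_solve((c, low), J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break

    print(x)   # approaches [2, -1] for this synthetic data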
Non-linear multi-variate functions may be minimized over their parameters using variants of Newton's method called quasi-Newton methods. At iteration k, the search steps in a direction p_k defined by solving B_k p_k = −g_k for p_k, where p_k is the step direction, g_k is the gradient, and B_k is an approximation to the Hessian matrix formed by repeating rank-1 updates at each iteration. Two well-known update formulas are called Davidon–Fletcher–Powell (DFP) and Broyden–Fletcher–Goldfarb–Shanno (BFGS). Loss of the positive-definite condition through round-off error is avoided if, rather than updating an approximation to the inverse of the Hessian, one updates the Cholesky decomposition of an approximation of the Hessian matrix itself.
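One building block for maintaining such a factorization is the rank-one update of a Cholesky factor. The sketch below (the helper name chol_update is illustrative, and the test matrix is arbitrary) implements the standard update that turns a factor of B into a factor of B + vv^T without refactorizing from scratch:

    import numpy as np

    def chol_update(L, v):
        """Return lower-triangular L' with L' L'^T = L L^T + v v^T (rank-one update)."""
        L = L.copy()
        v = v.copy()
        n = v.size
        for k in range(n):
            r = np.hypot(L[k, k], v[k])
            c = r / L[k, k]
            s = v[k] / L[k, k]
            L[k, k] = r
            if k + 1 < n:
                L[k+1:, k] = (L[k+1:, k] + s * v[k+1:]) / c
                v[k+1:] = c * v[k+1:] - s * L[k+1:, k]
        return L

    # Quick check on an arbitrary positive-definite matrix.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5))
    B = M @ M.T + 5.0 * np.eye(5)
    v = rng.standard_normal(5)
    L1 = chol_update(np.linalg.cholesky(B), v)
    assert np.allclose(L1 @ L1.T, B + np.outer(v, v))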

Monte Carlo simulation

The Cholesky decomposition is commonly used in the Monte Carlo method for simulating systems with multiple correlated variables. The covariance matrix is decomposed to give the lower-triangular factor L. Applying this to a vector u of uncorrelated samples produces a sample vector Lu with the covariance properties of the system being modeled.
The following simplified example shows the economy one gets from the Cholesky decomposition: suppose the goal is to generate two correlated normal variables x_1 and x_2 with a given correlation coefficient ρ. To accomplish that, it is necessary to first generate two uncorrelated Gaussian random variables z_1 and z_2. Given the required correlation coefficient ρ, the correlated normal variables can be obtained via the transformations x_1 = z_1 and x_2 = ρ z_1 + sqrt(1 − ρ²) z_2.
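A short sketch of the general procedure in Python; the covariance matrix below is an arbitrary illustrative example:

    import numpy as np

    rng = np.random.default_rng(0)

    # Target covariance of the correlated variables (illustrative values).
    cov = np.array([[1.0, 0.6, 0.3],
                    [0.6, 2.0, 0.5],
                    [0.3, 0.5, 1.5]])

    L = np.linalg.cholesky(cov)

    # Uncorrelated standard-normal samples, one column per draw.
    z = rng.standard_normal((3, 100_000))

    # Correlated samples with (approximately) the target covariance.
    x = L @ z
    print(np.cov(x))   # close to cov for a large sample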

Kalman filters

Unscented Kalman filters commonly use the Cholesky decomposition to choose a set of so-called sigma points. The Kalman filter tracks the average state of a system as a vector x of length N and the covariance as an N × N matrix P. The matrix P is always positive semi-definite and can be decomposed into LL^T. The columns of L can be added to and subtracted from the mean x to form a set of 2N vectors called sigma points. These sigma points completely capture the mean and covariance of the system state.
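A sketch of generating sigma points from a mean and covariance; the helper name and values are illustrative, the points here are the plain ±columns described above, and practical unscented filters additionally scale and weight them (and must handle merely semi-definite P):

    import numpy as np

    def sigma_points(x, P):
        """2N sigma points: the mean plus and minus each column of a Cholesky factor of P."""
        L = np.linalg.cholesky(P)   # P = L L^T (assumed strictly positive definite here)
        return np.array([x + s * L[:, i]
                         for i in range(len(x)) for s in (+1.0, -1.0)])

    x = np.array([1.0, 2.0])
    P = np.array([[0.5, 0.1],
                  [0.1, 0.3]])
    print(sigma_points(x, P))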