Partial least squares regression
Partial least squares regression is a statistical method that bears some relation to principal components regression and is a reduced rank regression; instead of finding hyperplanes of maximum variance between the response and independent variables, it finds a linear regression model by projecting the predicted variables and the observable variables to a new space of maximum covariance. Because both the X and Y data are projected to new spaces, the PLS family of methods is known as bilinear factor models. Partial least squares discriminant analysis is a variant used when Y is categorical.
PLS is used to find the fundamental relations between two matrices, i.e. a latent variable approach to modeling the covariance structures in these two spaces. A PLS model will try to find the multidimensional direction in the X space that explains the maximum multidimensional variance direction in the Y space. PLS regression is particularly suited when the matrix of predictors has more variables than observations, and when there is multicollinearity among X values. By contrast, standard regression will fail in these cases.
Partial least squares was introduced by the Swedish statistician Herman O. A. Wold, who then developed it with his son, Svante Wold. An alternative term for PLS is projection to latent structures, but the term partial least squares is still dominant in many areas. Although the original applications were in the social sciences, PLS regression is today most widely used in chemometrics and related areas. It is also used in bioinformatics, sensometrics, neuroscience, and anthropology.
Core idea
[Image:Core Idea PLS.png|thumb|450px|Core Idea of PLS: the loading vectors in the input and output space are drawn in red (not normalized, for better visibility).]
We are given a sample of $n$ paired observations $(x_i, y_i)$, $i \in \{1, \ldots, n\}$.
In the first step, the partial least squares regression searches for normalized directions $p$, $q$ that maximize the covariance
$$\max_{p,\,q} \operatorname{cov}(Xp, Yq).$$
Note that in the following, the algorithm is denoted in matrix notation.
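The maximization in the first step has a closed-form solution: since $\operatorname{cov}(Xp, Yq)$ is proportional to $p^\mathrm{T}(X^\mathrm{T}Y)q$ for centered data, the optimal unit-norm pair is given by the leading singular vectors of $X^\mathrm{T}Y$. The sketch below checks this numerically on synthetic data; the variable names are illustrative.

```python
# Illustrative sketch: the first PLS direction pair comes from the leading
# singular vectors of X^T Y, because cov(Xp, Yq) ∝ p^T (X^T Y) q is
# maximized over unit vectors by those singular vectors (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Y = rng.normal(size=(100, 3))
X -= X.mean(axis=0)          # center so cross-products are covariances (up to 1/n)
Y -= Y.mean(axis=0)

U, s, Vt = np.linalg.svd(X.T @ Y)
p, q = U[:, 0], Vt[0]        # leading left/right singular vectors

best = p @ X.T @ Y @ q       # covariance (times n) achieved by the leading pair
# No random unit-direction pair should exceed the SVD pair.
for _ in range(200):
    a = rng.normal(size=5); a /= np.linalg.norm(a)
    b = rng.normal(size=3); b /= np.linalg.norm(b)
    assert a @ X.T @ Y @ b <= best + 1e-9
```

The achieved maximum equals the largest singular value of $X^\mathrm{T}Y$, which is why SVD-based PLS variants exist alongside the iterative ones.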
Underlying model
The general underlying model of multivariate PLS with $\ell$ components is
$$X = T P^\mathrm{T} + E$$
$$Y = U Q^\mathrm{T} + F$$
where
- $X$ is an $n \times m$ matrix of predictors
- $Y$ is an $n \times p$ matrix of responses
- $T$ and $U$ are $n \times \ell$ matrices that are, respectively, projections of $X$ (the X score, component or factor matrix) and projections of $Y$ (the Y scores)
- $P$ and $Q$ are, respectively, $m \times \ell$ and $p \times \ell$ loading matrices
- and matrices $E$ and $F$ are the error terms, assumed to be independent and identically distributed random normal variables.
Note that this covariance is defined pair by pair: the covariance of column $i$ of $T$ with column $i$ of $U$ is maximized. Additionally, the covariance of column $i$ of $T$ with column $j$ of $U$ (for $i \neq j$) is zero.
In PLSR, the loadings are thus chosen so that the scores form an orthogonal basis. This is a major difference with PCA, where orthogonality is imposed onto the loadings rather than the scores.
Algorithms
A number of variants of PLS exist for estimating the factor and loading matrices $T$, $U$, $P$ and $Q$. Most of them construct estimates of the linear regression between $X$ and $Y$ as $Y = X\tilde{B} + \tilde{B}_0$. Some PLS algorithms are only appropriate for the case where $Y$ is a column vector, while others deal with the general case of a matrix $Y$. Algorithms also differ on whether they estimate the factor matrix $T$ as an orthogonal matrix or not. The final prediction will be the same for all these varieties of PLS, but the components will differ.
PLS consists of iteratively repeating the following steps $k$ times (once per component):
- finding the directions of maximal covariance in input and output space
- performing least squares regression on the input score
- deflating the input and/or target
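The steps above can be sketched in NumPy as a simple NIPALS-style loop (one of several possible variants, not any one canonical implementation); the function name and structure are illustrative.

```python
# Illustrative NIPALS-style PLS loop: at each step, find the direction of
# maximal covariance, regress on the resulting input score, then deflate X.
import numpy as np

def pls_components(X, Y, k):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    T, W, P = [], [], []
    for _ in range(k):
        # 1. direction of maximal covariance: leading singular vector of X^T Y
        w = np.linalg.svd(X.T @ Y)[0][:, 0]
        t = X @ w                        # input score
        # 2. least-squares regression of X on the score t
        p = X.T @ t / (t @ t)
        # 3. deflate the input before extracting the next component
        X = X - np.outer(t, p)
        T.append(t); W.append(w); P.append(p)
    return np.column_stack(T), np.column_stack(W), np.column_stack(P)
```

Because each deflation removes the part of $X$ explained by the current score, the successive scores come out mutually orthogonal, as described in the underlying-model section.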
PLS1
PLS1 is a widely used algorithm appropriate for the case where $Y$ is a vector, here written $y$. It estimates $T$ as an orthonormal matrix. In pseudocode it is expressed below (capital letters denote matrices; lower-case letters denote vectors if they are superscripted and scalars if they are subscripted).
1 function PLS1($X, y, \ell$)
2     $X^{(0)} \leftarrow X$
3     $w^{(0)} \leftarrow X^\mathrm{T}y / \|X^\mathrm{T}y\|$, an initial estimate of $w$.
4     for $k = 0$ to $\ell - 1$
5         $t^{(k)} \leftarrow X^{(k)} w^{(k)}$
6         $t_k \leftarrow {t^{(k)}}^\mathrm{T} t^{(k)}$ (note this is a scalar)
7         $t^{(k)} \leftarrow t^{(k)} / t_k$
8         $p^{(k)} \leftarrow {X^{(k)}}^\mathrm{T} t^{(k)}$
9         $q_k \leftarrow y^\mathrm{T} t^{(k)}$ (note this is a scalar)
10        if $q_k = 0$
11            $\ell \leftarrow k$, break the for loop
12        if $k < \ell - 1$
13            $X^{(k+1)} \leftarrow X^{(k)} - t_k t^{(k)} {p^{(k)}}^\mathrm{T}$
14            $w^{(k+1)} \leftarrow {X^{(k+1)}}^\mathrm{T} y$
15    end for
16    define $W$ to be the matrix with columns $w^{(0)}, \ldots, w^{(\ell-1)}$. Do the same to form the $P$ matrix and $q$ vector.
17    $B \leftarrow W (P^\mathrm{T} W)^{-1} q$
18    $B_0 \leftarrow q_0 - {p^{(0)}}^\mathrm{T} B$
19    return $B, B_0$
This form of the algorithm does not require centering of the input $X$ and $y$, as this is performed implicitly by the algorithm.
This algorithm features 'deflation' of the matrix $X$, but deflation of the vector $y$ is not performed, as it is not necessary. The user-supplied variable $\ell$ is the limit on the number of latent factors in the regression; if it equals the rank of the matrix $X$, the algorithm will yield the least squares regression estimates for $B$ and $B_0$.
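A compact NumPy rendering of a PLS1-style fit is sketched below. It is a simplified variant that centers the data explicitly (rather than relying on the implicit centering of the pseudocode) and recovers the intercept from the means; the function and variable names are illustrative, not from any library.

```python
# Illustrative PLS1-style fit in NumPy. Centers the data explicitly,
# extracts weights/loadings component by component with deflation of X,
# and forms the coefficients via B = W (P^T W)^{-1} q.
import numpy as np

def pls1(X, y, n_components):
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk = X - x_mean
    yc = y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yc                     # covariance direction
        nrm = np.linalg.norm(w)
        if nrm == 0:                      # no covariance left to model
            break
        w /= nrm
        t = Xk @ w                        # input score
        tt = t @ t
        p = Xk.T @ t / tt                 # regress current X on the score
        q.append(yc @ t / tt)             # regress y on the score
        Xk = Xk - np.outer(t, p)          # deflate the input
        W.append(w); P.append(p)
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # regression coefficients
    B0 = y_mean - x_mean @ B              # intercept from the means
    return B, B0
```

Consistent with the remark above, when `n_components` equals the rank of $X$ the returned coefficients coincide with the ordinary least squares solution.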
Extensions
OPLS
In 2002 a new method was published called orthogonal projections to latent structures (OPLS). In OPLS, continuous variable data are separated into predictive and uncorrelated (orthogonal) information. This leads to improved diagnostics, as well as more easily interpreted visualization. However, these changes only improve the interpretability, not the predictivity, of the PLS models. Similarly, OPLS-DA (discriminant analysis) may be applied when working with discrete variables, as in classification and biomarker studies.

The general underlying model of OPLS is
$$X = T P^\mathrm{T} + T_\mathrm{Y} P_\mathrm{Y}^\mathrm{T} + E$$
$$Y = U Q^\mathrm{T} + F$$
or in O2-PLS
$$X = T P^\mathrm{T} + T_\mathrm{Y} P_\mathrm{Y}^\mathrm{T} + E$$
$$Y = U Q^\mathrm{T} + U_\mathrm{X} Q_\mathrm{X}^\mathrm{T} + F$$