Principal component analysis


Principal component analysis (PCA) is a linear dimensionality reduction technique with applications in exploratory data analysis, visualization, and data preprocessing.
The data are linearly transformed onto a new coordinate system such that the directions capturing the largest variation in the data can be easily identified.
The principal components of a collection of points in a real coordinate space are a sequence of unit vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first i − 1 vectors. Here, a best-fitting line is defined as one that minimizes the average squared perpendicular distance from the points to the line. These directions constitute an orthonormal basis in which different individual dimensions of the data are linearly uncorrelated. Many studies use the first two principal components in order to plot the data in two dimensions and to visually identify clusters of closely related data points.
Principal component analysis has applications in many fields such as population genetics, microbiome studies, and atmospheric science.

Overview

When performing PCA, the first principal component of a set of variables is the derived variable formed as a linear combination of the original variables that explains the most variance. The second principal component explains the most variance in what is left once the effect of the first component is removed, and we may proceed through iterations until all the variance is explained. PCA is most commonly used when many of the variables are highly correlated with each other and it is desirable to reduce their number to a smaller set of uncorrelated derived variables.
The first principal component can equivalently be defined as a direction that maximizes the variance of the projected data. The i-th principal component can be taken as a direction orthogonal to the first i − 1 principal components that maximizes the variance of the projected data.
For either objective, it can be shown that the principal components are eigenvectors of the data's covariance matrix. Thus, the principal components are often computed by eigendecomposition of the data covariance matrix or singular value decomposition of the data matrix. PCA is the simplest of the true eigenvector-based multivariate analyses and is closely related to factor analysis. Factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis. CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset. Robust and L1-norm-based variants of standard PCA have also been proposed.
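As an illustrative sketch of this computational equivalence (using NumPy and a small synthetic data matrix, neither of which is part of the description above), the principal directions can be obtained either from the eigendecomposition of the covariance matrix or from the singular value decomposition of the centered data matrix; up to sign, the two routes agree.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # toy data: 100 observations, 5 variables
Xc = X - X.mean(axis=0)                # column-wise centering

# Route 1: eigendecomposition of the sample covariance matrix
cov = Xc.T @ Xc / (Xc.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(cov)            # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Route 2: singular value decomposition of the centered data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# The right singular vectors are the same principal directions (up to sign),
# and the squared singular values are proportional to the eigenvalues.
assert np.allclose(np.abs(Vt), np.abs(eigvecs.T), atol=1e-6)
assert np.allclose(s**2 / (Xc.shape[0] - 1), eigvals)
```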

History

PCA was invented in 1901 by Karl Pearson, as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s. Depending on the field of application, it is also named the discrete Karhunen–Loève transform in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition in mechanical engineering, singular value decomposition of X, eigenvalue decomposition of XᵀX in linear algebra, factor analysis, Eckart–Young theorem, or empirical orthogonal functions in meteorological science, empirical eigenfunction decomposition, quasiharmonic modes, spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics.

Intuition

PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small.
To find the axes of the ellipsoid, we must first center the values of each variable in the dataset on 0 by subtracting the mean of the variable's observed values from each of those values. These transformed values are used instead of the original observed values for each of the variables. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to turn them into unit vectors. Once this is done, each of the mutually-orthogonal unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform the covariance matrix into a diagonalized form, in which the diagonal elements represent the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues.
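The procedure just described might be sketched in NumPy as follows, with a small synthetic dataset standing in for real observations: center each variable, form the covariance matrix, take its eigendecomposition, sort the axes by eigenvalue, and report the proportion of variance along each axis.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 3)) @ np.array([[3.0, 0.0, 0.0],
                                             [1.0, 1.0, 0.0],
                                             [0.5, 0.2, 0.1]])  # correlated toy data

centered = data - data.mean(axis=0)                 # step 1: center each variable on 0
cov = np.cov(centered, rowvar=False)                # step 2: covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)              # step 3: eigenvalues and eigenvectors
order = np.argsort(eigvals)[::-1]                   # sort axes by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# eigh already returns unit-length, mutually orthogonal eigenvectors,
# so each column of eigvecs is one axis of the fitted ellipsoid.
explained = eigvals / eigvals.sum()                 # proportion of variance per axis
print(explained)
```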
Biplots and scree plots are used to interpret findings of the PCA.

Details

PCA is defined as an orthogonal linear transformation on a real inner product space that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate, the second greatest variance on the second coordinate, and so on.
Consider an n × p data matrix, X, with column-wise zero empirical mean, where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature.
Mathematically, the transformation is defined by a set of l p-dimensional vectors of weights or coefficients w_(k) = (w1, …, wp)_(k) that map each row vector x_(i) of X to a new vector of principal component scores t_(i) = (t1, …, tl)_(i), given by

    t_k(i) = x_(i) · w_(k)    for i = 1, …, n and k = 1, …, l,

in such a way that the individual variables t1, …, tl of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector.

The above may equivalently be written in matrix form as

    T = XW,

where T_ik = t_k(i), X_ij = x_j(i), and W_jk = w_j(k).
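As a sketch of this definition (again with NumPy and synthetic data; the weight matrix here is simply taken to be the eigenvector matrix of XᵀX), the scores can be computed either one dot product t_k(i) = x_(i) · w_(k) at a time or all at once as T = XW:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
X = X - X.mean(axis=0)                              # column-wise zero empirical mean

# Weight vectors: here taken as the eigenvectors of X^T X (one per column).
_, W = np.linalg.eigh(X.T @ X)
W = W[:, ::-1]                                      # order by decreasing eigenvalue

# Score of observation i on component k: t_k(i) = x_(i) . w_(k)
T_rowwise = np.array([[X[i] @ W[:, k] for k in range(W.shape[1])]
                      for i in range(X.shape[0])])
T_matrix = X @ W                                    # the same thing in matrix form, T = XW
assert np.allclose(T_rowwise, T_matrix)
```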

First component

In order to maximize variance, the first weight vector w_(1) thus has to satisfy

    w_(1) = arg max_{‖w‖ = 1} Σ_i ( t_1(i) )² = arg max_{‖w‖ = 1} Σ_i ( x_(i) · w )².

Equivalently, writing this in matrix form gives

    w_(1) = arg max_{‖w‖ = 1} ‖Xw‖² = arg max_{‖w‖ = 1} wᵀXᵀXw.

Since w_(1) has been defined to be a unit vector, it equivalently also satisfies

    w_(1) = arg max ( wᵀXᵀXw / wᵀw ).

The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XᵀX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector.
With w_(1) found, the first principal component of a data vector x_(i) can then be given as a score t_1(i) = x_(i) · w_(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, ( x_(i) · w_(1) ) w_(1).
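One way to illustrate this characterisation is with power iteration, a method not discussed above and used here purely as a sketch: repeatedly multiplying a random vector by XᵀX and renormalising converges to the leading eigenvector, whose Rayleigh quotient equals the largest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))
X = X - X.mean(axis=0)
C = X.T @ X

# Power iteration (illustrative only) converges to the eigenvector of C
# with the largest eigenvalue, i.e. the first weight vector w_(1).
w = rng.normal(size=C.shape[0])
for _ in range(1000):
    w = C @ w
    w /= np.linalg.norm(w)

rayleigh = w @ C @ w / (w @ w)           # Rayleigh quotient at w
lead_eigval = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue of X^T X
assert np.isclose(rayleigh, lead_eigval)

# The first principal component score of each data vector is its projection on w.
t1 = X @ w
```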

Further components

The k-th component can be found by subtracting the first k − 1 principal components from X:

    X̂_k = X − Σ_{s=1}^{k−1} X w_(s) w_(s)ᵀ,

and then finding the weight vector which extracts the maximum variance from this new data matrix:

    w_(k) = arg max_{‖w‖ = 1} ‖X̂_k w‖² = arg max ( wᵀX̂_kᵀX̂_k w / wᵀw ).

It turns out that this gives the remaining eigenvectors of XᵀX, with the maximum values of the maximised quantity given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XᵀX.
The k-th principal component of a data vector x_(i) can therefore be given as a score t_k(i) = x_(i) · w_(k) in the transformed coordinates, or as the corresponding vector in the space of the original variables, ( x_(i) · w_(k) ) w_(k), where w_(k) is the k-th eigenvector of XᵀX.
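A minimal sketch of this deflation procedure (illustrative only; NumPy's eigendecomposition is reused to extract each leading direction) recovers the same eigenvectors as a single eigendecomposition of XᵀX:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))
X = X - X.mean(axis=0)

def leading_direction(A):
    """Unit eigenvector of A^T A with the largest eigenvalue."""
    vals, vecs = np.linalg.eigh(A.T @ A)
    return vecs[:, np.argmax(vals)]

# Deflation: subtract the variance already explained, then take the
# leading direction of what remains.
ws = []
Xk = X.copy()
for _ in range(X.shape[1]):
    w = leading_direction(Xk)
    ws.append(w)
    Xk = Xk - np.outer(Xk @ w, w)        # subtract the component along w

# The directions recovered by deflation match the eigenvectors of X^T X.
W_deflation = np.column_stack(ws)
vals, W_full = np.linalg.eigh(X.T @ X)
W_full = W_full[:, np.argsort(vals)[::-1]]
assert np.allclose(np.abs(W_deflation), np.abs(W_full), atol=1e-6)
```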
The full principal components decomposition of X can therefore be given as

    T = XW,

where W is a p-by-p matrix of weights whose columns are the eigenvectors of XᵀX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in factor analysis.
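A short sketch of how the loadings might be computed from this decomposition is given below; note that it uses the eigenvalues of XᵀX directly, whereas some texts first divide by n − 1 to work with variances, so the scaling convention is an assumption here.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(150, 4))
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]
eigvals, W = eigvals[order], W[:, order]

T = X @ W                               # full principal components decomposition T = XW
loadings = W * np.sqrt(eigvals)         # columns of W scaled by the square roots of the eigenvalues
```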

Covariances

XᵀX itself can be recognized as proportional to the empirical sample covariance matrix of the dataset: since X has column-wise zero mean, the sample covariance matrix of the variables is XᵀX/(n − 1).
The sample covariance Q between two of the different principal components over the dataset is given by:

    Q(PC(j), PC(k)) ∝ (Xw_(j))ᵀ (Xw_(k))
                   = w_(j)ᵀ XᵀX w_(k)
                   = w_(j)ᵀ λ_(k) w_(k)
                   = λ_(k) w_(j)ᵀ w_(k),

where the eigenvalue property of w_(k) has been used to move from line 2 to line 3. However, eigenvectors w_(j) and w_(k) corresponding to distinct eigenvalues of a symmetric matrix are orthogonal (or, if the eigenvalues are equal, they can be orthogonalised). The product in the final line is therefore zero; there is no sample covariance between different principal components over the dataset.
Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix.
In matrix form, the empirical covariance matrix for the original variables can be written

    Q ∝ XᵀX = WΛWᵀ.

The empirical covariance matrix between the principal components becomes

    WᵀQW ∝ WᵀWΛWᵀW = Λ,

where Λ is the diagonal matrix of eigenvalues λ_(k) of XᵀX. λ_(k) is equal to the sum of the squares over the dataset associated with each component k, that is, λ_(k) = Σ_i t_k(i)² = Σ_i ( x_(i) · w_(k) )².
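Numerically, this diagonalisation can be checked as in the following sketch (synthetic data, NumPy): the unnormalised covariance of the score matrix T is diagonal, with the eigenvalues λ_(k) on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(120, 4)) @ rng.normal(size=(4, 4))   # correlated toy data
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]
eigvals, W = eigvals[order], W[:, order]

T = X @ W
score_cov = T.T @ T                      # proportional to the covariance of the scores
# Off-diagonal entries vanish: different principal components are uncorrelated,
# and the diagonal holds the eigenvalues lambda_k = sum_i t_k(i)^2.
assert np.allclose(score_cov, np.diag(eigvals), atol=1e-6)
```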

Dimensionality reduction

The transformation T = XW maps a data vector x_(i) from an original space of p variables to a new space of p variables which are uncorrelated over the dataset.
To non-dimensionalize the centered data, let Xc represent a characteristic value for the data vectors Xi, given by a norm of the data computed over the dataset of size n. Dividing by these norms transforms the original variables x and y into a new pair of dimensionless, uncorrelated variables p and q, and the new variables are modelled as linearly related. The optimal linear relationship is found by minimizing the total squared reconstruction error; setting the derivative of the error function to zero yields the coefficient of the optimal fit.
Figure: A principal components analysis scatterplot of Y-STR haplotypes calculated from repeat-count values for 37 Y-chromosomal STR markers from 354 individuals. PCA has successfully found linear combinations of the markers that separate out different clusters corresponding to different lines of individuals' Y-chromosomal genetic descent.
Such dimensionality reduction can be a very useful step for visualising and processing high-dimensional datasets, while still retaining as much of the variance in the dataset as possible. For example, selecting L = 2 and keeping only the first two principal components finds the two-dimensional plane through the high-dimensional dataset in which the data are most spread out, so if the data contain clusters these too may be most spread out, and therefore most visible when plotted in a two-dimensional diagram; whereas if two directions through the data are chosen at random, the clusters may be much less spread apart from each other, and may in fact be much more likely to substantially overlap, making them indistinguishable.
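A minimal sketch of such a two-dimensional projection (synthetic data standing in for a real high-dimensional dataset) keeps only the first L = 2 columns of the weight matrix:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 20))           # stand-in for a high-dimensional dataset
X = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(X.T @ X)
order = np.argsort(eigvals)[::-1]
W = W[:, order]

L = 2
T_L = X @ W[:, :L]                       # n x 2 matrix of scores: the 2-D view of the data
# T_L can now be plotted as a scatter diagram to look for clusters.
```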
Similarly, in regression analysis, the larger the number of explanatory variables allowed, the greater is the chance of overfitting the model, producing conclusions that fail to generalise to other datasets. One approach, especially when there are strong correlations between different possible explanatory variables, is to reduce them to a few principal components and then run the regression against them, a method called principal component regression.
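A rough sketch of principal component regression along these lines (synthetic, strongly correlated predictors; ordinary least squares is run on the first few component scores rather than on the original variables):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p, k = 200, 10, 3                       # k = number of components kept
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))   # strongly correlated predictors
y = rng.normal(size=n)
Xc, yc = X - X.mean(axis=0), y - y.mean()

eigvals, W = np.linalg.eigh(Xc.T @ Xc)
W = W[:, np.argsort(eigvals)[::-1]]

T_k = Xc @ W[:, :k]                        # scores on the first k principal components
gamma, *_ = np.linalg.lstsq(T_k, yc, rcond=None)   # regress y on the scores
beta = W[:, :k] @ gamma                    # coefficients mapped back to the original variables
```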
Dimensionality reduction may also be appropriate when the variables in a dataset are noisy. If each column of the dataset contains independent identically distributed Gaussian noise, then the columns of T will also contain similarly identically distributed Gaussian noise, because such a distribution is invariant under the orthogonal transformation W, which acts as a rotation of the coordinate axes. However, with more of the total variance concentrated in the first few principal components compared to the same noise variance, the proportionate effect of the noise is less, and the first few components achieve a higher signal-to-noise ratio. PCA thus can have the effect of concentrating much of the signal into the first few principal components, which can usefully be captured by dimensionality reduction, while the later principal components may be dominated by noise and so disposed of without great loss. If the dataset is not too large, the significance of the principal components can be tested using the parametric bootstrap, as an aid in determining how many principal components to retain.
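As a small illustration of this denoising effect (a sketch with synthetic low-rank data plus Gaussian noise; it is not a recipe for choosing how many components to retain):

```python
import numpy as np

rng = np.random.default_rng(9)
n, p, r = 300, 12, 2
signal = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))   # low-rank "signal"
X = signal + 0.1 * rng.normal(size=(n, p))                   # add i.i.d. Gaussian noise
Xc = X - X.mean(axis=0)

eigvals, W = np.linalg.eigh(Xc.T @ Xc)
W = W[:, np.argsort(eigvals)[::-1]]

L = 2                                       # keep only the first two components
X_denoised = (Xc @ W[:, :L]) @ W[:, :L].T + X.mean(axis=0)

# Most of the signal survives the truncation, while much of the noise,
# spread thinly across the later components, is discarded.
err_noisy    = np.linalg.norm(X - signal)
err_denoised = np.linalg.norm(X_denoised - signal)
print(err_noisy, err_denoised)
```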