Low-rank matrix approximations
Low-rank matrix approximations are essential tools in the application of kernel methods to large-scale learning problems.
Kernel methods project data points into a high-dimensional or infinite-dimensional feature space and find the optimal splitting hyperplane. In kernel methods the data is represented in a kernel matrix, and many machine learning algorithms can be solved using that matrix alone. The main problem of kernel methods is the high computational cost associated with the kernel matrix: the cost is at least quadratic in the number of training data points, and since most kernel methods involve a matrix inversion or an eigenvalue decomposition, the cost becomes cubic in the number of training data points. Large training sets therefore cause large storage and computational costs. Although low-rank decomposition methods reduce this cost, they still require computing the full kernel matrix. One approach to dealing with this problem is low-rank matrix approximation. The most popular examples are the Nyström approximation and randomized feature map approximation methods. Both have been successfully applied to efficient kernel learning.
Nyström approximation
Kernel methods become computationally infeasible when the number of points is so large that the kernel matrix cannot be stored in memory. If $n$ is the number of training examples, the storage and computational cost required to find the solution of the problem using a general kernel method is $O(n^2)$ and $O(n^3)$ respectively. The Nyström approximation can allow a significant speed-up of the computations. This speed-up is achieved by using, instead of the kernel matrix, its approximation $\tilde{K}$ of rank $q$. An advantage of the method is that it is not necessary to compute or store the whole kernel matrix, but only its submatrix of size $n \times q$.
It reduces the storage and complexity requirements to $O(nq)$ and $O(nq^2)$ respectively.
The method is named "Nyström approximation" because it can be interpreted as a case of the Nyström method from integral equation theory.
Kernel approximation
Consider a positive-definite kernel function $k$. Given data points $x_1, \dots, x_n$, we can form the kernel matrix $K \in \mathbb{R}^{n \times n}$ such that $K_{ij} = k(x_i, x_j)$. Now, let $q < n$ be an integer; then we can divide the kernel matrix as

$$K = \begin{bmatrix} K_{11} & K_{21}^{\mathsf T} \\ K_{21} & K_{22} \end{bmatrix},$$

where $K_{11} \in \mathbb{R}^{q \times q}$ is the top-left corner of it. Also, set $\hat{K} = \begin{bmatrix} K_{11} \\ K_{21} \end{bmatrix} \in \mathbb{R}^{n \times q}$ to be its first $q$ columns.
The Nyström approximation of $K$ in this case is $\tilde{K} = \hat{K} K_{11}^{+} \hat{K}^{\mathsf T}$, where $K_{11}^{+}$ is the Moore–Penrose pseudoinverse of $K_{11}$.
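The construction can be sketched in NumPy as follows (a minimal illustration; the RBF kernel, the function names, and the choice of the first $q$ points as landmarks are assumptions for exposition, not part of the method's definition):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """RBF kernel matrix with entries k(a, b) = exp(-gamma * ||a - b||^2)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom(X, q, gamma=0.5):
    """Rank-q Nystrom approximation K_tilde = K_hat K11^+ K_hat^T.

    Only the n x q submatrix K_hat is ever formed, never the full
    n x n kernel matrix.
    """
    K_hat = rbf_kernel(X, X[:q], gamma)   # first q columns of K
    K11 = K_hat[:q, :]                    # top-left q x q block
    return K_hat @ np.linalg.pinv(K11) @ K_hat.T

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
K = rbf_kernel(X, X)                      # full matrix, for comparison only
K_tilde = nystrom(X, q=50)
err = np.linalg.norm(K - K_tilde) / np.linalg.norm(K)
```

In practice the landmark points are often chosen by uniform sampling or leverage-score sampling rather than simply taking the first $q$ rows.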
Properties
By Mercer's theorem, we can decompose the kernel matrix as a Gram matrix: $K = X^{\mathsf T} X$, where $X = [x_1, \dots, x_n]$. Let $X_q$ be the matrix formed by the left $q$ columns of $X$.

Regularized least squares
In a vector and kernel notation, the problem of regularized least squares can be rewritten as:

$$\min_{c \in \mathbb{R}^{n}} \frac{1}{n} \left\| Y - K c \right\|^{2} + \lambda c^{\mathsf T} K c.$$

Computing the gradient and setting it to 0, the minimum can be obtained:

$$c = (K + \lambda n I)^{-1} Y.$$
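As a quick numerical sanity check (a sketch in NumPy with a synthetic symmetric positive-definite matrix standing in for a real kernel matrix; all names are illustrative), the closed-form solution makes the gradient of the objective vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)   # synthetic symmetric positive-definite "kernel" matrix
Y = rng.standard_normal(n)
lam = 0.1

# closed-form minimizer c = (K + lam*n*I)^{-1} Y
c = np.linalg.solve(K + lam * n * np.eye(n), Y)

# gradient of (1/n)||Y - Kc||^2 + lam * c^T K c at the minimizer
grad = -(2 / n) * K @ (Y - K @ c) + 2 * lam * K @ c
```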
Substituting the Nyström approximation $\tilde{K} = \hat{K} K_{11}^{-1} \hat{K}^{\mathsf T}$ for $K$, the inverse matrix can be computed using the Woodbury matrix identity:

$$(\tilde{K} + \lambda n I)^{-1} = \frac{1}{\lambda n}\left(I - \hat{K}\left(\lambda n K_{11} + \hat{K}^{\mathsf T} \hat{K}\right)^{-1} \hat{K}^{\mathsf T}\right).$$
It has the desired storage and complexity requirements, $O(nq)$ and $O(nq^2)$ respectively.
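The Woodbury identity above can be turned into a solver that only factorizes a $q \times q$ system (a hedged sketch assuming NumPy; the function name `nystrom_rls_solve` is illustrative):

```python
import numpy as np

def nystrom_rls_solve(K_hat, K11, Y, lam):
    """Apply (K_tilde + lam*n*I)^{-1} to Y via the Woodbury identity,
    where K_tilde = K_hat @ inv(K11) @ K_hat.T is the Nystrom
    approximation. Only a q x q linear system is solved, so the cost
    is O(n q^2) rather than O(n^3).
    """
    n, q = K_hat.shape
    inner = lam * n * K11 + K_hat.T @ K_hat   # q x q matrix
    return (Y - K_hat @ np.linalg.solve(inner, K_hat.T @ Y)) / (lam * n)
```

The intermediate products involve only $n \times q$ and $q \times q$ matrices, which is where the stated storage saving comes from.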
Randomized feature maps approximation
Let $x_1, \dots, x_n$ be samples of data in $\mathbb{R}^{d}$, and let $z \colon \mathbb{R}^{d} \to \mathbb{R}^{D}$ be a randomized feature map, so that the inner product between a pair of transformed points approximates their kernel evaluation:

$$z(x)^{\mathsf T} z(y) \approx k(x, y) = \langle \Phi(x), \Phi(y) \rangle,$$

where $\Phi$ is the mapping embedded in the RBF kernel.
Since $z$ is low-dimensional, the input can be easily transformed with $z$; after that, different linear learning methods can be applied to approximate the answer of the corresponding nonlinear kernel. There are different randomized feature maps for computing approximations to the RBF kernels, for instance random Fourier features and random binning features.