K-nearest neighbors algorithm


In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method. It was first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover.
Most often, it is used for classification, as a k-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors. If k = 1, then the object is simply assigned to the class of that single nearest neighbor.
The k-NN algorithm can also be generalized for regression. In k-NN regression, also known as nearest neighbor smoothing, the output is the property value for the object. This value is the average of the values of the k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor, also known as nearest neighbor interpolation.
For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, where d is the distance to the neighbor.
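As a minimal illustration of inverse-distance weighting in k-NN regression, the sketch below (Python with NumPy) averages the values of the k nearest neighbors, optionally weighting each by 1/d; the function name knn_regress and the small epsilon guarding against division by zero are illustrative choices, not part of the algorithm itself.

    import numpy as np

    def knn_regress(X_train, y_train, x, k=5, weighted=True):
        """Predict a value for query point x from its k nearest neighbors."""
        # Euclidean distance from the query point to every stored example.
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]          # indices of the k closest points
        if not weighted:
            return y_train[nearest].mean()       # plain k-NN average
        # Inverse-distance weights (1/d); epsilon avoids division by zero when
        # the query coincides with a training point.
        w = 1.0 / (dists[nearest] + 1e-12)
        return np.average(y_train[nearest], weights=w)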
The input consists of the k closest training examples in a data set.
The neighbors are taken from a set of objects for which the class or the object property value is known. This can be thought of as the training set for the algorithm, though no explicit training step is required.
A peculiarity of the k-NN algorithm is its sensitivity to the local structure of the data.
In k-NN classification the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wise normalizing of the training data can greatly improve its accuracy.
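For example, one common (though not the only) way to put features on comparable scales before running k-NN is to standardize each feature using statistics computed on the training data; the helper below is a hypothetical sketch of this step.

    import numpy as np

    def standardize(X_train, X_query):
        """Scale each feature to zero mean and unit variance, using training statistics only."""
        mu = X_train.mean(axis=0)
        sigma = X_train.std(axis=0)
        sigma[sigma == 0] = 1.0                  # leave constant features unchanged
        return (X_train - mu) / sigma, (X_query - mu) / sigma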

Statistical setting

Suppose we have pairs $(X_1, Y_1), (X_2, Y_2), \dots, (X_n, Y_n)$ taking values in $\mathbb{R}^d \times \{1, 2\}$, where $Y$ is the class label of $X$, so that $X \mid Y = r \sim P_r$ for $r = 1, 2$ (and probability distributions $P_r$). Given some norm $\|\cdot\|$ on $\mathbb{R}^d$ and a point $x \in \mathbb{R}^d$, let $(X_{(1)}, Y_{(1)}), \dots, (X_{(n)}, Y_{(n)})$ be a reordering of the training data such that $\|X_{(1)} - x\| \le \dots \le \|X_{(n)} - x\|$.

Algorithm

The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples.
In the classification phase, k is a user-defined constant, and an unlabeled vector is classified by assigning the label which is most frequent among the k training samples nearest to that query point.
A commonly used distance metric for continuous variables is Euclidean distance. For discrete variables, such as for text classification, another metric can be used, such as the overlap metric. In the context of gene expression microarray data, for example, k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric. Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as large margin nearest neighbor or neighborhood components analysis.
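A minimal sketch of the classification phase is given below; it uses Euclidean distance by default, with Manhattan distance included only as a simple stand-in for the alternative metrics mentioned above (the overlap metric and correlation-based metrics are not implemented here).

    import numpy as np
    from collections import Counter

    def knn_classify(X_train, y_train, x, k=5, metric="euclidean"):
        """Classify query point x by a plurality vote among its k nearest neighbors."""
        if metric == "euclidean":
            dists = np.linalg.norm(X_train - x, axis=1)
        elif metric == "manhattan":
            dists = np.abs(X_train - x).sum(axis=1)
        else:
            raise ValueError("unsupported metric in this sketch")
        nearest = np.argsort(dists)[:k]
        votes = Counter(y_train[nearest].tolist())
        return votes.most_common(1)[0][0]        # most frequent class among the k neighbors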
A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among the k nearest neighbors due to their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors. The class of each of the k nearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in a self-organizing map, each node is a representative of a cluster of similar points, regardless of their density in the original training data. k-NN can then be applied to the SOM.
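The distance-weighted vote described above can be sketched as follows; the 1/d weights and the epsilon guard are one common choice rather than the only one.

    import numpy as np
    from collections import defaultdict

    def weighted_knn_classify(X_train, y_train, x, k=5):
        """Distance-weighted vote: each neighbor's class receives weight 1/d."""
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]
        scores = defaultdict(float)
        for i in nearest:
            scores[y_train[i]] += 1.0 / (dists[i] + 1e-12)   # closer neighbors count more
        return max(scores, key=scores.get)       # class with the largest total weight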

Parameter selection

The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification, but make the boundaries between classes less distinct. A good k can be selected by various heuristic techniques. The special case where the class is predicted to be the class of the closest training sample is called the nearest neighbor algorithm.
The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling. Another popular approach is to scale features by the mutual information of the training data with the training classes.
In binary classification problems, it is helpful to choose k to be an odd number, as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method.
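In practice k is often chosen by resampling; the sketch below uses scikit-learn's cross-validated grid search over odd values of k as a stand-in for the bootstrap procedure mentioned above (X_train and y_train are assumed to be given).

    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    # Search odd values of k only, to avoid tied votes in binary classification.
    param_grid = {"n_neighbors": [1, 3, 5, 7, 9, 11, 13, 15]}
    search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
    search.fit(X_train, y_train)
    best_k = search.best_params_["n_neighbors"]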

The 1-nearest neighbor classifier

The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier, which assigns a point $x$ to the class of its closest neighbour in the feature space, that is, $C_n^{1\mathrm{nn}}(x) = Y_{(1)}$.
As the size of the training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate.

The weighted nearest neighbour classifier

The $k$-nearest neighbour classifier can be viewed as assigning the $k$ nearest neighbours a weight $1/k$ and all others a weight of $0$. This can be generalised to weighted nearest neighbour classifiers, in which the $i$th nearest neighbour is assigned a weight $w_{ni}$, with $\sum_{i=1}^{n} w_{ni} = 1$. An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds.
Let $C_n^{\mathrm{wnn}}$ denote the weighted nearest neighbour classifier with weights $\{w_{ni}\}_{i=1}^{n}$. Subject to regularity conditions on the class distributions, the excess risk has the following asymptotic expansion
$$\mathcal{R}(C_n^{\mathrm{wnn}}) - \mathcal{R}(C^{\mathrm{Bayes}}) = \left(B_1 s_n^2 + B_2 t_n^2\right)\{1 + o(1)\},$$
for constants $B_1$ and $B_2$, where $s_n^2 = \sum_{i=1}^{n} w_{ni}^2$ and $t_n = n^{-2/d} \sum_{i=1}^{n} w_{ni}\left\{i^{1+2/d} - (i-1)^{1+2/d}\right\}$.
The optimal weighting scheme $\{w_{ni}^{*}\}_{i=1}^{n}$, which balances the two terms in the display above, is given as follows: set $k^{*} = \lfloor B n^{4/(d+4)} \rfloor$,
$$w_{ni}^{*} = \frac{1}{k^{*}}\left[1 + \frac{d}{2} - \frac{d}{2 (k^{*})^{2/d}}\left\{i^{1+2/d} - (i-1)^{1+2/d}\right\}\right] \quad \text{for } i = 1, 2, \dots, k^{*}, \text{ and}$$
$$w_{ni}^{*} = 0 \quad \text{for } i = k^{*}+1, \dots, n.$$
With optimal weights the dominant term in the asymptotic expansion of the excess risk is $\mathcal{O}(n^{-4/(d+4)})$. Similar results hold when using a bagged nearest neighbour classifier.
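The optimal weights above can be computed directly; the sketch below implements the formula, with the unspecified constant $B$ set to 1 purely for illustration.

    import numpy as np

    def optimal_weights(n, d, B=1.0):
        """Optimal weights w*_{ni} from the asymptotic expansion above (B = 1 is illustrative)."""
        k_star = int(np.floor(B * n ** (4.0 / (d + 4))))
        i = np.arange(1, n + 1, dtype=float)
        bracket = i ** (1 + 2.0 / d) - (i - 1) ** (1 + 2.0 / d)
        w = np.zeros(n)
        w[:k_star] = (1.0 / k_star) * (
            1 + d / 2.0 - (d / (2.0 * k_star ** (2.0 / d))) * bracket[:k_star]
        )
        return w                                 # weights for i > k* are zero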

Properties

k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel.
The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed.
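As one concrete example of a search structure that avoids computing the distance to every stored example, the sketch below builds a k-d tree with SciPy; tree-based search is exact, while approximate methods trade some accuracy for further speed (X_train and x_query are assumed to be given).

    from scipy.spatial import cKDTree

    tree = cKDTree(X_train)                      # built once over the stored training vectors
    dists, idx = tree.query(x_query, k=5)        # distances and indices of the 5 nearest neighbors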
k-NN has some strong consistency results. As the amount of data approaches infinity, the two-class k-NN algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate. Various improvements to the k-NN speed are possible by using proximity graphs.
For multi-class k-NN classification, Cover and Hart prove an upper bound error rate of
$$R^{*} \le R_{k\mathrm{NN}} \le R^{*}\left(2 - \frac{M R^{*}}{M-1}\right),$$
where $R^{*}$ is the Bayes error rate (the minimal error rate possible), $R_{k\mathrm{NN}}$ is the asymptotic k-NN error rate, and $M$ is the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution. For $M = 2$ and as the Bayesian error rate approaches zero, this limit reduces to "not more than twice the Bayesian error rate".
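For instance, substituting $M = 2$ into the bound makes this reduction explicit:
$$R^{*} \le R_{k\mathrm{NN}} \le R^{*}\left(2 - \frac{2R^{*}}{2-1}\right) = 2R^{*}(1 - R^{*}) \le 2R^{*},$$
and the upper bound approaches $2R^{*}$ as $R^{*} \to 0$.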

Error rates

There are many results on the error rate of nearest neighbour classifiers. The $k$-nearest neighbour classifier is strongly consistent provided $k := k_n$ diverges and $k_n / n$ converges to zero as $n \to \infty$.
Let $C_n^{k\mathrm{nn}}$ denote the $k$ nearest neighbour classifier based on a training set of size $n$. Under certain regularity conditions, the excess risk yields the following asymptotic expansion
$$\mathcal{R}(C_n^{k\mathrm{nn}}) - \mathcal{R}(C^{\mathrm{Bayes}}) = \left\{B_1 \frac{1}{k} + B_2 \left(\frac{k}{n}\right)^{4/d}\right\}\{1 + o(1)\},$$
for some constants $B_1$ and $B_2$.
The choice $k^{*} = \lfloor B n^{4/(d+4)} \rfloor$ offers a trade-off between the two terms in the above display, for which the $k^{*}$-nearest neighbour error converges to the Bayes error at the optimal rate $\mathcal{O}(n^{-4/(d+4)})$.

Metric learning

The k-nearest neighbor classification performance can often be significantly improved through metric learning. Popular algorithms are neighbourhood components analysis and large margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a new metric or pseudo-metric.
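As a usage sketch (assuming scikit-learn and pre-existing X_train and y_train arrays), neighbourhood components analysis can be chained with a k-NN classifier so that distances are computed in the learned space:

    from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
    from sklearn.pipeline import Pipeline

    # Learn a linear transformation with NCA, then run k-NN in the transformed space.
    model = Pipeline([
        ("nca", NeighborhoodComponentsAnalysis(random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ])
    model.fit(X_train, y_train)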

Feature extraction

When the input data to an algorithm is too large to be processed and is suspected to be redundant, it can be transformed into a reduced set of features. Transforming the input data into this set of features is called feature extraction. If the extracted features are carefully chosen, the feature set is expected to capture the relevant information from the input data, so that the desired task can be performed using the reduced representation instead of the full-size input. Feature extraction is performed on raw data prior to applying the k-NN algorithm in the transformed feature space.
An example of a typical computer vision pipeline for face recognition using k-NN, including feature extraction and dimensionality reduction pre-processing steps (a code sketch of the final step follows the list):
  1. Haar face detection
  2. Mean-shift tracking analysis
  3. PCA or Fisher LDA projection into feature space, followed by k-NN classification
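A hypothetical sketch of step 3, assuming scikit-learn and face crops already flattened into vectors X_faces with identity labels y_ids; the numbers of components and neighbours are illustrative:

    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import Pipeline

    # Project the face vectors with PCA, then classify the projected features with k-NN.
    recognizer = Pipeline([
        ("pca", PCA(n_components=100)),
        ("knn", KNeighborsClassifier(n_neighbors=3)),
    ])
    recognizer.fit(X_faces, y_ids)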