Cross-validation (statistics)
Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set.
Cross-validation includes resampling and sample splitting methods that use different portions of the data to test and train a model on different iterations. It is often used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. It can also be used to assess the quality of a fitted model and the stability of its parameters.
In a prediction problem, a model is usually given a dataset of known data on which training is run, and a dataset of unknown data against which the model is tested. The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give insight into how the model will generalize to an independent dataset.
One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset, and validating the analysis on the other subset. To reduce variability, in most methods multiple rounds of cross-validation are performed using different partitions, and the validation results are combined over the rounds to give an estimate of the model's predictive performance.
In summary, cross-validation combines measures of fitness in prediction to derive a more accurate estimate of model prediction performance.
Motivation
Assume a model with one or more unknown parameters, and a data set to which the model can be fit. The fitting process optimizes the model parameters to make the model fit the training data as well as possible. If an independent sample of validation data is taken from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. The size of this difference is likely to be large especially when the size of the training data set is small, or when the number of parameters in the model is large. Cross-validation is a way to estimate the size of this effect.
Example: linear regression
In linear regression, there exist real response values y1, ..., yn, and n p-dimensional vector covariates x1, ..., xn. The components of the vector xi are denoted xi1, ..., xip. If least squares is used to fit a function in the form of a hyperplane ŷ = a + βTx to the data (xi, yi), 1 ≤ i ≤ n, then the fit can be assessed using the mean squared error (MSE). The MSE for given estimated parameter values a and β on the training set (xi, yi), 1 ≤ i ≤ n, is defined as
MSE = (1/n) Σ (yi − ŷ(xi))² = (1/n) Σ (yi − a − βTxi)², where the sum runs over i = 1, ..., n.
If the model is correctly specified, it can be shown under mild assumptions that the expected value of the MSE for the training set is (n − p − 1)/(n + p + 1) < 1 times the expected value of the MSE for the validation set. Thus, a fitted model and computed MSE on the training set will result in an optimistically biased assessment of how well the model will fit an independent data set. This biased estimate is called the in-sample estimate of the fit, whereas the cross-validation estimate is an out-of-sample estimate.
Since in linear regression it is possible to directly compute the factor (n − p − 1)/(n + p + 1) by which the training MSE underestimates the validation MSE under the assumption that the model specification is valid, cross-validation can be used for checking whether the model has been overfitted, in which case the MSE in the validation set will substantially exceed its anticipated value.
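As an illustration, the following Python sketch (assuming NumPy is available; the simulated population, the dimensions and the random seed are purely illustrative choices) fits a least-squares hyperplane on one training sample and compares the in-sample MSE with the MSE on an independent validation sample drawn from the same population. On average, the training MSE is the smaller, optimistic figure.
import numpy as np

rng = np.random.default_rng(0)        # illustrative fixed seed
n, p = 40, 10                         # small n and moderate p make the optimism visible

def mse(y, y_hat):
    return np.mean((y - y_hat) ** 2)

# simulate a training and a validation sample from the same (assumed) population
beta_true = rng.normal(size=p)
X_train = rng.normal(size=(n, p))
y_train = X_train @ beta_true + rng.normal(size=n)
X_valid = rng.normal(size=(n, p))
y_valid = X_valid @ beta_true + rng.normal(size=n)

# least-squares fit of the hyperplane yhat = a + beta^T x (intercept via a column of ones)
A_train = np.column_stack([np.ones(n), X_train])
coef, *_ = np.linalg.lstsq(A_train, y_train, rcond=None)

A_valid = np.column_stack([np.ones(n), X_valid])
print("training MSE  :", mse(y_train, A_train @ coef))   # in-sample, optimistically biased
print("validation MSE:", mse(y_valid, A_valid @ coef))   # out-of-sample, larger on average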
General case
In most other regression procedures, there is no simple formula to compute the expected out-of-sample fit. Cross-validation is thus a generally applicable way to predict the performance of a model on unavailable data using numerical computation in place of theoretical analysis.
Types
Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation.
Exhaustive cross-validation
Exhaustive cross-validation methods are cross-validation methods which learn and test on all possible ways to divide the original sample into a training and a validation set.
Leave-p-out cross-validation
Leave-p-out cross-validation (LpO CV) involves using p observations as the validation set and the remaining observations as the training set. This is repeated on all ways to cut the original sample into a validation set of p observations and a training set. LpO cross-validation requires training and validating the model C(n, p) times, where n is the number of observations in the original sample and C(n, p) is the binomial coefficient. For p > 1 and for even moderately large n, LpO CV can become computationally infeasible. For example, with n = 100 and p = 30, C(100, 30) ≈ 3 × 10²⁵ rounds of training and validation would be required.
A variant of LpO cross-validation with p = 2, known as leave-pair-out cross-validation, has been recommended as a nearly unbiased method for estimating the area under the ROC curve of binary classifiers.
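For concreteness, a small standard-library Python sketch (the helper name leave_p_out_splits is illustrative, not taken from any particular library) enumerates every way of holding out p observations and also evaluates the binomial coefficient that makes the n = 100, p = 30 example above infeasible.
from itertools import combinations
from math import comb

def leave_p_out_splits(n, p):
    # yield (training indices, validation indices) for every size-p validation set
    all_indices = set(range(n))
    for held_out in combinations(range(n), p):
        yield sorted(all_indices - set(held_out)), list(held_out)

print(comb(100, 30))                  # roughly 3e25 rounds would be needed for n = 100, p = 30

# exhaustive enumeration is only practical for small n and p, e.g. comb(5, 2) = 10 splits
for train_idx, valid_idx in leave_p_out_splits(5, 2):
    print(train_idx, valid_idx)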
Leave-one-out cross-validation
Leave-one-out cross-validation (LOO CV) is a particular case of leave-p-out cross-validation with p = 1. The process looks similar to jackknife; however, with cross-validation one computes a statistic on the left-out sample, while with jackknifing one computes a statistic from the kept samples only. LOO cross-validation requires less computation time than LpO cross-validation because there are only n passes rather than C(n, p). However, n passes may still require quite a large computation time, in which case other approaches such as k-fold cross-validation may be more appropriate.
Pseudo-code algorithm:
Input:
x, {vector of length N with x-values of incoming points}
y, {vector of length N with y-values of the expected result}
interpolate(x_in, y_in, x_out), {returns the estimate for point x_out after the model is trained with the x_in–y_in pairs}
Output:
err, {estimate of the prediction error}
Steps:
err ← 0
for i ← 1, ..., N do
  // define the cross-validation subsets
  x_in ← (x[1], ..., x[i − 1], x[i + 1], ..., x[N])
  y_in ← (y[1], ..., y[i − 1], y[i + 1], ..., y[N])
  x_out ← x[i]
  y_out ← interpolate(x_in, y_in, x_out)
  err ← err + (y[i] − y_out)^2
end for
err ← err/N
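The pseudo-code translates directly into the runnable Python sketch below (assuming NumPy; the straight-line least-squares interpolator and the tiny data vectors are illustrative stand-ins for whatever model is actually being validated).
import numpy as np

def loo_error(x, y, interpolate):
    # leave-one-out estimate of the squared prediction error, mirroring the pseudo-code
    n = len(x)
    err = 0.0
    for i in range(n):
        # define the cross-validation subsets: drop the i-th observation
        x_in = np.delete(x, i)
        y_in = np.delete(y, i)
        x_out = x[i]
        y_out = interpolate(x_in, y_in, x_out)
        err += (y[i] - y_out) ** 2
    return err / n

def line_fit(x_in, y_in, x_out):
    # illustrative interpolator: least-squares straight line through the kept points
    slope, intercept = np.polyfit(x_in, y_in, 1)
    return slope * x_out + intercept

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])
print(loo_error(x, y, line_fit))      # LOO estimate of the mean squared prediction error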
Non-exhaustive cross-validation
Non-exhaustive cross-validation methods do not compute all ways of splitting the original sample. These methods are approximations of leave-p-out cross-validation.
k-fold cross-validation
In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples, often referred to as "folds". Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used, but in general k remains an unfixed parameter.
For example, setting k = 2 results in 2-fold cross-validation. In 2-fold cross-validation, we randomly shuffle the dataset into two sets d0 and d1, so that both sets are of equal size. We then train on d0 and validate on d1, followed by training on d1 and validating on d0.
When k = n, k-fold cross-validation is equivalent to leave-one-out cross-validation.
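A minimal k-fold sketch in Python (assuming NumPy; fit_predict is a hypothetical user-supplied function that trains on the training folds and returns predictions for the validation fold) shows how each fold is used exactly once as validation data.
import numpy as np

def k_fold_mse(X, y, k, fit_predict, seed=0):
    # average validation MSE over k folds; the seed only fixes the random partition
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)   # k roughly equal-sized folds
    scores = []
    for i in range(k):
        valid_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        y_hat = fit_predict(X[train_idx], y[train_idx], X[valid_idx])
        scores.append(np.mean((y[valid_idx] - y_hat) ** 2))
    return np.mean(scores)                               # single combined estimate
Setting k equal to the number of observations in this sketch reproduces the leave-one-out case described above.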
In stratified k-fold cross-validation, the partitions are selected so that the mean response value is approximately equal in all the partitions. In the case of binary classification, this means that each partition contains roughly the same proportions of the two types of class labels.
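A minimal sketch of stratified fold assignment (assuming NumPy; the function name stratified_folds is illustrative) splits the indices of each class separately and distributes them across the k folds, so every fold keeps roughly the original class proportions.
import numpy as np

def stratified_folds(labels, k, seed=0):
    # assign each index to a fold so that every fold has roughly the same class proportions
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    fold_of = np.empty(len(labels), dtype=int)
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.flatnonzero(labels == cls))
        for fold, chunk in enumerate(np.array_split(cls_idx, k)):
            fold_of[chunk] = fold
    return fold_of            # fold_of[i] is the fold in which observation i is validated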
In repeated cross-validation the data is randomly split into k partitions several times. The performance of the model can thereby be averaged over several runs, but this is rarely desirable in practice.
When many different statistical or machine learning models are being considered, greedy k-fold cross-validation can be used to quickly identify the most promising candidate models.
Holdout method
In the holdout method, we randomly assign data points to two sets d0 and d1, usually called the training set and the test set, respectively. The size of each of the sets is arbitrary, although typically the test set is smaller than the training set. We then train on d0 and test on d1.
In typical cross-validation, results of multiple runs of model-testing are averaged together; in contrast, the holdout method, in isolation, involves a single run. It should be used with caution because without such averaging of multiple runs, one may achieve highly misleading results. One's indicator of predictive accuracy will tend to be unstable since it will not be smoothed out by multiple iterations. Similarly, indicators of the specific role played by various predictor variables will tend to be unstable.
While the holdout method can be framed as "the simplest kind of cross-validation", many sources instead classify holdout as a type of simple validation, rather than a simple or degenerate form of cross-validation.
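A minimal holdout sketch (assuming NumPy; the 25% test fraction and the fixed seed are arbitrary illustrative choices) performs the single random split described above; a similar ready-made helper is provided, for example, by scikit-learn's train_test_split.
import numpy as np

def holdout_split(X, y, test_fraction=0.25, seed=0):
    # one random split into a training set d0 and a test set d1; no averaging over runs
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_test = int(round(test_fraction * len(y)))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return (X[train_idx], y[train_idx]), (X[test_idx], y[test_idx])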
Repeated random sub-sampling validation
This method, also known as Monte Carlo cross-validation, creates multiple random splits of the dataset into training and validation data. For each such split, the model is fit to the training data, and predictive accuracy is assessed using the validation data. The results are then averaged over the splits. The advantage of this method is that the proportion of the training/validation split is not dependent on the number of iterations. The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap. This method also exhibits Monte Carlo variation, meaning that the results will vary if the analysis is repeated with different random splits.
As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation.
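A sketch of the unstratified variant (assuming NumPy; fit_predict is again a hypothetical user-supplied training-and-prediction function, and the number of splits and test fraction are illustrative) averages the validation MSE over many independent random splits.
import numpy as np

def monte_carlo_cv(X, y, fit_predict, n_splits=100, test_fraction=0.25, seed=0):
    # repeated random sub-sampling: average validation MSE over n_splits random splits
    rng = np.random.default_rng(seed)
    n = len(y)
    n_test = int(round(test_fraction * n))
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(n)                  # a fresh random split on every iteration
        test_idx, train_idx = idx[:n_test], idx[n_test:]
        y_hat = fit_predict(X[train_idx], y[train_idx], X[test_idx])
        scores.append(np.mean((y[test_idx] - y_hat) ** 2))
    return np.mean(scores)                        # Monte Carlo variation shrinks as n_splits grows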
In a stratified variant of this approach, the random samples are generated in such a way that the mean response value is equal in the training and testing sets. This is particularly useful if the responses are dichotomous with an unbalanced representation of the two response values in the data.
A method that applies repeated random sub-sampling is RANSAC.