Coefficient of determination


In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable.
It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.
There are several definitions of R2 that are only sometimes equivalent. In simple linear regression, r2 is simply the square of the sample correlation coefficient between the observed outcomes and the observed predictor values. If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.
There are cases where R2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept, or when a non-linear function is used to fit the data. In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.
The coefficient of determination can be more intuitively informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits compared to SMAPE on certain test datasets.
When evaluating the goodness-of-fit of simulated versus measured values, it is not appropriate to base this on the R2 of the linear regression. The R2 quantifies the degree of any linear correlation between $Y_\text{obs}$ and $Y_\text{pred}$, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: $Y_\text{obs} = 1 \cdot Y_\text{pred} + 0$.
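To make this concrete, the following minimal sketch (hypothetical numbers, using NumPy) shows simulated values that correlate almost perfectly with the observations yet fit the 1:1 line poorly, because they are systematically biased:

```python
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(10.0, 2.0, size=200)                      # measured values
y_pred = 0.5 * y_obs + 3.0 + rng.normal(0.0, 0.2, size=200)  # biased simulation

# Squared correlation between y_obs and y_pred: near 1 despite the bias
print(np.corrcoef(y_obs, y_pred)[0, 1] ** 2)

# R^2 against the line y_obs = 1 * y_pred + 0: penalizes the systematic bias
ss_res = np.sum((y_obs - y_pred) ** 2)
ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)
print(1 - ss_res / ss_tot)
```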

Definitions

A data set has n values marked $y_1, \ldots, y_n$, each associated with a fitted (or modeled) value $f_1, \ldots, f_n$.
Define the residuals as $e_i = y_i - f_i$.
If $\bar{y}$ is the mean of the observed data,
$$\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i,$$
then the variability of the data set can be measured with two sums of squares formulas: the residual sum of squares, $SS_\text{res} = \sum_i (y_i - f_i)^2 = \sum_i e_i^2$, and the total sum of squares (proportional to the variance of the data), $SS_\text{tot} = \sum_i (y_i - \bar{y})^2$.
The most general definition of the coefficient of determination is
$$R^2 = 1 - \frac{SS_\text{res}}{SS_\text{tot}}.$$
In the best case, the modeled values exactly match the observed values, which results in $SS_\text{res} = 0$ and $R^2 = 1$. A baseline model, which always predicts $\bar{y}$, will have $R^2 = 0$.
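As a concrete illustration of these definitions, the following sketch (hypothetical observed and fitted values, using NumPy) computes $SS_\text{res}$, $SS_\text{tot}$, and $R^2$ directly:

```python
import numpy as np

# Hypothetical observed values and the values a model fitted for them
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([1.1, 1.9, 3.2, 3.9, 4.8])

residuals = y - f                      # e_i = y_i - f_i
ss_res = np.sum(residuals ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)   # total sum of squares

r_squared = 1.0 - ss_res / ss_tot      # most general definition of R^2
print(r_squared)

# Limiting cases described above: a perfect model (f == y) gives R^2 = 1;
# a baseline model that always predicts the mean gives R^2 = 0.
assert np.isclose(1.0 - 0.0 / ss_tot, 1.0)
assert np.isclose(1.0 - ss_tot / ss_tot, 0.0)
```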

Relation to unexplained variance

In a general form, R2 can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (the variance of the model's errors) with the total variance (of the data):
$$R^2 = 1 - \text{FVU} = 1 - \frac{SS_\text{res}}{SS_\text{tot}}.$$

As explained variance

A larger value of R2 implies a more successful regression model.
Suppose $R^2 = 0.49$. This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for.
For regression models, the regression sum of squares, also called the explained sum of squares, is defined as
$$SS_\text{reg} = \sum_i (f_i - \bar{y})^2.$$
In some cases, as in simple linear regression, the total sum of squares equals the sum of the two other sums of squares defined above:
$$SS_\text{res} + SS_\text{reg} = SS_\text{tot}.$$
See Partitioning in the general OLS model for a derivation of this result for one case where the relation holds. When this relation does hold, the above definition of R2 is equivalent to
$$R^2 = \frac{SS_\text{reg}}{SS_\text{tot}} = \frac{SS_\text{reg}/n}{SS_\text{tot}/n},$$
where n is the number of observations on the variables.
In this form R2 is expressed as the ratio of the explained variance to the total variance.
This partition of the sum of squares holds for instance when the model values $f_i$ have been obtained by linear regression. A milder sufficient condition reads as follows: The model has the form
$$f_i = \hat{\alpha} + \hat{\beta} q_i,$$
where the $q_i$ are arbitrary values that may or may not depend on i or on other free parameters, and the coefficient estimates $\hat{\alpha}$ and $\hat{\beta}$ are obtained by minimizing the residual sum of squares.
This set of conditions is an important one and it has a number of implications for the properties of the fitted residuals and the modelled values. In particular, under these conditions:
$$\bar{f} = \bar{y}.$$
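The sketch below (a hypothetical check using NumPy's least-squares straight-line fit) illustrates that, for a fit with an intercept, the partition $SS_\text{res} + SS_\text{reg} = SS_\text{tot}$ holds and the mean of the fitted values equals the mean of the data:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, size=x.size)   # hypothetical data

# Ordinary least-squares straight-line fit (includes an intercept)
beta, alpha = np.polyfit(x, y, deg=1)
f = alpha + beta * x

ss_res = np.sum((y - f) ** 2)
ss_reg = np.sum((f - y.mean()) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)

# Partition of the total sum of squares, and equality of the means
assert np.isclose(ss_res + ss_reg, ss_tot)
assert np.isclose(f.mean(), y.mean())
print("R^2 =", 1 - ss_res / ss_tot, "=", ss_reg / ss_tot)
```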

As squared correlation coefficient

In linear least squares multiple regression (with a fitted intercept term), R2 equals the square of the Pearson correlation coefficient between the observed and modeled data values of the dependent variable.
In a linear least squares regression with a single explanatory variable and an intercept, this is also equal to the squared Pearson correlation coefficient between the dependent variable $y$ and the explanatory variable $x$.
It should not be confused with the correlation coefficient between two coefficient estimates, defined as
$$r_{\hat{\alpha},\hat{\beta}} = \frac{\operatorname{cov}(\hat{\alpha},\hat{\beta})}{\sigma_{\hat{\alpha}}\,\sigma_{\hat{\beta}}},$$
where the covariance between the two coefficient estimates, as well as their standard deviations, are obtained from the covariance matrix of the coefficient estimates, $\sigma^2 (X^{\mathsf{T}} X)^{-1}$.
Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, an R2 value can be calculated as the square of the correlation coefficient between the original and modeled data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values. According to Everitt, this usage is specifically the definition of the term "coefficient of determination": the square of the correlation between two variables.
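As an illustrative sketch (hypothetical data, using NumPy), the squared Pearson correlation reproduces $R^2$ for a single-regressor least-squares fit with an intercept, both between observed and fitted values and between the response and the regressor:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 5, 40)
y = 3.0 * x + 2.0 + rng.normal(0, 1.0, size=x.size)

# Single-regressor least-squares fit with an intercept
slope, intercept = np.polyfit(x, y, 1)
f = intercept + slope * x

r2 = 1 - np.sum((y - f) ** 2) / np.sum((y - y.mean()) ** 2)

# R^2 equals the squared correlation between observed and fitted values,
# and (with a single regressor) between the response and the regressor.
assert np.isclose(r2, np.corrcoef(y, f)[0, 1] ** 2)
assert np.isclose(r2, np.corrcoef(y, x)[0, 1] ** 2)
print(r2)
```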

Interpretation

R2 is a measure of the goodness of fit of a model. In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data.
Values of R2 outside the range 0 to 1 occur when the model fits the data worse than a horizontal hyperplane at the mean of the observed data, that is, worse than simply predicting the mean. This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth is used, R2 can be less than zero. If equation 2 of Kvålseth is used, R2 can be greater than one.
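A minimal sketch of a negative R2, using deliberately bad hypothetical predictions that reverse the trend of the data:

```python
import numpy as np

y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])     # hypothetical observations
bad = np.array([10.0, 8.0, 6.0, 4.0, 2.0])   # predictions with a reversed trend

ss_res = np.sum((y - bad) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(1 - ss_res / ss_tot)   # -3.0: worse than just predicting the mean
```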
When the predictors are calculated by ordinary least-squares regression (that is, by minimizing SSres), R2 never decreases as the number of variables in the model is increased. This illustrates a drawback to one possible use of R2, where one might keep adding variables to increase the R2 value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one could also include plainly irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car, because the R2 will never decrease as variables are added and will likely increase due to chance alone.
This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as R2 but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the R2 statistic can be calculated as above and may still be a useful measure. If fitting is by weighted least squares or generalized least squares, alternative versions of R2 can be calculated appropriate to those statistical frameworks, while the "raw" R2 may still be useful if it is more easily interpreted. Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis.
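A minimal sketch of the usual adjustment, assuming n observations and p regressors and using the common formula $\bar{R}^2 = 1 - (1 - R^2)\,(n - 1)/(n - p - 1)$:

```python
def adjusted_r_squared(r_squared: float, n: int, p: int) -> float:
    """Penalize R^2 for the number of regressors p, given n observations."""
    return 1 - (1 - r_squared) * (n - 1) / (n - p - 1)

# The same raw R^2 looks worse when achieved with more explanatory variables.
print(adjusted_r_squared(0.70, n=50, p=3))   # ~0.680
print(adjusted_r_squared(0.70, n=50, p=10))  # ~0.623
```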

In a multiple linear model

Consider a linear model with more than a single explanatory variable, of the form
$$Y_i = \beta_0 + \sum_{j=1}^{p} \beta_j X_{i,j} + \varepsilon_i,$$
where, for the ith case, $Y_i$ is the response variable, $X_{i,1}, \ldots, X_{i,p}$ are p regressors, and $\varepsilon_i$ is a mean zero error term. The quantities $\beta_0, \ldots, \beta_p$ are unknown coefficients, whose values are estimated by least squares. The coefficient of determination R2 is a measure of the global fit of the model. Specifically, R2 is an element of $[0, 1]$ and represents the proportion of variability in $Y_i$ that may be attributed to some linear combination of the regressors in X.
R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, R2 = 1 indicates that the fitted model explains all variability in $Y$, while R2 = 0 indicates no 'linear' relationship. An interior value such as R2 = 0.7 may be interpreted as follows: "Seventy percent of the variance in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability."
A caution that applies to R2, as to other statistical descriptions of correlation and association is that "correlation does not imply causation". In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches is correlated with incidence of lung cancer, but carrying matches does not cause cancer.
In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the R2 can be referred to as the coefficient of multiple determination.
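As a closing sketch (hypothetical data, using NumPy's least-squares solver), the coefficient of multiple determination of a two-regressor model equals the squared correlation between the response and the constructed predictor $\hat{Y}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.5 * x2 + rng.normal(0, 0.5, size=n)

# Design matrix with an intercept column, fitted by least squares
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# Coefficient of multiple determination = squared correlation between
# the response and the fitted (constructed) predictor.
assert np.isclose(r2, np.corrcoef(y, y_hat)[0, 1] ** 2)
print(r2)
```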