In statistics, the coefficient of determination, denoted R2 or r2 and pronounced "R squared", is the proportion of the variation in the dependent variable that is predictable from the independent variable(s).
It is a statistic used in the context of statistical models whose main purpose is either the prediction of future outcomes or the testing of hypotheses, on the basis of other related information. It provides a measure of how well observed outcomes are replicated by the model, based on the proportion of total variation of outcomes explained by the model.[1] [2] [3]
There are several definitions of R2 that are only sometimes equivalent. One class of such cases includes that of simple linear regression where r2 is used instead of R2. When only an intercept is included, then r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values.[4] If additional regressors are included, R2 is the square of the coefficient of multiple correlation. In both such cases, the coefficient of determination normally ranges from 0 to 1.
There are cases where R2 can yield negative values. This can arise when the predictions that are being compared to the corresponding outcomes have not been derived from a model-fitting procedure using those data. Even if a model-fitting procedure has been used, R2 may still be negative, for example when linear regression is conducted without including an intercept,[5] or when a non-linear function is used to fit the data.[6] In cases where negative values arise, the mean of the data provides a better fit to the outcomes than do the fitted function values, according to this particular criterion.
The coefficient of determination can be more intuitively informative than MAE, MAPE, MSE, and RMSE in regression analysis evaluation, as the former can be expressed as a percentage, whereas the latter measures have arbitrary ranges. It also proved more robust for poor fits than SMAPE on the test datasets used in that study.[7]
When evaluating the goodness-of-fit of simulated (Ypred) vs. measured (Yobs) values, it is not appropriate to base this on the R2 of the linear regression (i.e., Yobs= m·Ypred + b). The R2 quantifies the degree of any linear correlation between Yobs and Ypred, while for the goodness-of-fit evaluation only one specific linear correlation should be taken into consideration: Yobs = 1·Ypred + 0 (i.e., the 1:1 line).[8] [9]
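This distinction can be illustrated with a short numerical sketch (the values are invented for illustration, not taken from the cited references): a simulation that is perfectly correlated with the observations but biased and mis-scaled yields a high regression R2, while the same formula evaluated around the 1:1 line reveals the disagreement.

```python
import numpy as np

# Hypothetical observed and simulated values: the simulation is perfectly
# correlated with the observations but biased and mis-scaled.
y_obs = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y_pred = np.array([3.0, 4.0, 5.0, 6.0, 7.0])   # exactly 0.5*y_obs + 2

# R^2 of the linear regression y_obs = m*y_pred + b: only measures correlation.
r = np.corrcoef(y_obs, y_pred)[0, 1]
r2_regression = r**2                      # equals 1.0 here despite the bias

# Agreement with the 1:1 line (y_obs = 1*y_pred + 0), using the same R^2 formula
# but with the raw prediction errors as residuals.
ss_res_11 = np.sum((y_obs - y_pred)**2)
ss_tot = np.sum((y_obs - y_obs.mean())**2)
r2_one_to_one = 1 - ss_res_11 / ss_tot    # noticeably lower, reflecting the poor agreement

print(r2_regression, r2_one_to_one)
```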
A data set has n values marked y1, ..., yn (collectively known as yi or as a vector y = [y1, ..., yn]T), each associated with a fitted (or modeled, or predicted) value f1, ..., fn (known as fi, or sometimes ŷi, as a vector f).
Define the residuals as e_i = y_i - f_i (forming a vector e).
If \bar{y} is the mean of the observed data,
\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i,
then the variability of the data set can be measured with two sums of squares formulas: the sum of squares of residuals, also called the residual sum of squares,
SS_{\text{res}} = \sum_i (y_i - f_i)^2 = \sum_i e_i^2,
and the total sum of squares (proportional to the variance of the data),
SS_{\text{tot}} = \sum_i (y_i - \bar{y})^2.
The most general definition of the coefficient of determination is
R^2 = 1 - \frac{SS_{\text{res}}}{SS_{\text{tot}}}.
In the best case, the modeled values exactly match the observed values, which results in SS_{\text{res}} = 0 and R^2 = 1. A baseline model, which always predicts \bar{y}, will have R^2 = 0.
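As a minimal sketch of this definition (with made-up observed and fitted values), R2 can be computed directly from the two sums of squares:

```python
import numpy as np

# Illustrative observed values and fitted values from some model.
y = np.array([3.0, 5.0, 7.0, 9.0])
f = np.array([2.8, 5.3, 6.9, 9.1])

e = y - f                                  # residuals
ss_res = np.sum(e**2)                      # sum of squares of residuals
ss_tot = np.sum((y - y.mean())**2)         # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(r_squared)                           # close to 1 for a good fit
```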
See main article: Fraction of variance unexplained. In a general form, R2 can be seen to be related to the fraction of variance unexplained (FVU), since the second term compares the unexplained variance (variance of the model's errors) with the total variance (of the data):
R^2 = 1 - \text{FVU}.
A larger value of R2 implies a more successful regression model.[4] Suppose R2 = 0.49. This implies that 49% of the variability of the dependent variable in the data set has been accounted for, and the remaining 51% of the variability is still unaccounted for. For regression models, the regression sum of squares, also called the explained sum of squares, is defined as
SS_{\text{reg}} = \sum_i (f_i - \bar{y})^2.
In some cases, as in simple linear regression, the total sum of squares equals the sum of the two other sums of squares defined above:
SS_{\text{res}} + SS_{\text{reg}} = SS_{\text{tot}}.
See Partitioning in the general OLS model for a derivation of this result for one case where the relation holds. When this relation does hold, the above definition of R2 is equivalent to
R^2 = \frac{SS_{\text{reg}}}{SS_{\text{tot}}} = \frac{SS_{\text{reg}}/n}{SS_{\text{tot}}/n}.
In this form R2 is expressed as the ratio of the explained variance (variance of the model's predictions, which is SS_{\text{reg}}/n) to the total variance (sample variance of the dependent variable, which is SS_{\text{tot}}/n).
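The partition and the resulting explained-variance form can be checked numerically; the sketch below fits a straight line with an intercept to illustrative data using numpy.

```python
import numpy as np

# Illustrative data; a straight-line least-squares fit with an intercept.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
slope, intercept = np.polyfit(x, y, 1)
f = slope * x + intercept

ss_res = np.sum((y - f)**2)
ss_reg = np.sum((f - y.mean())**2)
ss_tot = np.sum((y - y.mean())**2)

# The partition SS_res + SS_reg = SS_tot holds for this kind of fit,
# so both expressions for R^2 agree.
print(np.isclose(ss_res + ss_reg, ss_tot))
print(1 - ss_res / ss_tot, ss_reg / ss_tot)
```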
This partition of the sum of squares holds for instance when the model values ƒi have been obtained by linear regression. A milder sufficient condition reads as follows: The model has the form
f_i = \widehat{\alpha} + \widehat{\beta} q_i,
where the q_i are arbitrary values that may or may not depend on i or on other free parameters (the common choice q_i = x_i is just one special case), and the coefficient estimates \widehat{\alpha} and \widehat{\beta} are obtained by minimizing the residual sum of squares.
This set of conditions is an important one and it has a number of implications for the properties of the fitted residuals and the modelled values. In particular, under these conditions:
\bar{f}=\bar{y}.
In linear least squares multiple regression with an estimated intercept term, R2 equals the square of the Pearson correlation coefficient between the observed y and modeled (predicted) f data values of the dependent variable.
In a linear least squares regression with an intercept term and a single explanator, this is also equal to the squared Pearson correlation coefficient of the dependent variable y and the explanatory variable x.
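Both equalities can be verified numerically for a simple least-squares fit with an intercept; the data below are illustrative.

```python
import numpy as np

# Illustrative data for a simple linear regression with an intercept.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 2.3, 2.9, 4.4, 4.9, 6.3])

slope, intercept = np.polyfit(x, y, 1)
f = slope * x + intercept

r_squared = 1 - np.sum((y - f)**2) / np.sum((y - y.mean())**2)
r_xy = np.corrcoef(x, y)[0, 1]             # Pearson correlation of x and y
r_yf = np.corrcoef(y, f)[0, 1]             # Pearson correlation of y and f

# All three quantities coincide for this model.
print(r_squared, r_xy**2, r_yf**2)
```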
The R2 described above should not be confused with the correlation coefficient between two coefficient estimates, defined as
\rho_{\widehat{\alpha},\widehat{\beta}} = \frac{\operatorname{cov}\left(\widehat{\alpha},\widehat{\beta}\right)}{\sigma_{\widehat{\alpha}} \sigma_{\widehat{\beta}}},
where the covariance between the two coefficient estimates, as well as their standard deviations, are obtained from the covariance matrix of the coefficient estimates, which is proportional to (X^{\mathsf T}X)^{-1}.
Under more general modeling conditions, where the predicted values might be generated from a model different from linear least squares regression, an R2 value can be calculated as the square of the correlation coefficient between the original y and modeled f data values. In this case, the value is not directly a measure of how good the modeled values are, but rather a measure of how good a predictor might be constructed from the modeled values.
R2 is a measure of the goodness of fit of a model.[11] In regression, the R2 coefficient of determination is a statistical measure of how well the regression predictions approximate the real data points. An R2 of 1 indicates that the regression predictions perfectly fit the data.
Values of R2 outside the range 0 to 1 occur when the model fits the data worse than the worst possible least-squares predictor (equivalent to a horizontal hyperplane at a height equal to the mean of the observed data). This occurs when a wrong model was chosen, or nonsensical constraints were applied by mistake. If equation 1 of Kvålseth[12] is used (this is the equation used most often), R2 can be less than zero. If equation 2 of Kvålseth is used, R2 can be greater than one.
When the predictors are calculated by ordinary least-squares regression (that is, by minimizing SS_res), R2 increases as the number of variables in the model is increased (R2 is monotone increasing with the number of variables included; it will never decrease). This illustrates a drawback to one possible use of R2, where one might keep adding variables (kitchen sink regression) to increase the R2 value. For example, if one is trying to predict the sales of a model of car from the car's gas mileage, price, and engine power, one can include probably irrelevant factors such as the first letter of the model's name or the height of the lead engineer designing the car, because the R2 will never decrease as variables are added and will likely increase due to chance alone.
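A small simulation (made-up data; the extra column is pure noise) illustrates that R2 cannot decrease when a regressor is added:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)          # y depends on x1 only
junk = rng.normal(size=n)                  # irrelevant regressor

def r_squared(design, y):
    """R^2 of an OLS fit of y on the given columns, with an intercept."""
    X = np.column_stack([np.ones(len(y)), *design])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    f = X @ beta
    return 1 - np.sum((y - f)**2) / np.sum((y - y.mean())**2)

r2_small = r_squared([x1], y)
r2_large = r_squared([x1, junk], y)
print(r2_small, r2_large, r2_large >= r2_small)   # the larger model never fits worse
```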
This leads to the alternative approach of looking at the adjusted R2. The explanation of this statistic is almost the same as R2 but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the R2 statistic can be calculated as above and may still be a useful measure. If fitting is by weighted least squares or generalized least squares, alternative versions of R2 can be calculated appropriate to those statistical frameworks, while the "raw" R2 may still be useful if it is more easily interpreted. Values for R2 can be calculated for any type of predictive model, which need not have a statistical basis.
Consider a linear model with more than a single explanatory variable, of the form
Y_i = \beta_0 + \sum_{j=1}^{p} \beta_j X_{i,j} + \varepsilon_i,
where, for the ith case, Y_i is the response variable, X_{i,1}, \dots, X_{i,p} are p regressors, and \varepsilon_i is a mean-zero error term. The quantities \beta_0, \dots, \beta_p are unknown coefficients, whose values are estimated by least squares.
R2 is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, R2 = 1 indicates that the fitted model explains all variability in y, while R2 = 0 indicates no "linear" relationship between the response variable and the regressors (for straight-line regression, this means that the fitted model is the constant line with slope 0 and intercept \bar{y}).
A caution that applies to R2, as to other statistical descriptions of correlation and association, is that "correlation does not imply causation." In other words, while correlations may sometimes provide valuable clues in uncovering causal relationships among variables, a non-zero estimated correlation between two variables is not, on its own, evidence that changing the value of one variable would result in changes in the values of other variables. For example, the practice of carrying matches (or a lighter) is correlated with incidence of lung cancer, but carrying matches does not cause cancer (in the standard sense of "cause").
In case of a single regressor, fitted by least squares, R2 is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, R2 is the square of the correlation between the constructed predictor and the response variable. With more than one regressor, the R2 can be referred to as the coefficient of multiple determination.
In least squares regression using typical data, R2 is at least weakly increasing with an increase in number of regressors in the model. Because increases in the number of regressors increase the value of R2, R2 alone cannot be used as a meaningful comparison of models with very different numbers of independent variables. For a meaningful comparison between two models, an F-test can be performed on the residual sum of squares, similar to the F-tests in Granger causality, though this is not always appropriate. As a reminder of this, some authors denote R2 by Rq2, where q is the number of columns in X (the number of explanators including the constant).
To demonstrate this property, first recall that the objective of least squares linear regression is
\min_b SS_{\text{res}}(b) \;\Rightarrow\; \min_b \sum_i (y_i - X_i b)^2,
where X_i is a row vector of values of explanatory variables for case i and b is a column vector of coefficients of the respective elements of X_i.
The optimal value of the objective is weakly smaller as additional columns of X (the explanatory data matrix whose ith row is X_i) are added, because a less constrained minimization yields an optimal cost that is weakly smaller than a more constrained one. Given this conclusion, and noting that SS_{\text{tot}} depends only on y, the non-decreasing property of R2 follows directly from the definition above.
The intuitive reason that using an additional explanatory variable cannot lower the R2 is this: minimizing SS_{\text{res}} is equivalent to maximizing R2. When the extra variable is included, the data always have the option of giving it an estimated coefficient of zero, leaving the predicted values and the R2 unchanged. The only way that the optimization problem will give a non-zero coefficient is if doing so improves the R2.
The above gives an analytical explanation of the inflation of R2. Next, an example based on ordinary least squares from a geometric perspective is shown below.[14]
A simple case to be considered first is
Y = \beta_0 + \beta_1 X_1 + \varepsilon,
an ordinary least squares regression with one regressor, whose fitted values are the orthogonal projection of the response onto the model space spanned by the columns of the design matrix. Now consider the larger model
Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon,
whose fitted values are the projection onto a larger model space. Note that the estimates of \beta_0 and \beta_1 generally differ between the two fits as long as X_1 and X_2 are not orthogonal, so the two models are expected to yield different values of R2.
The smaller model space is a subspace of the larger one, and thereby the residual of the smaller model is guaranteed to be at least as large: the residual of the larger model is orthogonal to its model space, and no other vector in that space yields a shorter residual. Considering the calculation for R2, a smaller value of SS_{\text{res}} with SS_{\text{tot}} unchanged leads to a larger value of R2, meaning that adding regressors results in inflation of R2.
R2 by itself does not indicate whether the independent variables are a cause of the changes in the dependent variable, whether omitted-variable bias exists, whether the most appropriate set of independent variables has been chosen, or whether there are enough data points to support a solid conclusion.
The use of an adjusted R2 (one common notation is \bar{R}^2, pronounced "R bar squared"; another is R^2_{\text{a}} or R^2_{\text{adj}}) is an attempt to account for the phenomenon of the R2 automatically increasing when extra explanatory variables are added to the model. The adjusted R2 is defined as
\bar{R}^2 = 1 - \frac{SS_{\text{res}}/\text{df}_{\text{res}}}{SS_{\text{tot}}/\text{df}_{\text{tot}}},
where \text{df}_{\text{res}} = n - p - 1 is the degrees of freedom of the estimate of the underlying population error variance and \text{df}_{\text{tot}} = n - 1 is the degrees of freedom of the estimate of the population variance of the dependent variable.
Inserting the degrees of freedom and using the definition of R2, it can be rewritten as:
\bar{R}^2 = 1 - (1 - R^2)\frac{n-1}{n-p-1}.
The adjusted R2 can be negative, and its value will always be less than or equal to that of R2. Unlike R2, the adjusted R2 increases only when the increase in R2 (due to the inclusion of a new explanatory variable) is more than one would expect to see by chance. If a set of explanatory variables with a predetermined hierarchy of importance are introduced into a regression one at a time, with the adjusted R2 computed each time, the level at which adjusted R2 reaches a maximum, and decreases afterward, would be the regression with the ideal combination of having the best fit without excess/unnecessary terms.
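A sketch of this procedure, using simulated data and plain numpy, computes R2 and adjusted R2 as candidate regressors are added one at a time:

```python
import numpy as np

def adjusted_r_squared(X_cols, y):
    """Fit OLS with an intercept and return (R^2, adjusted R^2)."""
    n, p = len(y), len(X_cols)
    X = np.column_stack([np.ones(n)] + list(X_cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    f = X @ beta
    r2 = 1 - np.sum((y - f)**2) / np.sum((y - y.mean())**2)
    adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return r2, adj

rng = np.random.default_rng(1)
n = 60
x1, x2, noise1, noise2 = (rng.normal(size=n) for _ in range(4))
y = 1.5 * x1 - 1.0 * x2 + rng.normal(size=n)

candidates = [x1, x2, noise1, noise2]       # hierarchy: useful regressors first, noise last
for k in range(1, len(candidates) + 1):
    r2, adj = adjusted_r_squared(candidates[:k], y)
    print(k, round(r2, 3), round(adj, 3))   # R^2 never decreases; adjusted R^2 typically peaks earlier
```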
The adjusted R2 can be interpreted as an instance of the bias-variance tradeoff. When we consider the performance of a model, a lower error represents a better performance. When the model becomes more complex, the variance will increase whereas the square of bias will decrease, and these two metrics add up to the total error. Combining these two trends, the bias-variance tradeoff describes a relationship between the performance of the model and its complexity, which takes the form of a U-shaped curve. For the adjusted R2 specifically, the model complexity (i.e. the number of parameters) affects both the R2 term and the (n-1)/(n-p-1) factor, so the adjusted R2 captures their combined effect on the overall performance of the model.
R2 can be interpreted as the variance of the model, which is influenced by the model complexity. A high R2 indicates a lower bias error, because the model can better explain the change of Y with predictors. For this reason, we make fewer (erroneous) assumptions, and this results in a lower bias error. Meanwhile, to accommodate fewer assumptions, the model tends to be more complex. Based on the bias-variance tradeoff, a higher complexity will lead to a decrease in bias and a better performance (below the optimal level of complexity). In \bar{R}^2, the (1 - R^2) term will be lower with high complexity, resulting in a higher \bar{R}^2, consistently indicating a better performance.
On the other hand, the (n-1)/(n-p-1) factor is affected by the model complexity in the opposite direction: it increases when regressors are added (i.e. with increased model complexity) and pushes \bar{R}^2 down. Based on the bias-variance tradeoff, a higher model complexity (beyond the optimal level) leads to increasing errors and a worse performance.
Considering the calculation of \bar{R}^2, more parameters will increase the R2 and lead to an increase in \bar{R}^2. Nevertheless, adding more parameters will also increase the (n-1)/(n-p-1) factor and thus decrease \bar{R}^2. These two trends construct an inverted-U-shaped relationship between model complexity and \bar{R}^2, which is consistent with the U-shaped trend of model complexity versus overall performance. Unlike R2, which will always increase when model complexity increases, \bar{R}^2 will increase only when the bias eliminated by the added regressor is greater than the variance introduced simultaneously. Using \bar{R}^2 instead of R2 could thereby help prevent overfitting.
Following the same logic, adjusted R2 can be interpreted as a less biased estimator of the population R2, whereas the observed sample R2 is a positively biased estimate of the population value.[18] Adjusted R2 is more appropriate when evaluating model fit (the variance in the dependent variable accounted for by the independent variables) and in comparing alternative models in the feature selection stage of model building.
The principle behind the adjusted R2 statistic can be seen by rewriting the ordinary R2 as
R^2 = 1 - \frac{\text{VAR}_{\text{res}}}{\text{VAR}_{\text{tot}}},
where \text{VAR}_{\text{res}} = SS_{\text{res}}/n and \text{VAR}_{\text{tot}} = SS_{\text{tot}}/n are the sample variances of the estimated residuals and of the dependent variable respectively, which can be seen as biased estimates of the population variances of the errors and of the dependent variable. The adjusted R2 replaces these with statistically unbiased estimates, \text{VAR}_{\text{res}} = SS_{\text{res}}/(n-p) and \text{VAR}_{\text{tot}} = SS_{\text{tot}}/(n-1).
Despite using unbiased estimators for the population variances of the error and the dependent variable, adjusted R2 is not an unbiased estimator of the population R2,[18] which would result from using the population variances of the errors and the dependent variable instead of estimating them. Ingram Olkin and John W. Pratt derived the minimum-variance unbiased estimator for the population R2,[19] which is known as the Olkin–Pratt estimator. Comparisons of different approaches for adjusting R2 concluded that in most situations either an approximate version of the Olkin–Pratt estimator[18] or the exact Olkin–Pratt estimator[20] should be preferred over the (Ezekiel) adjusted R2.
See also: Partial correlation.
The coefficient of partial determination can be defined as the proportion of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model.[21] [22] This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model.
The calculation for the partial R2 is relatively straightforward after estimating two models and generating the ANOVA tables for them. The calculation for the partial R2 is
R^2_{\text{partial}} = \frac{SS_{\text{res,reduced}} - SS_{\text{res,full}}}{SS_{\text{res,reduced}}},
which is analogous to the usual coefficient of determination:
R^2 = \frac{SS_{\text{tot}} - SS_{\text{res}}}{SS_{\text{tot}}}.
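A sketch of this calculation (illustrative simulated data) compares the residual sums of squares of a reduced and a fuller OLS model:

```python
import numpy as np

def ss_res_ols(X_cols, y):
    """Residual sum of squares of an OLS fit of y on the given columns plus an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(X_cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta)**2)

rng = np.random.default_rng(2)
n = 40
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 * x1 + 0.8 * x2 + rng.normal(size=n)

ss_reduced = ss_res_ols([x1], y)            # reduced model, without x2
ss_full = ss_res_ols([x1, x2], y)           # fuller model, with x2 added
partial_r2 = (ss_reduced - ss_full) / ss_reduced
print(partial_r2)   # share of the reduced model's residual variation explained by x2
```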
As explained above, model selection heuristics such as the adjusted R2 criterion and the F-test examine whether the total R2 sufficiently increases to determine if a new regressor should be added to the model. If a regressor is added to the model that is highly correlated with other regressors which have already been included, then the total R2 will hardly increase, even if the new regressor is of relevance. As a result, the above-mentioned heuristics will ignore relevant regressors when cross-correlations are high.[23]
Alternatively, one can decompose a generalized version of R2 to quantify the relevance of deviating from a hypothesis. As Hoornweg (2018) shows, several shrinkage estimators – such as Bayesian linear regression, ridge regression, and the (adaptive) lasso – make use of this decomposition of R2 when they gradually shrink parameters from the unrestricted OLS solutions towards the hypothesized values. Let us first define the linear regression model as
y = X\beta + \varepsilon.
It is assumed that the matrix X is standardized with Z-scores and that the column vector y is centered to have a mean of zero. Let the column vector \beta_0 refer to the hypothesized regression parameters and let the column vector b denote the estimated parameters. We can then define
R^2 = 1 - \frac{(y - Xb)'(y - Xb)}{(y - X\beta_0)'(y - X\beta_0)}.
An R2 of 75% means that the in-sample accuracy improves by 75% if the data-optimized b solutions are used instead of the hypothesized \beta_0 values. In the special case that \beta_0 is a vector of zeros, the traditional R2 is obtained again.
The individual effect on R2 of deviating from a hypothesis can be computed with
R^{\otimes} ("R-outer"), a p \times p matrix given by
R^{\otimes} = (X'\tilde{y}_0)(X'\tilde{y}_0)'(X'X)^{-1}(\tilde{y}_0'\tilde{y}_0)^{-1},
where \tilde{y}_0 = y - X\beta_0. The diagonal elements of R^{\otimes} exactly add up to R2. If regressors are uncorrelated and \beta_0 is a vector of zeros, then the jth diagonal element of R^{\otimes} simply corresponds to the r2 value between x_j and y. When regressors x_i and x_j are correlated, R^{\otimes}_{ii} might increase at the cost of a decrease in R^{\otimes}_{jj}. As a result, the diagonal elements of R^{\otimes} may be smaller than 0 and, in more exceptional cases, larger than 1. To deal with such uncertainties, several shrinkage estimators implicitly take a weighted average of the diagonal elements of R^{\otimes} to quantify the relevance of deviating from a hypothesized value.
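The following sketch implements the decomposition as reconstructed above, assuming X is standardized with Z-scores, y is centered, and the hypothesized parameters are zero, and checks that the diagonal of R⊗ sums to the generalized R2 (data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

# Standardize X with Z-scores and center y, as assumed in the text above.
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = y - y.mean()

beta0 = np.zeros(p)                          # hypothesized parameters
b, *_ = np.linalg.lstsq(X, y, rcond=None)    # unrestricted OLS estimates

y0 = y - X @ beta0                           # deviations from the hypothesized fit
r2_gen = 1 - np.sum((y - X @ b)**2) / np.sum(y0**2)

# R-outer: a p-by-p matrix whose diagonal decomposes the generalized R^2.
Xy0 = (X.T @ y0).reshape(-1, 1)
R_outer = Xy0 @ Xy0.T @ np.linalg.inv(X.T @ X) / (y0 @ y0)

print(np.isclose(np.trace(R_outer), r2_gen))  # the diagonal elements add up to R^2
```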
In the case of logistic regression, usually fit by maximum likelihood, there are several choices of pseudo-R2.
One is the generalized R2 originally proposed by Cox & Snell,[24] and independently by Magee:[25]
R^2 = 1 - \left(\frac{\mathcal{L}(0)}{\mathcal{L}(\widehat{\theta})}\right)^{2/n},
where \mathcal{L}(0) is the likelihood of the model with only the intercept, \mathcal{L}(\widehat{\theta}) is the likelihood of the estimated model (i.e., the model with a given set of parameter estimates) and n is the sample size.
It can be rewritten as
R^2 = 1 - e^{\frac{2}{n}\left(\ln \mathcal{L}(0) - \ln \mathcal{L}(\widehat{\theta})\right)} = 1 - e^{-D/n},
where D is the test statistic of the likelihood ratio test.
Nico Nagelkerke noted that this measure has a number of desirable properties.[26] [27]
However, in the case of a logistic model, where \mathcal{L}(\widehat{\theta}) cannot be greater than 1, R2 is between 0 and
R^2_{\max} = 1 - (\mathcal{L}(0))^{2/n} < 1,
so Nagelkerke suggested the possibility of defining a scaled R2 as R^2 / R^2_{\max}.
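As a small worked sketch of these formulas, suppose (hypothetically) that the intercept-only and fitted logistic models have the log-likelihoods below; the numbers are invented for illustration.

```python
import numpy as np

n = 100                 # sample size (illustrative)
ll_null = -69.3         # log-likelihood of the intercept-only model (hypothetical)
ll_model = -52.1        # log-likelihood of the fitted model (hypothetical)

# Cox & Snell / Magee generalized R^2.
r2_cs = 1 - np.exp((2.0 / n) * (ll_null - ll_model))

# Equivalent form using the likelihood-ratio statistic D.
D = 2.0 * (ll_model - ll_null)
r2_via_D = 1 - np.exp(-D / n)

# Nagelkerke's rescaling divides by the maximum attainable value.
r2_max = 1 - np.exp((2.0 / n) * ll_null)
r2_nagelkerke = r2_cs / r2_max

print(r2_cs, r2_via_D, r2_nagelkerke)
```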
Occasionally, the norm of residuals is used for indicating goodness of fit. This term is calculated as the square-root of the sum of squares of residuals:
\text{norm of residuals} = \sqrt{SS_{\text{res}}} = \|e\|.
Both R2 and the norm of residuals have their relative merits. For least squares analysis R2 varies between 0 and 1, with larger numbers indicating better fits and 1 representing a perfect fit. The norm of residuals varies from 0 to infinity, with smaller numbers indicating better fits and zero indicating a perfect fit. One advantage and disadvantage of R2 is that the SS_{\text{tot}} term acts to normalize the value: if the y_i values are all multiplied by a constant, the norm of residuals will also change by that constant, but R2 will stay the same. As a basic example, consider the linear least squares fit to the following data:
| x | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| y | 1.9 | 3.7 | 5.8 | 8.0 | 9.6 |
R2 = 0.998, and norm of residuals = 0.302. If all values of y are multiplied by 1000 (for example, in an SI prefix change), then R2 remains the same, but norm of residuals = 302.
Another single-parameter indicator of fit is the RMSE of the residuals, or standard deviation of the residuals. This would have a value of 0.135 for the above example given that the fit was linear with an unforced intercept.[28]
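The figures quoted above can be reproduced with a short numpy sketch (straight-line fit with an unforced intercept):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([1.9, 3.7, 5.8, 8.0, 9.6])

def fit_stats(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    r2 = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)
    norm = np.sqrt(np.sum(resid**2))         # norm of residuals
    rmse = np.sqrt(np.mean(resid**2))        # RMSE of the residuals
    return r2, norm, rmse

print(fit_stats(x, y))            # roughly (0.998, 0.302, 0.135)
print(fit_stats(x, 1000 * y))     # R^2 unchanged, norm of residuals scales to ~302
```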
The creation of the coefficient of determination has been attributed to the geneticist Sewall Wright and was first published in 1921.[29]