Cook's distance explained
In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis.[1] In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity, or to indicate regions of the design space where it would be good to be able to obtain more data points. It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.[2][3]
Definition
Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression. Cook's distance measures the effect of deleting a given observation. Points with a large Cook's distance are considered to merit closer examination in the analysis.
For the algebraic expression, first define

\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}

where $\boldsymbol{\varepsilon} \sim \mathcal{N}\left(0, \sigma^2 \mathbf{I}\right)$ is the error term, $\boldsymbol{\beta} = \left[\beta_0\ \beta_1 \cdots \beta_{p-1}\right]^{\mathsf{T}}$ is the coefficient vector, $p$ is the number of covariates or predictors for each observation, and $\mathbf{X}$ is the $n \times p$ design matrix including a constant. The least squares estimator then is $\mathbf{b} = \left(\mathbf{X}^{\mathsf{T}}\mathbf{X}\right)^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$, and consequently the fitted (predicted) values for the mean of $\mathbf{y}$ are

\widehat{\mathbf{y}} = \mathbf{X}\mathbf{b} = \mathbf{X}\left(\mathbf{X}^{\mathsf{T}}\mathbf{X}\right)^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y} = \mathbf{H}\mathbf{y}

where $\mathbf{H} \equiv \mathbf{X}\left(\mathbf{X}^{\mathsf{T}}\mathbf{X}\right)^{-1}\mathbf{X}^{\mathsf{T}}$ is the projection matrix (or hat matrix). The $i$-th diagonal element of $\mathbf{H}$, given by $h_{ii} = \mathbf{x}_i^{\mathsf{T}}\left(\mathbf{X}^{\mathsf{T}}\mathbf{X}\right)^{-1}\mathbf{x}_i$,[4] is known as the leverage of the $i$-th observation. Similarly, the $i$-th element of the residual vector $\mathbf{e} = \left(\mathbf{I} - \mathbf{H}\right)\mathbf{y}$ is denoted by $e_i$.
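To make these quantities concrete, here is a minimal NumPy sketch with a small made-up dataset (the data and variable names are illustrative, not from any particular source) that computes the hat matrix, the leverages $h_{ii}$, and the residuals $e_i$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n observations, one predictor plus a constant column.
n = 20
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), x])   # design matrix including a constant
p = X.shape[1]                         # rank of the model (here 2)

# Hat (projection) matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)                  # leverages h_ii
e = y - H @ y                          # residuals e = (I - H) y
s2 = e @ e / (n - p)                   # mean squared error s^2
```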
Cook's distance $D_i$ of observation $i$ (for $i = 1, \dots, n$) is defined as the sum of all the changes in the regression model when observation $i$ is removed from it:[5]

D_i = \frac{\sum_{j=1}^{n} \left(\widehat{y}_j - \widehat{y}_{j(i)}\right)^2}{p s^2}

where $p$ is the rank of the model (i.e., the number of independent variables in the design matrix), $\widehat{y}_{j(i)}$ is the fitted response value obtained when excluding observation $i$, and $s^2 = \frac{\mathbf{e}^{\mathsf{T}}\mathbf{e}}{n-p}$ is the mean squared error of the regression model.[6]

Equivalently, it can be expressed using the leverage[5] ($h_{ii}$):

D_i = \frac{e_i^2}{p s^2}\left[\frac{h_{ii}}{\left(1 - h_{ii}\right)^2}\right]
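The two formulations can be checked against each other numerically. Continuing the sketch above (again an illustrative example, not a canonical implementation), the brute-force deletion definition and the leverage shortcut agree:

```python
# Cook's distance via the deletion definition: refit without observation i.
D_deletion = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    y_hat_i = X @ b_i                  # fitted values y_hat_{j(i)} for all j
    D_deletion[i] = np.sum((H @ y - y_hat_i) ** 2) / (p * s2)

# Cook's distance via the leverage formula -- no refitting needed.
D_leverage = e**2 / (p * s2) * leverage / (1 - leverage) ** 2

assert np.allclose(D_deletion, D_leverage)
```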
Detecting highly influential observations
There are different opinions regarding what cut-off values to use for spotting highly influential points. Since Cook's distance is in the metric of an $F$ distribution with $p$ and $n - p$ (as defined for the design matrix $\mathbf{X}$ above) degrees of freedom, the median point (i.e., $F_{0.5}(p, n-p)$) can be used as a cut-off.[7] Since this value is close to 1 for large $n$, a simple operational guideline of $D_i > 1$ has been suggested.[8]
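As a sketch of how such a cut-off might be applied in practice (the threshold choice is the analyst's, as the text notes), continuing the example above:

```python
from scipy import stats

# Median of the F(p, n - p) distribution as a cut-off ...
f_median = stats.f.ppf(0.5, p, n - p)
flagged = np.where(D_leverage > f_median)[0]

# ... or the simpler operational rule D_i > 1 for large n.
flagged_simple = np.where(D_leverage > 1.0)[0]
```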
The $n$-dimensional random vector $\widehat{\mathbf{y}} - \widehat{\mathbf{y}}_{(i)}$, which is the change of $\widehat{\mathbf{y}}$ due to a deletion of the $i$-th observation, has a covariance matrix of rank one and is therefore distributed entirely over a one-dimensional subspace (a line, say $L$) of the $n$-dimensional space. The distributional property of $\widehat{\mathbf{y}} - \widehat{\mathbf{y}}_{(i)}$ mentioned above implies that information about the influence of the $i$-th observation provided by $\widehat{\mathbf{y}} - \widehat{\mathbf{y}}_{(i)}$ should be obtained not from outside of the line $L$ but from the line $L$ itself. However, in the introduction of Cook's distance, a scaling matrix of full rank $n$ is chosen, and as a result $\widehat{\mathbf{y}} - \widehat{\mathbf{y}}_{(i)}$ is treated as if it were a random vector distributed over the whole space of $n$ dimensions. This means that information about the influence of the $i$-th observation provided by $\widehat{\mathbf{y}} - \widehat{\mathbf{y}}_{(i)}$ through the Cook's distance comes from the whole space of $n$ dimensions. Hence the Cook's distance measure is likely to distort the real influence of observations, misleading the correct identification of influential observations.[9][10]
Relationship to other influence measures (and interpretation)
$D_i$ can be expressed using the leverage[5] ($0 \le h_{ii} \le 1$) and the square of the internally Studentized residual ($0 \le t_i^2$), as follows:

\begin{align}
D_i &= \frac{e_i^2}{p s^2} \cdot \frac{h_{ii}}{\left(1 - h_{ii}\right)^2}
= \frac{1}{p} \cdot \frac{e_i^2}{s^2 \left(1 - h_{ii}\right)} \cdot \frac{h_{ii}}{1 - h_{ii}} \\[5pt]
&= \frac{1}{p} \cdot t_i^2 \cdot \frac{h_{ii}}{1 - h_{ii}}.
\end{align}
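Under the standard definitions used here, the internally Studentized residual is $t_i = e_i / \left(s\sqrt{1 - h_{ii}}\right)$, so this factorization is easy to confirm numerically (continuing the illustrative sketch):

```python
# Internally Studentized residuals and the equivalent Cook's distance form.
t = e / np.sqrt(s2 * (1 - leverage))
D_studentized = t**2 / p * leverage / (1 - leverage)

assert np.allclose(D_studentized, D_leverage)
```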
The benefit of the last formulation is that it clearly shows the relationship of $t_i^2$ and $h_{ii}$ to $D_i$ (while $p$ and $n$ are the same for all observations). If $t_i^2$ is large then (for non-extreme values of $h_{ii}$) it will increase $D_i$. If $h_{ii}$ is close to 0 then $D_i$ will be small, while if $h_{ii}$ is close to 1 then $D_i$ will become very large (as long as $t_i^2 > 0$, i.e., the observation $i$ is not exactly on the regression line that was fitted without observation $i$).
$D_i$ is related to DFFITS through the following relationship (note that ${\widehat{\sigma} \over \widehat{\sigma}_{(i)}} t_i = t_{i(i)}$ is the externally Studentized residual, where $\widehat{\sigma}^2$ and $\widehat{\sigma}_{(i)}^2$ denote the estimated error variances of the fits with and without observation $i$, respectively):

\begin{align}
D_i &= \frac{1}{p} \cdot t_i^2 \cdot \frac{h_{ii}}{1 - h_{ii}} \\
&= \frac{1}{p} \cdot \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \cdot \frac{\widehat{\sigma}^2}{\widehat{\sigma}_{(i)}^2} \cdot t_i^2 \cdot \frac{h_{ii}}{1 - h_{ii}}
= \frac{1}{p} \cdot \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \cdot \left(t_{i(i)} \sqrt{\frac{h_{ii}}{1 - h_{ii}}}\right)^2 \\
&= \frac{1}{p} \cdot \frac{\widehat{\sigma}_{(i)}^2}{\widehat{\sigma}^2} \cdot \text{DFFITS}^2
\end{align}
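This identity can also be checked numerically. A brief sketch, assuming the usual definitions ($\widehat{\sigma}_{(i)}^2$ is the error variance of the fit without observation $i$, and $\text{DFFITS}_i = t_{i(i)}\sqrt{h_{ii}/(1-h_{ii})}$):

```python
# Externally Studentized residuals via the deleted error variance.
sigma2 = s2                            # full-fit error variance
sigma2_del = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    resid_i = y[keep] - X[keep] @ b_i
    sigma2_del[i] = resid_i @ resid_i / (n - 1 - p)

t_ext = e / np.sqrt(sigma2_del * (1 - leverage))
dffits = t_ext * np.sqrt(leverage / (1 - leverage))

D_from_dffits = sigma2_del / sigma2 * dffits**2 / p
assert np.allclose(D_from_dffits, D_leverage)
```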
$D_i$ can be interpreted as the distance one's estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters. This is shown by an alternative but equivalent representation of Cook's distance in terms of changes to the estimates of the regression parameters between the cases where the particular observation is either included or excluded from the regression analysis.
An alternative to $D_i$ has been proposed. Instead of considering the influence a single observation has on the overall model, the statistic $S_i$ serves as a measure of how sensitive the prediction of the $i$-th observation is to the deletion of each observation in the original data set. It can be formulated as a weighted linear combination of the $D_j$'s of all data points. Again, the projection matrix is involved in the calculation to obtain the required weights:

S_i = \frac{\sum_{j=1}^{n} \left(\widehat{y}_i - \widehat{y}_{i(j)}\right)^2}{p s^2 h_{ii}} = \sum_{j=1}^{n} \frac{h_{ij}^2}{h_{ii} h_{jj}} \cdot D_j = \sum_{j=1}^{n} \rho_{ij}^2 \cdot D_j

In this context, $\rho_{ij}$ ($= h_{ij} / \sqrt{h_{ii} h_{jj}}$) resembles the correlation between the predictions $\widehat{y}_i$ and $\widehat{y}_j$.
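The weighted-combination form lends itself to a short numerical check, continuing the sketch (here $S_i$ is computed directly from the hat matrix rather than by repeated deletion):

```python
# S_i as a weighted linear combination of all D_j,
# with weights rho_ij^2 = h_ij^2 / (h_ii * h_jj).
rho2 = H**2 / np.outer(leverage, leverage)
S = rho2 @ D_leverage
```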
In contrast to $D_i$, the distribution of $S_i$ is asymptotically normal for large sample sizes and models with many predictors. In the absence of outliers the expected value of $S_i$ is approximately $p^{-1}$. An influential observation can be identified if

\left| S_i - \operatorname{med}(S) \right| \geq 4.5 \cdot \operatorname{MAD}(S)

with $\operatorname{med}(S)$ as the median and $\operatorname{MAD}(S)$ as the median absolute deviation of all $S$-values within the original data set, i.e., a robust measure of location and a robust measure of scale for the distribution of $S_i$. The factor 4.5 covers approximately 3 standard deviations of $S$ around its centre.
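The corresponding flagging rule is straightforward to express as a sketch (`scipy.stats.median_abs_deviation` is used here in its default unscaled form):

```python
from scipy.stats import median_abs_deviation

med_S = np.median(S)
mad_S = median_abs_deviation(S)        # unscaled MAD by default
influential = np.where(np.abs(S - med_S) >= 4.5 * mad_S)[0]
```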
When compared to Cook's distance, $S_i$ was found to perform well for high- and intermediate-leverage outliers, even in the presence of masking effects for which $D_i$ failed.[11] Interestingly, $D_i$ and $S_i$ are closely related because they can both be expressed in terms of the matrix $\mathbf{T}$, which holds the effects of the deletion of the $j$-th data point on the $i$-th prediction:

\begin{align}
\mathbf{T} &= \left[\begin{matrix}
\widehat{y}_1 - \widehat{y}_{1(1)} & \widehat{y}_1 - \widehat{y}_{1(2)} & \cdots & \widehat{y}_1 - \widehat{y}_{1(n)} \\
\widehat{y}_2 - \widehat{y}_{2(1)} & \widehat{y}_2 - \widehat{y}_{2(2)} & \cdots & \widehat{y}_2 - \widehat{y}_{2(n)} \\
\vdots & \vdots & \ddots & \vdots \\
\widehat{y}_n - \widehat{y}_{n(1)} & \widehat{y}_n - \widehat{y}_{n(2)} & \cdots & \widehat{y}_n - \widehat{y}_{n(n)}
\end{matrix}\right] \\[5pt]
&= \mathbf{H}\mathbf{E}\mathbf{G} = \mathbf{H}
\left[\begin{matrix}
e_1 & 0 & \cdots & 0 \\
0 & e_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & e_n
\end{matrix}\right]
\left[\begin{matrix}
\frac{1}{1-h_{11}} & 0 & \cdots & 0 \\
0 & \frac{1}{1-h_{22}} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \frac{1}{1-h_{nn}}
\end{matrix}\right]
\end{align}
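A quick way to see that $\mathbf{T} = \mathbf{H}\mathbf{E}\mathbf{G}$ (entry-wise, $T_{ij} = h_{ij}\, e_j / (1 - h_{jj})$) is to compare it against brute-force deletion, continuing the sketch:

```python
# T via the closed form H E G ...
E = np.diag(e)
G = np.diag(1 / (1 - leverage))
T = H @ E @ G

# ... versus brute-force deletion: column j holds y_hat - y_hat_(j).
T_brute = np.empty((n, n))
for j in range(n):
    keep = np.arange(n) != j
    b_j = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    T_brute[:, j] = H @ y - X @ b_j

assert np.allclose(T, T_brute)
```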
With $\mathbf{T}$ at hand, $\mathbf{D}$ is given by:

\mathbf{D} = \left[\begin{matrix} D_1 \\ D_2 \\ \vdots \\ D_{n-1} \\ D_n \end{matrix}\right]
= \frac{1}{p s^2}\operatorname{diag}\left(\mathbf{T}^{\mathsf{T}}\mathbf{T}\right)
= \frac{1}{p s^2}\operatorname{diag}\left(\mathbf{G}\mathbf{E}\mathbf{H}^{\mathsf{T}}\mathbf{H}\mathbf{E}\mathbf{G}\right)
= \frac{1}{p s^2}\operatorname{diag}(\mathbf{M})

where $\mathbf{H}^{\mathsf{T}}\mathbf{H} = \mathbf{H}$ if $\mathbf{H}$ is symmetric and idempotent, which is not necessarily the case. In contrast, $\mathbf{S}$ can be calculated as:
\begin{align}
\mathbf{S} &= \left[\begin{matrix} S_1 \\ S_2 \\ \vdots \\ S_{n-1} \\ S_n \end{matrix}\right]
= \frac{1}{p s^2}\mathbf{F}\operatorname{diag}\left(\mathbf{T}\mathbf{T}^{\mathsf{T}}\right)
= \frac{1}{p s^2}\left[\begin{matrix}
\frac{1}{h_{11}} & 0 & \cdots & 0 \\
0 & \frac{1}{h_{22}} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \frac{1}{h_{nn}}
\end{matrix}\right]\operatorname{diag}\left(\mathbf{T}\mathbf{T}^{\mathsf{T}}\right) \\[5pt]
&= \frac{1}{p s^2}\mathbf{F}\operatorname{diag}\left(\mathbf{H}\mathbf{E}\mathbf{G}\mathbf{G}\mathbf{E}\mathbf{H}^{\mathsf{T}}\right)
= \frac{1}{p s^2}\mathbf{F}\operatorname{diag}(\mathbf{P})
\end{align}
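Both vectors then follow from $\mathbf{T}$ with two diagonal extractions, as the following continuation of the sketch confirms:

```python
# D and S recovered jointly from T.
F = np.diag(1 / leverage)
D_from_T = np.diag(T.T @ T) / (p * s2)
S_from_T = F @ np.diag(T @ T.T) / (p * s2)

assert np.allclose(D_from_T, D_leverage)
assert np.allclose(S_from_T, S)
```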
where $\operatorname{diag}(\mathbf{A})$ extracts the main diagonal of a square matrix $\mathbf{A}$. In this context, $\mathbf{M}$ is referred to as the influence matrix whereas $\mathbf{P}$ resembles the so-called sensitivity matrix. An eigenvector analysis of $\mathbf{M}$ and $\mathbf{P}$, which both share the same eigenvalues, serves as a tool in outlier detection, although the eigenvectors of the sensitivity matrix are more powerful.[12]
Software implementations
Many programs and statistics packages, such as R, Python, and Julia, include implementations of Cook's distance.
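For instance, in Python the statsmodels package exposes Cook's distance through its influence diagnostics. A brief usage sketch, assuming the toy arrays `X` and `y` from the examples above:

```python
import statsmodels.api as sm

model = sm.OLS(y, X).fit()
influence = model.get_influence()
cooks_d, pvalues = influence.cooks_distance   # distances and F-based p-values

assert np.allclose(cooks_d, D_leverage)
```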
Extensions
High-dimensional influence measure (HIM) is an alternative to Cook's distance for when $p > n$ (i.e., when there are more predictors than observations).[13] While Cook's distance quantifies an individual observation's influence on the least squares regression coefficient estimate, the HIM measures the influence of an observation on the marginal correlations.
Further reading
- Atkinson, Anthony; Riani, Marco (2000). "Deletion Diagnostics". Robust Diagnostic Regression Analysis. New York: Springer. pp. 22–25. ISBN 0-387-95017-6. https://books.google.com/books?id=X0dPBOJ_L4UC&pg=PA22
- Heiberger, Richard M.; Holland, Burt (2013). "Case Statistics". Statistical Analysis and Data Display. Springer Science & Business Media. pp. 312–327. ISBN 9781475742848. https://books.google.com/books?id=co3gBwAAQBAJ&pg=PA312
- Krasker, William S.; Kuh, Edwin; Welsch, Roy E. (1983). "Estimation for dirty data and flawed models". Handbook of Econometrics. Vol. 1. Elsevier. pp. 651–698. doi:10.1016/S1573-4412(83)01015-6. ISBN 9780444861856.
- Aguinis, Herman; Gottfredson, Ryan K.; Joo, Harry (2013). "Best-Practice Recommendations for Defining, Identifying, and Handling Outliers". Organizational Research Methods. 16 (2). Sage: 270–301. doi:10.1177/1094428112470848.
Notes and References
- Mendenhall, William; Sincich, Terry (1996). A Second Course in Statistics: Regression Analysis (5th ed.). Upper Saddle River, NJ: Prentice-Hall. p. 422. ISBN 0-13-396821-9. "A measure of overall influence an outlying observation has on the estimated β coefficients was proposed by R. D. Cook (1979). Cook's distance, Di, is calculated...."
- Cook, R. Dennis (February 1977). "Detection of Influential Observations in Linear Regression". Technometrics. 19 (1): 15–18. doi:10.2307/1268249. JSTOR 1268249. MR 0436478.
- Cook, R. Dennis (March 1979). "Influential Observations in Linear Regression". Journal of the American Statistical Association. 74 (365): 169–174. doi:10.2307/2286747. JSTOR 2286747. MR 0529533. hdl:11299/199280.
- Hayashi, Fumio (2000). Econometrics. Princeton University Press. pp. 21–23. ISBN 1400823838.
- "Cook's Distance" (web site).
- "Statistics 512: Applied Linear Models" (PDF). Purdue University. Archived from the original (https://web.archive.org/web/20161130150249/http://www.stat.purdue.edu/~jennings/stat514/stat512notes/topic3.pdf#page=9) on 2016-11-30. Retrieved 2016-03-25.
- Bollen, Kenneth A.; Jackman, Robert W. (1990). "Regression Diagnostics: An Expository Treatment of Outliers and Influential Cases". In Fox, John; Long, J. Scott (eds.). Modern Methods of Data Analysis. Newbury Park, CA: Sage. p. 266. ISBN 0-8039-3366-5.
- Cook, R. Dennis; Weisberg, Sanford (1982). Residuals and Influence in Regression. New York, NY: Chapman & Hall. ISBN 0-412-24280-X. hdl:11299/37076.
- Kim, Myung Geun (31 May 2017). "A cautionary note on the use of Cook's distance". Communications for Statistical Applications and Methods. 24 (3): 317–324. doi:10.5351/csam.2017.24.3.317. ISSN 2383-4757.
- "On deletion diagnostic statistic in regression". arXiv:2012.14127v3. https://arxiv.org/abs/2012.14127v3
- Peña, Daniel (2005). "A New Statistic for Influence in Linear Regression". Technometrics. 47 (1): 1–12. doi:10.1198/004017004000000662.
- Peña, Daniel (2006). In Pham, Hoang (ed.). Springer Handbook of Engineering Statistics. Springer London. pp. 523–536. doi:10.1007/978-1-84628-288-1. ISBN 978-1-84628-288-1.
- "High-dimensional influence measure". https://projecteuclid.org/euclid.aos/1384871348