In statistics, DFFIT and DFFITS ("difference in fit(s)") are diagnostics meant to show how influential a point is in a linear regression, first proposed in 1980.[1]
DFFIT is the change in the predicted value for a point, obtained when that point is left out of the regression:
\mathrm{DFFIT} = \widehat{y}_i - \widehat{y}_{i(i)}
where \widehat{y}_i and \widehat{y}_{i(i)} are the prediction for point i with and without point i included in the regression.
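As a concrete illustration, the sketch below computes DFFIT for every observation by explicitly refitting the regression with that observation removed. The design matrix X and response y are made-up data, and numpy's generic least-squares routine stands in for whatever fitting procedure is actually used.

```python
# Minimal sketch of DFFIT via explicit leave-one-out refitting (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
n = 20
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one predictor
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)        # fit with all points included

def dffit(i):
    """DFFIT_i: prediction at point i from the full fit minus the prediction
    from the fit with point i removed."""
    mask = np.arange(n) != i
    beta_loo, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    return X[i] @ beta_full - X[i] @ beta_loo

dffit_values = np.array([dffit(i) for i in range(n)])
```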
DFFITS is the Studentized DFFIT, where Studentization is achieved by dividing by the estimated standard deviation of the fit at that point:
\mathrm{DFFITS} = \frac{\mathrm{DFFIT}}{s_{(i)} \sqrt{h_{ii}}}
where s_{(i)} is the standard error estimated without the point in question, and h_{ii} is the leverage for the point.
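A minimal sketch of this scaling on the same kind of made-up data: s_{(i)} is estimated from the residuals of the fit without point i, and h_{ii} is read off the diagonal of the hat matrix.

```python
# Sketch of DFFITS = DFFIT / (s_(i) * sqrt(h_ii)) on illustrative data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 2                                             # observations, fitted parameters
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)            # leverages h_ii
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

def dffits(i):
    mask = np.arange(n) != i
    beta_loo, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    resid_loo = y[mask] - X[mask] @ beta_loo
    s_i = np.sqrt(resid_loo @ resid_loo / (n - 1 - p))   # s_(i): error scale without point i
    dffit_i = X[i] @ beta_full - X[i] @ beta_loo
    return dffit_i / (s_i * np.sqrt(h[i]))

dffits_values = np.array([dffits(i) for i in range(n)])
```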
DFFITS also equals the product of the externally Studentized residual, t_{i(i)}, and the leverage factor \sqrt{h_{ii}/(1-h_{ii})}:
\mathrm{DFFITS} = t_{i(i)} \sqrt{\frac{h_{ii}}{1-h_{ii}}}
Thus, for low-leverage points DFFITS is expected to be small, whereas as the leverage approaches 1 the distribution of the DFFITS value widens without bound.
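The identity above is easy to check numerically. The sketch below uses statsmodels' influence diagnostics; the attribute names (resid_studentized_external, hat_matrix_diag, dffits) are the ones I believe the library exposes, so treat them as assumptions rather than a verified recipe.

```python
# Sketch: DFFITS from the externally Studentized residuals and the leverage factor,
# compared against the values statsmodels reports (illustrative data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=20)

infl = sm.OLS(y, sm.add_constant(x)).fit().get_influence()
t_ext = infl.resid_studentized_external        # t_{i(i)}
h = infl.hat_matrix_diag                       # h_ii
dffits_identity = t_ext * np.sqrt(h / (1 - h))

dffits_reported, _cutoff = infl.dffits         # statsmodels returns (values, threshold)
assert np.allclose(dffits_identity, dffits_reported)
```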
For a perfectly balanced experimental design (such as a factorial design or balanced partial factorial design), the leverage for each point is p/n, the number of parameters divided by the number of points. This means that the DFFITS values will be distributed (in the Gaussian case) as
\sqrt{\frac{p}{n-p}} \approx \sqrt{\frac{p}{n}}
times a standard normal variate. A common rule of thumb is therefore to examine more closely any point whose |DFFITS| exceeds
2\sqrt{\frac{p}{n}}
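A small helper illustrating that rule of thumb; the function name and interface are hypothetical, and dffits_values is assumed to come from a computation like the sketches above.

```python
# Sketch: flag observations whose |DFFITS| exceeds 2 * sqrt(p / n).
import numpy as np

def flag_influential(dffits_values, p, n):
    cutoff = 2 * np.sqrt(p / n)
    return np.flatnonzero(np.abs(dffits_values) > cutoff)   # indices worth a closer look
```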
Although the raw values resulting from the equations are different, Cook's distance and DFFITS are conceptually identical and there is a closed-form formula to convert one value to the other.[3]
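One way to write that conversion, using the quantities defined above (this form follows from the standard definitions rather than being quoted from the cited source), is
D_i = \mathrm{DFFITS}_i^2 \cdot \frac{s_{(i)}^2}{p\, s^2}
where D_i is Cook's distance and s is the residual standard deviation of the full fit.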
Previously, when assessing a dataset before running a linear regression, the possibility of outliers was judged from histograms and scatterplots. Both methods are subjective, and they give little indication of how much leverage each potential outlier exerts on the fitted results. This motivated the development of quantitative measures such as DFFIT and DFBETA.