Leverage (statistics) explained

In statistics and in particular in regression analysis, leverage is a measure of how far away the independent variable values of an observation are from those of the other observations. High-leverage points, if any, are outliers with respect to the independent variables. That is, high-leverage points have no neighboring points in $\mathbb{R}^p$ space, where $p$ is the number of independent variables in a regression model. This makes the fitted model likely to pass close to a high-leverage observation.[1] Hence high-leverage points have the potential to cause large changes in the parameter estimates when they are deleted, i.e., to be influential points. Although an influential point will typically have high leverage, a high-leverage point is not necessarily an influential point. The leverage is typically defined as the diagonal elements of the hat matrix.

Definition and interpretations

Consider the linear regression model $y_i = \boldsymbol{x}_i^\top \boldsymbol{\beta} + \varepsilon_i$, $i = 1, 2, \ldots, n$. That is, $\boldsymbol{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}$, where $X$ is the $n \times p$ design matrix whose rows correspond to the observations and whose columns correspond to the independent or explanatory variables. The leverage score for the $i$-th independent observation $\boldsymbol{x}_i$ is given as

$h_{ii} = \left[H\right]_{ii} = \boldsymbol{x}_i^\top \left(X^\top X\right)^{-1} \boldsymbol{x}_i,$

the $i$-th diagonal element of the orthogonal projection matrix (a.k.a. hat matrix) $H = X\left(X^\top X\right)^{-1} X^\top$. Thus the $i$-th leverage score can be viewed as the 'weighted' distance between $\boldsymbol{x}_i$ and the mean of the $\boldsymbol{x}_i$'s (see its relation with Mahalanobis distance). It can also be interpreted as the degree by which the $i$-th measured (dependent) value (i.e., $y_i$) influences the $i$-th fitted (predicted) value (i.e., $\widehat{y}_i$): mathematically,

$h_{ii} = \dfrac{\partial \widehat{y}_i}{\partial y_i}.$

Hence, the leverage score is also known as the observation self-sensitivity or self-influence.[2] Using the fact that $\widehat{\boldsymbol{y}} = H\boldsymbol{y}$ (i.e., the prediction $\widehat{\boldsymbol{y}}$ is the orthogonal projection of $\boldsymbol{y}$ onto the range space of $X$) in the above expression, we get $h_{ii} = \left[H\right]_{ii}$. Note that this leverage depends on the values of the explanatory variables $(X)$ of all observations but not on any of the values of the dependent variables $(y_i)$.
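
As a concrete illustration, the following minimal NumPy sketch (using a synthetic design matrix rather than any particular dataset) computes the leverage scores as the diagonal of the hat matrix, and also via the equivalent per-row form $\boldsymbol{x}_i^\top (X^\top X)^{-1}\boldsymbol{x}_i$ that avoids building the full $n \times n$ matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3                              # 50 observations, 3 parameters (incl. intercept)
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

    # Hat matrix H = X (X^T X)^{-1} X^T; its diagonal holds the leverages h_ii.
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)

    # Equivalent row-wise computation, without forming the full n x n matrix.
    h_fast = np.einsum('ij,ji->i', X, np.linalg.solve(X.T @ X, X.T))

    print(np.allclose(h, h_fast))             # True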

Properties

  1. The leverage $h_{ii}$ is a number between 0 and 1, $0 \leq h_{ii} \leq 1$.

Proof: Note that $H$ is an idempotent matrix ($H^2 = H$) and symmetric ($h_{ij} = h_{ji}$). Thus, by using the fact that $\left[H^2\right]_{ii} = \left[H\right]_{ii}$, we have $h_{ii} = h_{ii}^2 + \sum_{j \neq i} h_{ij}^2$. Since we know that $\sum_{j \neq i} h_{ij}^2 \geq 0$, we have $h_{ii} \geq h_{ii}^2 \implies 0 \leq h_{ii} \leq 1$.
  2. The sum of the leverages is equal to the number of parameters $p$ in $\boldsymbol{\beta}$ (including the intercept).

Proof: $\sum_{i=1}^{n} h_{ii} = \operatorname{Tr}(H) = \operatorname{Tr}\left(X\left(X^\top X\right)^{-1}X^\top\right) = \operatorname{Tr}\left(X^\top X\left(X^\top X\right)^{-1}\right) = \operatorname{Tr}(I_p) = p$.
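
Both properties are easy to confirm numerically; a short sketch along these lines (again with a synthetic design matrix) checks them for one random example:

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 40, 4
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    h = np.einsum('ij,ji->i', X, np.linalg.solve(X.T @ X, X.T))  # leverages h_ii

    # Property 1: every leverage lies in [0, 1].
    assert np.all((h >= 0) & (h <= 1))

    # Property 2: the leverages sum to the number of parameters p (the trace of H).
    assert np.isclose(h.sum(), p)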

Determination of outliers in X using leverages

A large leverage $h_{ii}$ corresponds to an $\boldsymbol{x}_i$ that is extreme. A common rule is to identify those $\boldsymbol{x}_i$ whose leverage value $h_{ii}$ is more than twice the mean leverage $\bar{h} = \dfrac{1}{n}\sum_{i=1}^{n} h_{ii} = \dfrac{p}{n}$ (see property 2 above). That is, if $h_{ii} > 2\dfrac{p}{n}$, then $\boldsymbol{x}_i$ is considered an outlier. Some statisticians prefer the threshold of $3p/n$ instead of $2p/n$.
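
A sketch of this rule of thumb, using a hypothetical high_leverage_points helper and a synthetic example in which one covariate value is made extreme:

    import numpy as np

    def high_leverage_points(X, factor=2.0):
        """Indices of rows whose leverage exceeds factor * p / n (factor = 2 or 3)."""
        n, p = X.shape
        h = np.einsum('ij,ji->i', X, np.linalg.solve(X.T @ X, X.T))
        return np.flatnonzero(h > factor * p / n)

    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(30), rng.normal(size=30)])
    X[0, 1] = 10.0                          # one extreme covariate value
    print(high_leverage_points(X))          # typically flags observation 0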

Relation to Mahalanobis distance

Leverage is closely related to the Mahalanobis distance (proof[3]). Specifically, for some $n \times p$ matrix $X$, the squared Mahalanobis distance of $\boldsymbol{x}_i$ (where $\boldsymbol{x}_i^\top$ is the $i$-th row of $X$) from the vector of means $\widehat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^{n} \boldsymbol{x}_i$ of length $p$ is

$D^2(\boldsymbol{x}_i) = (\boldsymbol{x}_i - \widehat{\boldsymbol{\mu}})^\top S^{-1} (\boldsymbol{x}_i - \widehat{\boldsymbol{\mu}}),$

where $S$ is the sample covariance matrix of the $\boldsymbol{x}_i$'s. This is related to the leverage $h_{ii}$ of the hat matrix of $X$ after appending a column vector of 1's to it. The relationship between the two is

$D^2(\boldsymbol{x}_i) = (n-1)\left(h_{ii} - \tfrac{1}{n}\right).$

This relationship enables us to decompose leverage into meaningful components so that some sources of high leverage can be investigated analytically.[4]
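
The relationship can be checked numerically; in the sketch below the sample covariance (with the usual $n-1$ normalization) is used for $S$, and the column of ones is appended only when computing the leverages:

    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 25, 2
    Z = rng.normal(size=(n, p))                    # covariates, no intercept column yet

    # Squared Mahalanobis distance from the sample mean, using the sample covariance.
    mu = Z.mean(axis=0)
    S = np.cov(Z, rowvar=False)                    # (n - 1)-normalized covariance
    d2 = np.einsum('ij,ji->i', Z - mu, np.linalg.solve(S, (Z - mu).T))

    # Leverages of the design matrix with a column of ones appended.
    X = np.column_stack([np.ones(n), Z])
    h = np.einsum('ij,ji->i', X, np.linalg.solve(X.T @ X, X.T))

    print(np.allclose(d2, (n - 1) * (h - 1 / n)))  # True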

Relation to influence functions

In a regression context, we combine leverage and influence functions to compute the degree to which estimated coefficients would change if we removed a single data point. Denoting the regression residuals as $\widehat{e}_i = y_i - \boldsymbol{x}_i^\top \widehat{\boldsymbol{\beta}}$, one can compare the estimated coefficient $\widehat{\boldsymbol{\beta}}$ to the leave-one-out estimated coefficient $\widehat{\boldsymbol{\beta}}^{(-i)}$ using the formula[5][6]

$\widehat{\boldsymbol{\beta}} - \widehat{\boldsymbol{\beta}}^{(-i)} = \dfrac{(X^\top X)^{-1}\boldsymbol{x}_i \widehat{e}_i}{1 - h_{ii}}.$

Young (2019) uses a version of this formula after residualizing controls.[7] To gain intuition for this formula, note that $\dfrac{\partial \widehat{\boldsymbol{\beta}}}{\partial y_i} = (X^\top X)^{-1}\boldsymbol{x}_i$ captures the potential for an observation to affect the regression parameters, and therefore $(X^\top X)^{-1}\boldsymbol{x}_i \widehat{e}_i$ captures the actual influence of that observation's deviation from its fitted value on the regression parameters. The formula then divides by $(1 - h_{ii})$ to account for the fact that we remove the observation rather than adjusting its value, reflecting the fact that removal changes the distribution of covariates more when applied to high-leverage observations (i.e., those with outlier covariate values). Similar formulas arise when applying general formulas for statistical influence functions in the regression context.[8][9]
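
As a sanity check on the formula, the sketch below (synthetic data, plain least squares) compares the closed-form coefficient change against an explicit leave-one-out refit:

    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 30, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                           # full-sample OLS estimate
    e = y - X @ beta                                   # residuals e_i
    h = np.einsum('ij,ji->i', X, XtX_inv @ X.T)        # leverages h_ii

    i = 0                                              # observation to drop
    beta_loo = np.linalg.lstsq(np.delete(X, i, 0), np.delete(y, i, 0), rcond=None)[0]

    # Closed-form change in coefficients from the leave-one-out formula.
    delta = XtX_inv @ X[i] * e[i] / (1 - h[i])
    print(np.allclose(beta - beta_loo, delta))         # True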

Effect on residual variance

If we are in an ordinary least squares setting with fixed $X$ and homoscedastic regression errors $\varepsilon_i$,

$\boldsymbol{y} = X\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad \operatorname{Var}(\boldsymbol{\varepsilon}) = \sigma^2 I,$

then the $i$-th regression residual $e_i = y_i - \widehat{y}_i$ has variance

$\operatorname{Var}(e_i) = (1 - h_{ii})\sigma^2.$

In other words, an observation's leverage score determines the degree of noise in the model's misprediction of that observation, with higher leverage leading to less noise. This follows from the fact that $I - H$ is idempotent and symmetric and $\widehat{\boldsymbol{y}} = H\boldsymbol{y}$, hence

$\operatorname{Var}(\boldsymbol{e}) = \operatorname{Var}((I - H)\boldsymbol{y}) = (I - H)\operatorname{Var}(\boldsymbol{y})(I - H)^\top = \sigma^2 (I - H)^2 = \sigma^2 (I - H).$

The corresponding studentized residual, the residual adjusted for its observation-specific estimated residual variance, is then

$t_i = \dfrac{e_i}{\widehat{\sigma}\sqrt{1 - h_{ii}}}$

where $\widehat{\sigma}$ is an appropriate estimate of $\sigma$.
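
A minimal sketch of this adjustment, assuming the usual internal estimate $\widehat{\sigma}^2 = \sum_i e_i^2/(n-p)$ (other choices, such as a leave-one-out estimate, are also common):

    import numpy as np

    rng = np.random.default_rng(5)
    n, p = 40, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([0.5, 1.5]) + rng.normal(size=n)

    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta                                    # residuals, Var(e_i) = (1 - h_ii) sigma^2
    h = np.einsum('ij,ji->i', X, np.linalg.solve(X.T @ X, X.T))

    sigma_hat = np.sqrt(e @ e / (n - p))                # internal estimate of sigma
    t = e / (sigma_hat * np.sqrt(1 - h))                # studentized residuals
    print(t[:5])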

Partial leverage

Partial leverage (PL) is a measure of the contribution of the individual independent variables to the total leverage of each observation. That is, PL is a measure of how $h_{ii}$ changes as a variable is added to the regression model. It is computed as

$\left(\mathrm{PL}_j\right)_i = \dfrac{\left(X_{j\bullet[j]}\right)_i^2}{\sum_{k=1}^{n}\left(X_{j\bullet[j]}\right)_k^2},$

where $j$ is the index of the independent variable, $i$ is the index of the observation, and $X_{j\bullet[j]}$ are the residuals from regressing $X_j$ against the remaining independent variables. Note that the partial leverage is the leverage of the $i$-th point in the partial regression plot for the $j$-th variable. Data points with large partial leverage for an independent variable can exert undue influence on the selection of that variable in automatic regression model building procedures.
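
A small sketch of this computation, with a hypothetical partial_leverage helper that residualizes column $j$ on the remaining columns (including the intercept) before applying the formula above:

    import numpy as np

    def partial_leverage(X, j):
        """Partial leverages for column j: residualize X[:, j] on the other
        columns, then take each squared residual over their sum."""
        others = np.delete(X, j, axis=1)
        coef = np.linalg.lstsq(others, X[:, j], rcond=None)[0]
        r = X[:, j] - others @ coef        # X_{j.[j]}: residuals of X_j on the rest
        return r**2 / np.sum(r**2)

    rng = np.random.default_rng(6)
    X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])
    print(partial_leverage(X, j=2))        # contribution of the third column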

Software implementations

Many programs and statistics packages, such as R and Python, include implementations of leverage.

Language/Program   Function                                              Notes
R                  hat(x, intercept = TRUE) or hatvalues(model, ...)     See https://stat.ethz.ch/R-manual/R-devel/library/stats/html/influence.measures.html
Python             (x * np.linalg.pinv(x).T).sum(-1)                     See https://gist.github.com/gabrieldernbach/ff81b0d826782719c8057eb64a3fcb18

See also

Notes and References

  1. Everitt, B. S. (2002). Cambridge Dictionary of Statistics. Cambridge University Press. ISBN 0-521-81099-X.
  2. Cardinali, C. (June 2013). "Data Assimilation: Observation influence diagnostic of a data assimilation system" (web).
  3. "Prove the relation between Mahalanobis distance and Leverage?". Cross Validated. https://stats.stackexchange.com/q/200566
  4. Kim, M. G. (2004). "Sources of high leverage in linear regression model". Journal of Applied Mathematics and Computing, 16, 509–513. arXiv:2006.04024 [math.ST].
  5. Miller, Rupert G. (September 1974). "An Unbalanced Jackknife". Annals of Statistics, 2(5), 880–891. doi:10.1214/aos/1176342811.
  6. Hayashi, Fumio (2000). Econometrics. Princeton University Press, p. 21.
  7. Young, Alwyn (2019). "Channeling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results". The Quarterly Journal of Economics, 134(2), 567. doi:10.1093/qje/qjy029.
  8. Chatterjee, Samprit; Hadi, Ali S. (August 1986). "Influential Observations, High Leverage Points, and Outliers in Linear Regression". Statistical Science, 1(3), 379–393. doi:10.1214/ss/1177013622.
  9. "regression - Influence functions and OLS". Cross Validated. Retrieved 2020-12-06.