Heteroskedasticity-consistent standard errors explained

The topic of heteroskedasticity-consistent (HC) standard errors arises in statistics and econometrics in the context of linear regression and time series analysis. These are also known as heteroskedasticity-robust standard errors (or simply robust standard errors), Eicker–Huber–White standard errors (also Huber–White standard errors or White standard errors),[1] to recognize the contributions of Friedhelm Eicker,[2] Peter J. Huber,[3] and Halbert White.[4]

In regression and time-series modelling, basic forms of models make use of the assumption that the errors or disturbances u_i have the same variance across all observation points. When this is not the case, the errors are said to be heteroskedastic, or to have heteroskedasticity, and this behaviour is reflected in the residuals \widehat{u}_i estimated from a fitted model. Heteroskedasticity-consistent standard errors are used to allow the fitting of a model that does contain heteroskedastic residuals. The first such approach was proposed by Huber (1967), and further improved procedures have since been produced for cross-sectional data, time-series data and GARCH estimation.

Heteroskedasticity-consistent standard errors that differ from classical standard errors may indicate model misspecification. Substituting heteroskedasticity-consistent standard errors does not resolve this misspecification, which may lead to bias in the coefficients. In most situations, the problem should be found and fixed.[5] Other types of standard error adjustments, such as clustered standard errors or HAC standard errors, may be considered as extensions to HC standard errors.

History

Heteroskedasticity-consistent standard errors were introduced by Friedhelm Eicker,[6][7] and popularized in econometrics by Halbert White.

Problem

Consider the linear regression model for the scalar y:

y = \mathbf{x}^\top \boldsymbol{\beta} + \varepsilon,

where \mathbf{x} is a k × 1 column vector of explanatory variables (features), \boldsymbol{\beta} is a k × 1 column vector of parameters to be estimated, and \varepsilon is the residual error.

The ordinary least squares (OLS) estimator is

\widehat{\boldsymbol{\beta}}_\mathrm{OLS} = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y},

where \mathbf{y} is an n × 1 vector of observations y_i, and \mathbf{X} denotes the n × k matrix of stacked row vectors \mathbf{x}_i^\top observed in the data.

If the sample errors have equal variance \sigma^2 and are uncorrelated, then the least-squares estimate of \boldsymbol{\beta} is BLUE (best linear unbiased estimator), and its variance is estimated with

\hat{V}\left[\widehat{\boldsymbol{\beta}}_\mathrm{OLS}\right] = s^2 (\mathbf{X}^\top \mathbf{X})^{-1}, \quad s^2 = \frac{\sum_i \widehat{\varepsilon}_i^2}{n - k},

where \widehat{\varepsilon}_i = y_i - \mathbf{x}_i^\top \widehat{\boldsymbol{\beta}}_\mathrm{OLS} are the regression residuals.
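The classical formulas above can be computed directly with NumPy. The following sketch uses hypothetical simulated data (the design matrix, coefficients, and seed are illustrative assumptions, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3

# Illustrative design matrix with an intercept column
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)  # homoskedastic errors

# OLS estimate: beta_hat = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Classical variance estimate: s^2 (X'X)^{-1}, with s^2 = RSS / (n - k)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)
V_classical = s2 * XtX_inv
se_classical = np.sqrt(np.diag(V_classical))
```

Under homoskedasticity these standard errors are consistent; the sections below show what replaces them when the equal-variance assumption fails.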

When the error terms do not have constant variance (i.e., the assumption of E[\mathbf{u}\mathbf{u}^\top] = \sigma^2 \mathbf{I}_n is untrue), the OLS estimator loses its desirable properties. The formula for the variance now cannot be simplified:

V\left[\widehat{\boldsymbol{\beta}}_\mathrm{OLS}\right] = V\left[(\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y}\right] = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \boldsymbol{\Sigma} \mathbf{X} (\mathbf{X}^\top \mathbf{X})^{-1},

where \boldsymbol{\Sigma} = V[\mathbf{u}].

While the OLS point estimator remains unbiased, it is not "best" in the sense of having minimum mean square error, and the OLS variance estimator \hat{V}\left[\widehat{\boldsymbol{\beta}}_\mathrm{OLS}\right] does not provide a consistent estimate of the variance of the OLS estimates.

For any non-linear model (for instance logit and probit models), however, heteroskedasticity has more severe consequences: the maximum likelihood estimates of the parameters will be biased (in an unknown direction), as well as inconsistent (unless the likelihood function is modified to correctly take into account the precise form of heteroskedasticity).[8] [9] As pointed out by Greene, “simply computing a robust covariance matrix for an otherwise inconsistent estimator does not give it redemption.”[10]

Solution

If the regression errors \varepsilon_i are independent, but have distinct variances \sigma_i^2, then \boldsymbol{\Sigma} = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2), which can be estimated with \widehat{\sigma}_i^2 = \widehat{\varepsilon}_i^2. This provides White's (1980) estimator, often referred to as HCE (heteroskedasticity-consistent estimator):

\begin{align}
\hat{V}_\text{HCE}\big[\widehat{\boldsymbol{\beta}}_\text{OLS}\big] &= \frac{1}{n} \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \bigg)^{-1} \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \widehat{\varepsilon}_i^2 \bigg) \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \bigg)^{-1} \\
&= (\mathbf{X}^\top \mathbf{X})^{-1} \big(\mathbf{X}^\top \operatorname{diag}(\widehat{\varepsilon}_1^2, \ldots, \widehat{\varepsilon}_n^2) \mathbf{X}\big) (\mathbf{X}^\top \mathbf{X})^{-1},
\end{align}

where as above \mathbf{X} denotes the matrix of stacked row vectors \mathbf{x}_i^\top from the data. The estimator can be derived in terms of the generalized method of moments (GMM).
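White's sandwich formula is a few lines of linear algebra. A minimal NumPy sketch, assuming hypothetical simulated data whose error variance grows with the regressor (so classical and robust standard errors visibly differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(1, 5, size=n)
X = np.column_stack([np.ones(n), x])

# Heteroskedastic errors: standard deviation grows with x
y = 0.5 + 2.0 * x + rng.normal(scale=x, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# White (1980) HC0 sandwich: (X'X)^{-1} (X' diag(e_i^2) X) (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
V_hc0 = XtX_inv @ meat @ XtX_inv
se_hc0 = np.sqrt(np.diag(V_hc0))

# Classical estimator for comparison
s2 = resid @ resid / (n - X.shape[1])
se_classical = np.sqrt(np.diag(s2 * XtX_inv))
```

Production code would typically call an existing implementation (e.g. the `sandwich` package in R or `cov_type="HC0"` in statsmodels) rather than hand-rolling the matrices.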

Also often discussed in the literature (including White's paper) is the covariance matrix \widehat{\boldsymbol{\Omega}}_n of the \sqrt{n}-consistent limiting distribution:

\sqrt{n}(\widehat{\boldsymbol{\beta}}_n - \boldsymbol{\beta}) \xrightarrow{d} \mathcal{N}(0, \boldsymbol{\Omega}),

where

\boldsymbol{\Omega} = E[\mathbf{X}\mathbf{X}^\top]^{-1} \, V[\mathbf{X}\boldsymbol{\varepsilon}] \, E[\mathbf{X}\mathbf{X}^\top]^{-1},

and

\begin{align}
\widehat{\boldsymbol{\Omega}}_n &= \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \bigg)^{-1} \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \widehat{\varepsilon}_i^2 \bigg) \bigg(\frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \bigg)^{-1} \\
&= n (\mathbf{X}^\top \mathbf{X})^{-1} \big(\mathbf{X}^\top \operatorname{diag}(\widehat{\varepsilon}_1^2, \ldots, \widehat{\varepsilon}_n^2) \mathbf{X}\big) (\mathbf{X}^\top \mathbf{X})^{-1}.
\end{align}

Thus,

\widehat{\boldsymbol{\Omega}}_n = n \cdot \hat{V}_\text{HCE}\big[\widehat{\boldsymbol{\beta}}_\text{OLS}\big]

and

\widehat{V}[\mathbf{X}\boldsymbol{\varepsilon}] = \frac{1}{n} \sum_i \mathbf{x}_i \mathbf{x}_i^\top \widehat{\varepsilon}_i^2 = \frac{1}{n} \mathbf{X}^\top \operatorname{diag}(\widehat{\varepsilon}_1^2, \ldots, \widehat{\varepsilon}_n^2) \mathbf{X}.

Precisely which covariance matrix is of concern is a matter of context.
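The finite-sample and limiting-distribution forms differ only by the factor n, which a quick numerical check makes concrete. A sketch with hypothetical simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
resid = y - X @ (XtX_inv @ X.T @ y)
meat = X.T @ (X * resid[:, None] ** 2)  # X' diag(e_i^2) X

# Finite-sample form: (X'X)^{-1} M (X'X)^{-1}
V_hce = XtX_inv @ meat @ XtX_inv

# Limiting-distribution form: (S_n)^{-1} (M/n) (S_n)^{-1}, S_n = X'X / n
Sn_inv = np.linalg.inv(X.T @ X / n)
Omega_hat = Sn_inv @ (meat / n) @ Sn_inv

# The two differ exactly by the factor n
assert np.allclose(Omega_hat, n * V_hce)
```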

Alternative estimators have been proposed in MacKinnon & White (1985) that correct for unequal variances of regression residuals due to different leverage.[11] Unlike the asymptotic White estimator, their estimators are unbiased when the data are homoskedastic.

Of the four widely available options, often denoted HC0–HC3, the HC3 specification appears to work best: tests relying on the HC3 estimator feature better power and closer proximity to the targeted size, especially in small samples. The larger the sample, the smaller the difference between the estimators.[12]
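The HC0–HC3 variants differ only in how the squared residuals are rescaled by the sample size and the leverages (the diagonal of the hat matrix). A minimal NumPy sketch, with a hypothetical simulated example; the function name is illustrative:

```python
import numpy as np

def hc_covariances(X, resid):
    """Return the HC0-HC3 sandwich covariance matrices for OLS."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    # Leverages: diagonal of the hat matrix H = X (X'X)^{-1} X'
    h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)
    e2 = resid ** 2
    weights = {
        "HC0": e2,                    # White's original estimator
        "HC1": e2 * n / (n - k),      # degrees-of-freedom correction
        "HC2": e2 / (1 - h),          # leverage correction
        "HC3": e2 / (1 - h) ** 2,     # stronger leverage correction
    }
    return {name: XtX_inv @ (X.T @ (X * w[:, None])) @ XtX_inv
            for name, w in weights.items()}

# Usage on illustrative simulated data
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(60), rng.normal(size=60)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=60) * np.abs(X[:, 1])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
covs = hc_covariances(X, resid)
```

Since each correction inflates the squared residuals (the leverages lie strictly between 0 and 1), the HC3 variances weakly dominate HC0, which is why HC3 is the more conservative small-sample choice.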

An alternative to explicitly modelling the heteroskedasticity is to use a resampling method such as the wild bootstrap. Because the studentized bootstrap, which standardizes the resampled statistic by its standard error, yields an asymptotic refinement,[13] heteroskedasticity-robust standard errors remain useful even in a bootstrap setting.
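The wild bootstrap resamples by randomly flipping the signs of the fitted residuals, which preserves each observation's error magnitude and hence the heteroskedasticity pattern. A sketch of the Rademacher variant (function name and simulated data are illustrative assumptions):

```python
import numpy as np

def wild_bootstrap_se(X, y, n_boot=999, seed=0):
    """Rademacher wild bootstrap standard errors for OLS coefficients."""
    rng = np.random.default_rng(seed)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    fitted = X @ beta_hat
    resid = y - fitted
    draws = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        # Flip residual signs independently; magnitudes (and thus the
        # heteroskedasticity pattern) are preserved
        v = rng.choice([-1.0, 1.0], size=len(y))
        y_star = fitted + resid * v
        draws[b] = XtX_inv @ X.T @ y_star
    return beta_hat, draws.std(axis=0, ddof=1)

# Usage on illustrative heteroskedastic data
rng = np.random.default_rng(5)
n = 300
x = rng.uniform(1, 4, size=n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 1.5 * x + rng.normal(scale=x, size=n)
beta_hat, se_boot = wild_bootstrap_se(X, y, n_boot=200)
```

A studentized version would bootstrap the t-statistic (each draw divided by its own robust standard error) rather than the coefficient itself.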

Instead of accounting for the heteroskedastic errors, most linear models can be transformed to feature homoskedastic error terms (unless the error term is heteroskedastic by construction, e.g. in a linear probability model). One way to do this is using weighted least squares, which also features improved efficiency properties.
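When the form of the variance is known, the transformation to homoskedastic errors is just a reweighting of rows. A sketch assuming a hypothetical variance structure Var(ε_i) = σ²·x_i; dividing each row by √x_i makes the transformed errors homoskedastic, so OLS on the transformed data is weighted least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.uniform(1, 9, size=n)
X = np.column_stack([np.ones(n), x])
# Errors with variance proportional to x (assumed known structure)
y = 1.0 + 3.0 * x + rng.normal(scale=np.sqrt(x), size=n)

w = 1.0 / np.sqrt(x)              # row weights proportional to 1/sd
Xw, yw = X * w[:, None], y * w    # transformed (weighted) data
beta_wls = np.linalg.lstsq(Xw, yw, rcond=None)[0]
```

The efficiency gain comes from downweighting the noisy observations; in practice the variance structure must itself be estimated (feasible GLS), since it is rarely known exactly.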

See also

Software

Further reading

Notes and References

  1. Kleiber, C.; Zeileis, A. (2006). "Applied Econometrics with R". UseR-2006 conference. Archived from the original on April 22, 2007.
  2. Eicker, Friedhelm (1967). "Limit Theorems for Regression with Unequal and Dependent Errors". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. 5 (1): 59–82.
  3. Huber, Peter J. (1967). "The behavior of maximum likelihood estimates under nonstandard conditions". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability. 5 (1): 221–233.
  4. White, Halbert (1980). "A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity". Econometrica. 48 (4): 817–838. doi:10.2307/1912934.
  5. King, Gary; Roberts, Margaret E. (2015). "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It". Political Analysis. 23 (2): 159–179. doi:10.1093/pan/mpu015.
  6. Eicker, F. (1963). "Asymptotic Normality and Consistency of the Least Squares Estimators for Families of Linear Regressions". The Annals of Mathematical Statistics. 34 (2): 447–456. doi:10.1214/aoms/1177704156.
  7. Eicker, Friedhelm (January 1967). "Limit theorems for regressions with unequal and dependent errors". Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Statistics. 5 (1): 59–83.
  8. Giles, Dave (May 8, 2013). "Robust Standard Errors for Nonlinear Models". Econometrics Beat.
  9. Guggisberg, Michael (2019). "Misspecified Discrete Choice Models and Huber-White Standard Errors". 8 (1). doi:10.1515/jem-2016-0002.
  10. Greene, William H. (2012). Econometric Analysis (Seventh ed.). Boston: Pearson Education. pp. 692–693. ISBN 978-0-273-75356-8.
  11. MacKinnon, James G.; White, Halbert (1985). "Some Heteroskedastic-Consistent Covariance Matrix Estimators with Improved Finite Sample Properties". Journal of Econometrics. 29 (3): 305–325. doi:10.1016/0304-4076(85)90158-7.
  12. Long, J. Scott; Ervin, Laurie H. (2000). "Using Heteroscedasticity Consistent Standard Errors in the Linear Regression Model". The American Statistician. 54 (3): 217–224. doi:10.2307/2685594.
  13. Davison, Anthony C. (2010). Bootstrap Methods and Their Application. Cambridge Univ. Press. ISBN 978-0-521-57391-7.
  14. "EViews 8 Robust Regression".
  15. CovarianceMatrices.jl: Robust Covariance Matrix Estimators. https://github.com/gragusa/CovarianceMatrices.jl
  16. "Heteroskedasticity and autocorrelation consistent covariance estimators". Econometrics Toolbox.
  17. sandwich: Robust Covariance Matrix Estimators. https://cran.r-project.org/web/packages/sandwich/index.html
  18. Kleiber, Christian; Zeileis, Achim (2008). Applied Econometrics with R. New York: Springer. pp. 106–110. ISBN 978-0-387-77316-2.
  19. See online help for _robust option and regress command.
  20. "Robust covariance matrix estimation". Gretl User's Guide, chapter 19.