Wald test
In statistics, the Wald test (named after Abraham Wald) assesses constraints on statistical parameters based on the weighted distance between the unrestricted estimate and its hypothesized value under the null hypothesis, where the weight is the precision of the estimate.[1] [2] Intuitively, the larger this weighted distance, the less likely it is that the constraint is true. While the finite-sample distributions of Wald tests are generally unknown,[3] the test statistic has an asymptotic χ2-distribution under the null hypothesis, a fact that can be used to determine statistical significance.[4]
Together with the Lagrange multiplier test and the likelihood-ratio test, the Wald test is one of three classical approaches to hypothesis testing. An advantage of the Wald test over the other two is that it only requires the estimation of the unrestricted model, which lowers the computational burden as compared to the likelihood-ratio test. However, a major disadvantage is that (in finite samples) it is not invariant to changes in the representation of the null hypothesis; in other words, algebraically equivalent expressions of a non-linear parameter restriction can lead to different values of the test statistic.[5] [6] That is because the Wald statistic is derived from a Taylor expansion,[7] and different ways of writing equivalent nonlinear expressions lead to nontrivial differences in the corresponding Taylor coefficients.[8] Another aberration, known as the Hauck–Donner effect,[9] can occur in binomial models when the estimated (unconstrained) parameter is close to the boundary of the parameter space (for instance, a fitted probability extremely close to zero or one), which results in the Wald statistic no longer being monotonically increasing in the distance between the unconstrained and constrained parameter.[10] [11]
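As a rough numerical illustration of the Hauck–Donner effect (a hypothetical Bernoulli example in plain Python, not taken from the cited sources): for k successes in n trials, the Wald statistic for H0: β = 0 on the log-odds scale β = logit(p) can be written in closed form from the MLE and the observed Fisher information, and it first grows and then shrinks again as the fitted probability approaches one.

    import math

    def wald_stat_logit(k, n):
        """Wald statistic for H0: beta = 0, where beta = logit(p),
        based on k successes in n Bernoulli trials."""
        p_hat = k / n                               # MLE of the success probability
        beta_hat = math.log(p_hat / (1 - p_hat))    # MLE on the log-odds scale
        info = n * p_hat * (1 - p_hat)              # observed Fisher information for beta
        return beta_hat ** 2 * info                 # (beta_hat - 0)^2 / var(beta_hat)

    for k in (70, 80, 90, 95, 99):
        print(k, round(wald_stat_logit(k, n=100), 1))
    # Prints roughly 15.1, 30.7, 43.5, 41.2, 20.9: the statistic stops increasing
    # and eventually falls even though the estimate moves further from the null.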
Mathematical details
Under the Wald test, the estimated \hat\theta that was found as the maximizing argument of the unconstrained likelihood function is compared with a hypothesized value \theta_0. In particular, the squared difference \hat\theta - \theta_0 is weighted by the curvature of the log-likelihood function.
Test on a single parameter
If the hypothesis involves only a single parameter restriction, then the Wald statistic takes the following form:

W = \frac{(\hat\theta - \theta_0)^2}{\operatorname{var}(\hat\theta)}

which under the null hypothesis follows an asymptotic χ2-distribution with one degree of freedom. The square root of the single-restriction Wald statistic can be understood as a (pseudo) t-ratio that is, however, not actually t-distributed except for the special case of linear regression with normally distributed errors.[12] In general, it follows an asymptotic z distribution.[13]

\sqrt{W} = \frac{\hat\theta - \theta_0}{\operatorname{se}(\hat\theta)}
where \operatorname{se}(\hat\theta) is the standard error (SE) of the maximum likelihood estimate (MLE), the square root of the variance. There are several ways to consistently estimate the variance matrix, which in finite samples leads to alternative estimates of standard errors and associated test statistics and p-values.[3] The validity of still getting an asymptotically normal distribution after plugging in the MLE estimator of \theta into the SE relies on Slutsky's theorem.
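As a minimal numerical sketch of the single-restriction statistic (a hypothetical Poisson-rate example using numpy and scipy, not drawn from the references above): the MLE of the rate is the sample mean, its variance is estimated by the inverse Fisher information, and the statistic is referred to a χ2 distribution with one degree of freedom.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.poisson(lam=1.3, size=200)          # hypothetical Poisson sample

    lam_hat = x.mean()                          # MLE of the Poisson rate
    var_hat = lam_hat / len(x)                  # estimated variance of the MLE (inverse Fisher information)
    lam_0 = 1.0                                 # hypothesized value under H0

    W = (lam_hat - lam_0) ** 2 / var_hat        # Wald statistic
    z = (lam_hat - lam_0) / np.sqrt(var_hat)    # its signed square root, the (pseudo) t-ratio

    p_value = stats.chi2.sf(W, df=1)            # asymptotic chi-squared reference, 1 degree of freedom
    print(W, z, p_value)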
Test(s) on multiple parameters
The Wald test can be used to test a single hypothesis on multiple parameters, as well as to test jointly multiple hypotheses on single/multiple parameters. Let \hat\theta_n be our sample estimator of P parameters (i.e., \hat\theta_n is a P \times 1 vector), which is supposed to follow asymptotically a normal distribution with covariance matrix V,

\sqrt{n}(\hat\theta_n - \theta) \xrightarrow{\mathcal{D}} N(0, V).

The test of Q hypotheses on the P parameters is expressed with a Q \times P matrix R:

H_0: R\theta = r
H_1: R\theta \neq r
The distribution of the test statistic under the null hypothesis is
(R\hat\theta_n - r)' [R(\hat{V}_n/n)R']^{-1} (R\hat\theta_n - r)/Q \xrightarrow{\mathcal{D}} F(Q, n-P) \xrightarrow[n \to \infty]{} \chi^2_Q/Q,

which in turn implies

(R\hat\theta_n - r)' [R(\hat{V}_n/n)R']^{-1} (R\hat\theta_n - r) \xrightarrow[n \to \infty]{} \chi^2_Q,

where \hat{V}_n is an estimator of the covariance matrix.[14]
Proof: Suppose \sqrt{n}(\hat\theta_n - \theta) \xrightarrow{\mathcal{D}} N(0, V). Then, by Slutsky's theorem and by the properties of the normal distribution, multiplying by R has distribution:

R\sqrt{n}(\hat\theta_n - \theta) = \sqrt{n}(R\hat\theta_n - r) \xrightarrow{\mathcal{D}} N(0, RVR')

Recalling that a quadratic form of a normal distribution has a chi-squared distribution:

\sqrt{n}(R\hat\theta_n - r)' [RVR']^{-1} \sqrt{n}(R\hat\theta_n - r) \xrightarrow{\mathcal{D}} \chi^2_Q

Rearranging n finally gives:

(R\hat\theta_n - r)' [R(V/n)R']^{-1} (R\hat\theta_n - r) \xrightarrow{\mathcal{D}} \chi^2_Q

If the covariance matrix V is not known a priori and has to be estimated from the data, and if we have a consistent estimator \hat{V} of V such that V^{-1}\hat{V} has a determinant that is distributed \chi^2_{n-P}, then by the independence of the covariance estimator and the equation above, we have:

(R\hat\theta_n - r)' [R(\hat{V}/n)R']^{-1} (R\hat\theta_n - r)/Q \xrightarrow{\mathcal{D}} F(Q, n-P)
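The joint statistic above can be computed directly with numpy; the estimate and covariance below are made-up numbers standing in for output from a fitted model, and the restriction matrix R encodes Q = 2 hypothesized constraints.

    import numpy as np
    from scipy import stats

    n = 500                                         # sample size
    theta_hat = np.array([0.95, 1.80, -0.40])       # hypothetical estimate of P = 3 parameters
    V_hat = np.array([[2.0, 0.3, 0.1],              # hypothetical estimate of the asymptotic covariance V
                      [0.3, 1.5, 0.2],
                      [0.1, 0.2, 1.0]])

    # Q = 2 linear restrictions R theta = r: theta_1 = 1 and theta_2 - theta_3 = 2
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, -1.0]])
    r = np.array([1.0, 2.0])

    diff = R @ theta_hat - r
    cov = R @ (V_hat / n) @ R.T                     # covariance of R theta_hat - r
    W = diff @ np.linalg.solve(cov, diff)           # Wald statistic, asymptotically chi2 with Q df

    Q, P = R.shape
    print(W, stats.chi2.sf(W, df=Q))                # chi-squared p-value
    print(W / Q, stats.f.sf(W / Q, Q, n - P))       # finite-sample F form used above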
Nonlinear hypothesis
In the standard form, the Wald test is used to test linear hypotheses that can be represented by a single matrix R. If one wishes to test a non-linear hypothesis of the form

H_0: c(\theta) = 0
H_1: c(\theta) \neq 0

the test statistic becomes:

c(\hat\theta_n)' \left[c'(\hat\theta_n)(\hat{V}_n/n)c'(\hat\theta_n)'\right]^{-1} c(\hat\theta_n) \xrightarrow{\mathcal{D}} \chi^2_Q

where c'(\hat\theta_n) is the derivative of c evaluated at the sample estimator. This result is obtained using the delta method, which uses a first-order approximation of the variance.
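The same computation for a non-linear restriction, again with made-up estimates; the Jacobian c'(\hat\theta_n) is written out explicitly, mirroring the delta-method formula above.

    import numpy as np
    from scipy import stats

    n = 500
    theta_hat = np.array([0.8, 1.5])                # hypothetical estimates
    V_hat = np.array([[1.2, 0.2],                   # hypothetical asymptotic covariance estimate
                      [0.2, 0.9]])

    def c(t):                                       # single (Q = 1) nonlinear restriction: theta_1 * theta_2 = 1
        return np.array([t[0] * t[1] - 1.0])

    def c_prime(t):                                 # Jacobian of c, a Q x P matrix
        return np.array([[t[1], t[0]]])

    cv = c(theta_hat)
    J = c_prime(theta_hat)
    cov = J @ (V_hat / n) @ J.T                     # delta-method variance of c(theta_hat)
    W = cv @ np.linalg.solve(cov, cv)               # Wald statistic for the nonlinear hypothesis

    print(W, stats.chi2.sf(W, df=1))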
Non-invariance to re-parameterisations
The fact that one uses an approximation of the variance has the drawback that the Wald statistic is not invariant to a non-linear transformation/reparametrisation of the hypothesis: it can give different answers to the same question, depending on how the question is phrased.[15] For example, asking whether R = 1 is the same as asking whether log R = 0; but the Wald statistic for R = 1 is not the same as the Wald statistic for log R = 0 (because there is in general no neat relationship between the standard errors of R and log R, so it needs to be approximated).[16]
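The effect is easy to see numerically. Assuming, purely for illustration, an estimate of 1.5 with standard error 0.4, the delta method gives se(log R) ≈ se(R)/R, and the two algebraically equivalent null hypotheses produce different Wald statistics:

    import math

    R_hat, se_R = 1.5, 0.4          # hypothetical estimate and standard error

    # Wald statistic for H0: R = 1
    W1 = (R_hat - 1.0) ** 2 / se_R ** 2

    # Wald statistic for the algebraically equivalent H0: log R = 0,
    # with se(log R_hat) approximated by the delta method as se(R_hat) / R_hat
    se_logR = se_R / R_hat
    W2 = math.log(R_hat) ** 2 / se_logR ** 2

    print(W1, W2)                   # 1.5625 versus roughly 2.31: same hypothesis, different statistics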
Alternatives to the Wald test
There exist several alternatives to the Wald test, namely the likelihood-ratio test and the Lagrange multiplier test (also known as the score test). Robert F. Engle showed that these three tests (the Wald test, the likelihood-ratio test and the Lagrange multiplier test) are asymptotically equivalent.[17] Although they are asymptotically equivalent, in finite samples they can disagree enough to lead to different conclusions.
There are several reasons to prefer the likelihood ratio test or the Lagrange multiplier to the Wald test:[18] [19] [20]
- Non-invariance: As argued above, the Wald test is not invariant under reparametrization, while the likelihood ratio tests will give exactly the same answer whether we work with R, log R or any other monotonic transformation of R.
- The Wald test uses two approximations (that we know the standard error or Fisher information, and the maximum likelihood estimate), whereas the likelihood ratio test depends only on the ratio of the likelihood functions under the null hypothesis and the alternative hypothesis.
- The Wald test requires an estimate using the maximizing argument, corresponding to the "full" model. In some cases, the model is simpler under the null hypothesis, so that one might prefer to use the score test (also called the Lagrange multiplier test), which has the advantage that it can be formulated in situations where the variability of the maximizing element is difficult to estimate, or where computing the estimate according to the maximum likelihood estimator is difficult; e.g. the Cochran–Mantel–Haenszel test is a score test.[21]
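For a concrete sense of how far the three tests can diverge in finite samples, the sketch below compares the Wald, likelihood-ratio and score statistics for H0: p = 0.5 in a binomial model (a textbook-style illustration with made-up data, not taken from the references above):

    import math
    from scipy import stats

    def three_tests(k, n, p0=0.5):
        """Wald, likelihood-ratio and score (Lagrange multiplier) statistics
        for H0: p = p0 based on k successes in n Bernoulli trials."""
        p_hat = k / n
        wald = (p_hat - p0) ** 2 / (p_hat * (1 - p_hat) / n)    # variance evaluated at the MLE
        score = (p_hat - p0) ** 2 / (p0 * (1 - p0) / n)         # variance evaluated under H0
        loglik = lambda p: k * math.log(p) + (n - k) * math.log(1 - p)
        lr = 2 * (loglik(p_hat) - loglik(p0))                   # likelihood-ratio statistic
        return wald, lr, score

    for stat in three_tests(k=18, n=25):
        print(round(stat, 3), round(stats.chi2.sf(stat, df=1), 4))
    # All three are asymptotically chi2(1), yet with n = 25 they give noticeably
    # different values (roughly 6.00, 5.01 and 4.84) and different p-values.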
Further reading
- Greene, William H. (2012). Econometric Analysis (Seventh international ed.). Boston: Pearson. pp. 155–161. ISBN 978-0-273-75356-8.
- Kmenta, Jan (1986). Elements of Econometrics (Second ed.). New York: Macmillan. pp. 492–493. ISBN 0-02-365070-2.
- Thomas, R. L. (1993). Introductory Econometrics: Theory and Application (Second ed.). London: Longman. pp. 73–77. ISBN 0-582-07378-2.
Notes and References
- Fahrmeir, Ludwig; Kneib, Thomas; Lang, Stefan; Marx, Brian (2013). Regression: Models, Methods and Applications. Berlin: Springer. p. 663. ISBN 978-3-642-34332-2.
- Ward, Michael D.; Ahlquist, John S. (2018). Maximum Likelihood for Social Science: Strategies for Analysis. Cambridge University Press. p. 36. ISBN 978-1-316-63682-4.
- Martin, Vance; Hurn, Stan; Harris, David (2013). Econometric Modelling with Time Series: Specification, Estimation and Testing. Cambridge University Press. ISBN 978-0-521-13981-6.
- Davidson, Russell; MacKinnon, James G. (1993). "The Method of Maximum Likelihood: Fundamental Concepts and Notation". Estimation and Inference in Econometrics. New York: Oxford University Press. p. 89. ISBN 0-19-506011-3.
- Gregory, Allan W.; Veall, Michael R. (1985). "Formulating Wald Tests of Nonlinear Restrictions". Econometrica. 53 (6): 1465–1468. doi:10.2307/1913221. JSTOR 1913221.
- Phillips, Peter C. B.; Park, Joon Y. (1988). "On the Formulation of Wald Tests of Nonlinear Restrictions". Econometrica. 56 (5): 1065–1083. doi:10.2307/1911359. JSTOR 1911359.
- Hayashi, Fumio (2000). Econometrics. Princeton: Princeton University Press. pp. 489–491. ISBN 1-4008-2383-8.
- Lafontaine, Francine; White, Kenneth J. (1986). "Obtaining Any Wald Statistic You Want". Economics Letters. 21 (1): 35–40. doi:10.1016/0165-1765(86)90117-5.
- Hauck, Walter W. Jr.; Donner, Allan (1977). "Wald's Test as Applied to Hypotheses in Logit Analysis". Journal of the American Statistical Association. 72 (360a): 851–853. doi:10.1080/01621459.1977.10479969.
- King, Maxwell L.; Goh, Kim-Leng (2002). "Improvements to the Wald Test". Handbook of Applied Econometrics and Statistical Inference. New York: Marcel Dekker. pp. 251–276. ISBN 0-8247-0652-8. https://books.google.com/books?id=uJprdoXEb24C&pg=PA251
- Yee, Thomas William (2022). "On the Hauck–Donner Effect in Wald Tests: Detection, Tipping Points, and Parameter Space Characterization". Journal of the American Statistical Association. 117 (540): 1763–1774. doi:10.1080/01621459.2021.1886936. arXiv:2001.08431.
- Cameron, A. Colin; Trivedi, Pravin K. (2005). Microeconometrics: Methods and Applications. New York: Cambridge University Press. p. 137. ISBN 0-521-84805-9.
- Davidson, Russell; MacKinnon, James G. (1993). "The Method of Maximum Likelihood: Fundamental Concepts and Notation". Estimation and Inference in Econometrics. New York: Oxford University Press. p. 89. ISBN 0-19-506011-3.
- Harrell, Frank E. Jr. (2001). Regression Modeling Strategies. New York: Springer-Verlag. Section 9.3.1. ISBN 0-387-95232-2.
- Fears, Thomas R.; Benichou, Jacques; Gail, Mitchell H. (1996). "A reminder of the fallibility of the Wald statistic". The American Statistician. 50 (3): 226–227. doi:10.1080/00031305.1996.10474384.
- Critchley, Frank; Marriott, Paul; Salmon, Mark (1996). "On the Differential Geometry of the Wald Test with Nonlinear Restrictions". Econometrica. 64 (5): 1213–1222. doi:10.2307/2171963. JSTOR 2171963. hdl:1814/524.
- Engle, Robert F. (1983). "Wald, Likelihood Ratio, and Lagrange Multiplier Tests in Econometrics". In Intriligator, M. D.; Griliches, Z. (eds.). Handbook of Econometrics. Vol. II. Elsevier. pp. 796–801. ISBN 978-0-444-86185-6.
- Harrell, Frank E. Jr. (2001). Regression Modeling Strategies. New York: Springer-Verlag. Section 9.3.3. ISBN 0-387-95232-2.
- Collett, David (1994). Modelling Survival Data in Medical Research. London: Chapman & Hall. ISBN 0-412-44880-7.
- Pawitan, Yudi (2001). In All Likelihood. New York: Oxford University Press. ISBN 0-19-850765-8.
- Agresti, Alan (2002). Categorical Data Analysis (2nd ed.). Wiley. p. 232. ISBN 0-471-36093-7.