Score test should not be confused with Test score.
In statistics, the score test assesses constraints on statistical parameters based on the gradient of the likelihood function—known as the score—evaluated at the hypothesized parameter value under the null hypothesis. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. While the finite-sample distributions of score tests are generally unknown, they have an asymptotic $\chi^2$ distribution under the null hypothesis, as first proved by C. R. Rao in 1948,[1] a fact that can be used to determine statistical significance.
Since function maximization subject to equality constraints is most conveniently done using a Lagrangean expression of the problem, the score test can be equivalently understood as a test of the magnitude of the Lagrange multipliers associated with the constraints where, again, if the constraints are non-binding at the maximum likelihood, the vector of Lagrange multipliers should not differ from zero by more than sampling error. The equivalence of these two approaches was first shown by S. D. Silvey in 1959,[2] which led to the name Lagrange multiplier test that has become more commonly used, particularly in econometrics, since Breusch and Pagan's much-cited 1980 paper.[3]
The main advantage of the score test over the Wald test and likelihood-ratio test is that the score test only requires the computation of the restricted estimator.[4] This makes testing feasible when the unconstrained maximum likelihood estimate is a boundary point in the parameter space. Further, because the score test only requires the estimation of the likelihood function under the null hypothesis, it is less specific than the likelihood ratio test about the alternative hypothesis.[5]
Let $L$ be the likelihood function which depends on a univariate parameter $\theta$ and let $x$ be the data. The score $U(\theta)$ is defined as

$$U(\theta) = \frac{\partial \log L(\theta \mid x)}{\partial \theta}.$$
The Fisher information is[6]

$$I(\theta) = -\operatorname{E}\left[\left. \frac{\partial^2}{\partial \theta^2} \log f(X; \theta) \,\right|\, \theta \right],$$

where $f$ is the probability density function of $X$.
The statistic to test $H_0 : \theta = \theta_0$ is

$$S(\theta_0) = \frac{U(\theta_0)^2}{I(\theta_0)},$$

which has an asymptotic $\chi^2_1$ distribution under $H_0$.
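As a minimal sketch of the scalar statistic above, the score and Fisher information have closed forms for a binomial sample; the function name and the numbers below are illustrative, not from the source.

```python
def score_test_binomial(x, n, p0):
    """Score test of H0: p = p0 given x successes in n Bernoulli trials.

    Score:              U(p0) = x/p0 - (n - x)/(1 - p0)
    Fisher information: I(p0) = n / (p0 * (1 - p0))
    S = U(p0)**2 / I(p0) is asymptotically chi-squared with 1 df under H0.
    """
    u = x / p0 - (n - x) / (1 - p0)
    info = n / (p0 * (1 - p0))
    return u ** 2 / info

# The algebra collapses to (x - n*p0)**2 / (n*p0*(1 - p0)); for example,
# 520 successes in 1000 trials against p0 = 0.5 gives S = 1.6, well below
# the 5% critical value 3.841 of the chi-squared distribution with 1 df.
s = score_test_binomial(520, 1000, 0.5)
```

Note that only the restricted value $p_0$ enters the computation; no unconstrained maximum likelihood estimate is needed.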
Note that some texts use an alternative notation, in which the statistic $S^*(\theta) = \sqrt{S(\theta)}$ is tested against a normal distribution. This approach is equivalent and gives identical results.

Consider the test that rejects the null hypothesis when

$$\left( \frac{\partial \log L(\theta \mid x)}{\partial \theta} \right)_{\theta = \theta_0} \geq C,$$

where $L$ is the likelihood function, $\theta_0$ is the value of the parameter of interest under the null hypothesis, and $C$ is a constant set depending on the size of the test desired, that is, the probability of rejecting $H_0$ if $H_0$ is true (see Type I error).
The score test is the most powerful test for small deviations from $H_0$. To see this, consider testing $\theta = \theta_0$ versus $\theta = \theta_0 + h$. By the Neyman–Pearson lemma, the most powerful test has the form

$$\frac{L(\theta_0 + h \mid x)}{L(\theta_0 \mid x)} \geq K;$$
Taking the log of both sides yields
$$\log L(\theta_0 + h \mid x) - \log L(\theta_0 \mid x) \geq \log K.$$
The score test follows on making the substitution (by Taylor series expansion)

$$\log L(\theta_0 + h \mid x) \approx \log L(\theta_0 \mid x) + h \times \left( \frac{\partial \log L(\theta \mid x)}{\partial \theta} \right)_{\theta = \theta_0}$$

and identifying the $C$ above with $\log K$.
If the null hypothesis is true, the likelihood ratio test, the Wald test, and the score test are asymptotically equivalent tests of hypotheses.[8] [9] When testing nested models, the statistics for each test then converge to a chi-squared distribution with degrees of freedom equal to the difference in degrees of freedom in the two models. If the null hypothesis is not true, however, the statistics converge to a noncentral chi-squared distribution with possibly different noncentrality parameters.
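The asymptotic equivalence can be illustrated numerically. The sketch below computes all three statistics for a single binomial proportion; the function name and numbers are illustrative, not from the source.

```python
import math

def binomial_trio(x, n, p0):
    """Score, Wald, and likelihood-ratio statistics for H0: p = p0 in a
    binomial model; all three are asymptotically chi-squared with 1 df."""
    p_hat = x / n
    score = (x - n * p0) ** 2 / (n * p0 * (1 - p0))
    wald = (p_hat - p0) ** 2 / (p_hat * (1 - p_hat) / n)
    lrt = 2 * (x * math.log(p_hat / p0)
               + (n - x) * math.log((1 - p_hat) / (1 - p0)))
    return score, wald, lrt

# For 520 successes in 1000 trials against p0 = 0.5, the three statistics
# nearly coincide (roughly 1.60 each), as the asymptotic theory predicts.
score, wald, lrt = binomial_trio(520, 1000, 0.5)
```

With smaller samples or larger deviations from the null, the three statistics can diverge noticeably even though they share the same limiting distribution.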
A more general score test can be derived when there is more than one parameter. Suppose that $\widehat{\theta}_0$ is the maximum likelihood estimate of $\theta$ under the null hypothesis $H_0$, while $U$ and $I$ are respectively the score vector and the Fisher information matrix. Then

$$T(\widehat{\theta}_0) = U^{\mathsf{T}}(\widehat{\theta}_0)\, I^{-1}(\widehat{\theta}_0)\, U(\widehat{\theta}_0) \sim \chi^2_k$$

asymptotically under $H_0$, where $k$ is the number of constraints imposed by the null hypothesis and

$$U(\widehat{\theta}_0) = \frac{\partial \log L(\widehat{\theta}_0 \mid x)}{\partial \theta}$$

and

$$I(\widehat{\theta}_0) = -\operatorname{E}\left( \frac{\partial^2 \log L(\widehat{\theta}_0 \mid x)}{\partial \theta \, \partial \theta'} \right).$$
This can be used to test $H_0$.
The actual formula for the test statistic depends on which estimator of the Fisher information matrix is being used.[10]
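As a sketch of the multiparameter case, consider independent Poisson samples, one per component of the parameter vector, using the expected Fisher information as the estimator; the function name and data are illustrative, not from the source.

```python
import numpy as np

def score_test_poisson(samples, lam0):
    """Multiparameter score test of H0: lambda_j = lam0_j for independent
    Poisson samples, one sample per component of the parameter vector.

    For sample j with n_j observations summing to s_j:
        score vector  U_j    = s_j / lam0_j - n_j
        expected info I_{jj} = n_j / lam0_j   (diagonal by independence)
    T = U' I^{-1} U is asymptotically chi-squared with k df under H0.
    """
    lam0 = np.asarray(lam0, dtype=float)
    n = np.array([len(smp) for smp in samples], dtype=float)
    s = np.array([float(np.sum(smp)) for smp in samples])
    u = s / lam0 - n                  # score evaluated at the null value
    info = np.diag(n / lam0)          # expected Fisher information matrix
    return float(u @ np.linalg.inv(info) @ u)

# Two components (k = 2): T = 4.0 here, below the 5% critical
# value 5.991 of the chi-squared distribution with 2 df.
t = score_test_poisson([[4, 4], [1, 1]], [2.0, 1.0])
```

Only the restricted parameter value enters the score and information, mirroring the main computational advantage of the test noted above.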
In many situations, the score statistic reduces to another commonly used statistic.[11]
In linear regression, the Lagrange multiplier test can be expressed as a function of the F-test.[12]
When the data follows a normal distribution, the score statistic is the same as the t statistic.
When the data consists of binary observations, the score statistic is the same as the chi-squared statistic in the Pearson's chi-squared test.
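The last equivalence can be checked numerically for a single binomial proportion; a minimal sketch with illustrative function names:

```python
def pearson_chi2_binary(x, n, p0):
    """Pearson chi-squared statistic comparing observed counts (x, n - x)
    with expected counts (n*p0, n*(1 - p0))."""
    e1, e2 = n * p0, n * (1 - p0)
    return (x - e1) ** 2 / e1 + ((n - x) - e2) ** 2 / e2

def score_stat_binary(x, n, p0):
    """Score statistic for H0: p = p0; algebraically the same quantity."""
    return (x - n * p0) ** 2 / (n * p0 * (1 - p0))

# The two agree to floating-point precision; e.g. with x = 30, n = 100,
# p0 = 0.4, both equal 100/24, about 4.167.
```

The agreement is exact algebraically: summing $(O - E)^2 / E$ over the two cells reduces to $(x - np_0)^2 / \bigl(np_0(1 - p_0)\bigr)$.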