In statistics, G-tests are likelihood-ratio or maximum likelihood statistical significance tests that are increasingly being used in situations where chi-squared tests were previously recommended.[1]
The general formula for G is
$$G = 2\sum_{i} O_i \ln\left(\frac{O_i}{E_i}\right),$$
where $O_i$ is the observed count in a cell, $E_i$ is the expected count under the null hypothesis, $\ln$ denotes the natural logarithm, and the sum is taken over all non-empty cells. The resulting $G$ is chi-squared distributed.
Furthermore, the total observed count should be equal to the total expected count:
$$\sum_i O_i = \sum_i E_i = N,$$
where $N$ is the total number of observations.
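As an illustration, the statistic can be computed directly from this definition. The following Python sketch uses made-up counts:

```python
# Minimal sketch: compute the G statistic from observed and expected counts.
# The counts below are hypothetical.
import numpy as np

observed = np.array([250, 300, 450])     # O_i: observed cell counts
expected = np.array([300, 300, 400])     # E_i: expected counts under the null

assert observed.sum() == expected.sum()  # total observed = total expected = N
G = 2.0 * np.sum(observed * np.log(observed / expected))
print(G)  # about 14.84 for these counts
```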
We can derive the value of the G-test from the log-likelihood ratio test where the underlying model is a multinomial model.
Suppose we had a sample $x = (x_1, \ldots, x_m)$ where each $x_i$ is the number of times that an object of type $i$ was observed. Furthermore, let $n = \sum_{i=1}^m x_i$ be the total number of objects observed. If we assume that the underlying model is multinomial, then the test statistic is defined by
$$-2\ln\left(\frac{L(\tilde{\theta} \mid x)}{L(\hat{\theta} \mid x)}\right),$$
where $\tilde{\theta}$ is the null hypothesis and $\hat{\theta}$ is the maximum likelihood estimate (MLE) of the parameters given the data. For the multinomial model, the MLE is $\hat{\theta}_i = x_i / n$, while each null-hypothesis parameter may be written $\tilde{\theta}_i = E_i / n$. Substituting these into the likelihood ratio, writing $O_i$ for $x_i$, and simplifying gives
$$\begin{aligned} G &= -2\sum_{i=1}^m O_i \ln\left(\frac{E_i}{O_i}\right) \\ &= 2\sum_{i=1}^m O_i \ln\left(\frac{O_i}{E_i}\right). \end{aligned}$$
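As a numerical check of this derivation, the sketch below (with made-up counts) verifies that the multinomial log-likelihood ratio and the G formula agree; the multinomial coefficient cancels in the ratio:

```python
# Sketch: verify G = -2 ln( L(theta_0 | x) / L(theta_hat | x) ) numerically.
import numpy as np
from scipy.stats import multinomial

x = np.array([38, 52, 110])            # hypothetical observed counts
n = x.sum()
theta_hat = x / n                      # MLE of multinomial probabilities
theta_0 = np.array([0.25, 0.25, 0.5])  # null-hypothesis probabilities

llr = -2 * (multinomial.logpmf(x, n, theta_0)
            - multinomial.logpmf(x, n, theta_hat))
E = n * theta_0                        # expected counts under the null
G = 2 * np.sum(x * np.log(x / E))
print(np.isclose(llr, G))              # True: the two expressions agree
```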
Heuristically, one can imagine $O_i$ as continuous and approaching zero, in which case $O_i \ln O_i \to 0$, so terms with zero observations can simply be dropped. However, the expected count in each cell must be strictly positive ($E_i > 0$ for all $i$) for the method to apply.
Given the null hypothesis that the observed frequencies result from random sampling from a distribution with the given expected frequencies, the distribution of G is approximately a chi-squared distribution, with the same number of degrees of freedom as in the corresponding chi-squared test.
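For example, a p-value can be obtained from the chi-squared survival function. The sketch below assumes a goodness-of-fit setting with hypothetical counts and $m - 1$ degrees of freedom:

```python
# Sketch: p-value for a goodness-of-fit G-test via the chi-squared
# approximation. Counts are hypothetical.
import numpy as np
from scipy.stats import chi2

observed = np.array([250, 300, 450])
expected = np.array([300, 300, 400])

G = 2 * np.sum(observed * np.log(observed / expected))
df = len(observed) - 1        # degrees of freedom, as in the chi-squared test
p_value = chi2.sf(G, df)      # survival function = 1 - CDF
print(G, p_value)
```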
For very small samples the multinomial test for goodness of fit, Fisher's exact test for contingency tables, or even Bayesian hypothesis selection are preferable to the G-test.[2] McDonald recommends always using an exact test (exact test of goodness-of-fit, Fisher's exact test) if the total sample size is less than 1000.
There is nothing magical about a sample size of 1000, it's just a nice round number that is well within the range where an exact test, chi-square test, and G–test will give almost identical values. Spreadsheets, web-page calculators, and SAS shouldn't have any problem doing an exact test on a sample size of 1000.
— John H. McDonald[2]
G-tests have been recommended at least since the 1981 edition of Biometry, a statistics textbook by Robert R. Sokal and F. James Rohlf.[3]
The commonly used chi-squared tests for goodness of fit to a distribution and for independence in contingency tables are in fact approximations of the log-likelihood ratio on which the G-tests are based.[4]
The general formula for Pearson's chi-squared test statistic is
$$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}.$$
The approximation of G by chi-squared is obtained by a second-order Taylor expansion of the natural logarithm around 1 (see the derivation below). The approximation $G \approx \chi^2$ holds when the observed counts $O_i$ are close to the expected counts $E_i$. When the difference is large, however, the $\chi^2$ approximation begins to break down: the effects of outliers in the data become more pronounced, which is why $\chi^2$ tests can fail in situations with little data.
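The sketch below (hypothetical counts) illustrates this behaviour: for a small deviation from the expected counts the two statistics nearly coincide, while for a large deviation they differ noticeably:

```python
# Sketch: G vs. Pearson's chi-squared for small and large deviations.
import numpy as np

def g_stat(o, e):
    return 2 * np.sum(o * np.log(o / e))

def chi2_stat(o, e):
    return np.sum((o - e) ** 2 / e)

expected = np.array([100.0, 100.0])
for observed in (np.array([105.0, 95.0]),   # small deviation
                 np.array([160.0, 40.0])):  # large deviation
    print(g_stat(observed, expected), chi2_stat(observed, expected))
# small deviation: ~0.500 vs 0.5 (nearly identical);
# large deviation: ~77.1 vs 72 (noticeably different).
```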
For samples of a reasonable size, the G-test and the chi-squared test will lead to the same conclusions. However, the approximation to the theoretical chi-squared distribution for the G-test is better than for the Pearson's chi-squared test.[5] In cases where $O_i > 2 \cdot E_i$ for some cell, the G-test is always better than the chi-squared test.
For testing goodness-of-fit the G-test is infinitely more efficient than the chi-squared test in the sense of Bahadur, but the two tests are equally efficient in the sense of Pitman or in the sense of Hodges and Lehmann.[6][7]
Consider
$$G = 2\sum_i O_i \ln\left(\frac{O_i}{E_i}\right),$$
and let $O_i = E_i + \delta_i$ with $\sum_i \delta_i = 0$, so that the total number of counts remains the same. Upon substitution we find
$$G = 2\sum_i (E_i + \delta_i) \ln\left(1 + \frac{\delta_i}{E_i}\right).$$
A Taylor expansion around $1 + \frac{\delta_i}{E_i}$ can be performed using $\ln(1 + x) = x - \tfrac{1}{2}x^2 + \mathcal{O}(x^3)$. The result is
$$G = 2\sum_i (E_i + \delta_i) \left(\frac{\delta_i}{E_i} - \frac{1}{2}\frac{\delta_i^2}{E_i^2} + \mathcal{O}\left(\delta_i^3\right)\right),$$
and distributing terms we find
$$G = 2\sum_i \left(\delta_i + \frac{1}{2}\frac{\delta_i^2}{E_i}\right) + \mathcal{O}\left(\delta_i^3\right).$$
Finally, using the fact that $\sum_i \delta_i = 0$ and substituting $\delta_i = O_i - E_i$, we arrive at
$$G \approx \sum_i \frac{(O_i - E_i)^2}{E_i}.$$
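A quick numerical check of this argument (with an arbitrary perturbation $\delta$ that sums to zero) shows the gap between G and the chi-squared statistic shrinking cubically as $\delta$ shrinks:

```python
# Sketch: with O_i = E_i + delta_i and sum(delta_i) = 0, the difference
# between G and sum((O_i - E_i)^2 / E_i) vanishes like delta**3.
import numpy as np

E = np.array([40.0, 60.0, 100.0])
for scale in (1.0, 0.1, 0.01):
    delta = scale * np.array([3.0, -5.0, 2.0])  # perturbation summing to zero
    O = E + delta
    G = 2 * np.sum(O * np.log(O / E))
    chi2_approx = np.sum(delta ** 2 / E)
    print(scale, G - chi2_approx)  # difference shrinks roughly like scale**3
```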
The G-test statistic is proportional to the Kullback–Leibler divergence of the theoretical distribution from the empirical distribution:
$$\begin{aligned} G &= 2\sum_i O_i \ln\left(\frac{O_i}{E_i}\right) = 2N \sum_i o_i \ln\left(\frac{o_i}{e_i}\right) \\ &= 2N\, D_{\mathrm{KL}}(o \| e), \end{aligned}$$
where $N$ is the total number of observations, and $o_i = O_i / N$ and $e_i = E_i / N$ are the empirical and theoretical frequencies, respectively.
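This identity can be checked numerically, for instance with SciPy's entropy function, which returns the Kullback–Leibler divergence when given two distributions (the counts below are made up):

```python
# Sketch of the identity G = 2 * N * D_KL(o || e).
import numpy as np
from scipy.stats import entropy

observed = np.array([250, 300, 450])
expected = np.array([300, 300, 400])
N = observed.sum()

o = observed / N                 # empirical frequencies o_i
e = expected / N                 # theoretical frequencies e_i
G_kl = 2 * N * entropy(o, e)     # entropy(p, q) = sum p * log(p / q), in nats
G = 2 * np.sum(observed * np.log(observed / expected))
print(np.isclose(G, G_kl))       # True
```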
For analysis of contingency tables the value of G can also be expressed in terms of mutual information.
Let
$$N = \sum_{ij} O_{ij}\,, \quad \pi_{ij} = \frac{O_{ij}}{N}\,, \quad \pi_{i.} = \frac{\sum_j O_{ij}}{N}\,, \quad \pi_{.j} = \frac{\sum_i O_{ij}}{N}\,.$$
Then G can be expressed in several alternative forms:
$$G = 2 \cdot N \cdot \sum_{ij} \pi_{ij} \left(\ln(\pi_{ij}) - \ln(\pi_{i.}) - \ln(\pi_{.j})\right),$$
$$G = 2 \cdot N \cdot \left[H(r) + H(c) - H(r,c)\right],$$
$$G = 2 \cdot N \cdot \operatorname{MI}(r,c),$$
where the entropy of a discrete random variable $X$ is defined as
$$H(X) = -\sum_x p(x) \log p(x),$$
and where $\operatorname{MI}(r,c) = H(r) + H(c) - H(r,c)$ is the mutual information between the row vector r and the column vector c of the contingency table.
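The entropy form can be verified directly. The sketch below uses a made-up 2×2 table and natural-log entropies:

```python
# Sketch: G = 2 * N * (H(r) + H(c) - H(r, c)) for a contingency table.
import numpy as np

O = np.array([[30.0, 10.0],
              [15.0, 45.0]])           # hypothetical 2x2 contingency table
N = O.sum()
pi = O / N                             # joint proportions pi_ij
pi_r = pi.sum(axis=1)                  # row marginals pi_i.
pi_c = pi.sum(axis=0)                  # column marginals pi_.j

H = lambda p: -np.sum(p * np.log(p))   # entropy in nats (assumes no zero cells)
G_mi = 2 * N * (H(pi_r) + H(pi_c) - H(pi))

# Same value from the direct formula with E_ij = N * pi_i. * pi_.j:
E = N * np.outer(pi_r, pi_c)
G = 2 * np.sum(O * np.log(O / E))
print(np.isclose(G, G_mi))             # True
```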
It can also be shown that the inverse document frequency weighting commonly used for text retrieval is an approximation of G applicable when the row sum for the query is much smaller than the row sum for the remainder of the corpus. Similarly, the result of Bayesian inference applied to a choice of single multinomial distribution for all rows of the contingency table taken together versus the more general alternative of a separate multinomial per row produces results very similar to the G statistic.
Implementations of the G-test are available in several statistical packages:
- In R, the g.test function works exactly like chisq.test from base R. R also has the likelihood.test function in the Deducer package. Note: Fisher's G-test in the GeneCycle Package of the R programming language (fisher.g.test) does not implement the G-test as described in this article, but rather Fisher's exact test of Gaussian white-noise in a time series.[10]
- Another R implementation provides Gstat for the standard G statistic and the associated p-value, and Gstatindep for the G statistic applied to comparing joint and product distributions to test independence.
- In SAS, one can conduct a G-test by applying the /chisq option after the proc freq command.[11]
- In Stata, one can conduct a G-test by applying the lr option after the tabulate command.
- In Java, use org.apache.commons.math3.stat.inference.GTest.[12]
- In Python, use scipy.stats.power_divergence with lambda_=0 (see the sketch below).[13]
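A minimal usage of the SciPy interface, with hypothetical counts; lambda_=0 selects the log-likelihood-ratio (G) statistic in the Cressie–Read family:

```python
# Sketch: G-test via scipy.stats.power_divergence (lambda_=0 gives G).
from scipy.stats import power_divergence

stat, p = power_divergence(f_obs=[250, 300, 450],
                           f_exp=[300, 300, 400],
                           lambda_=0)
print(stat, p)   # G statistic and its chi-squared-approximation p-value
```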