In statistics, extensions of Fisher's method are a group of approaches that allow approximately valid statistical inferences to be made when the assumptions required for the direct application of Fisher's method are not valid. Fisher's method is a way of combining the information in the p-values from different statistical tests so as to form a single overall test: this method requires that the individual test statistics (or, more immediately, their resulting p-values) be statistically independent.
A principal limitation of Fisher's method is that it is designed to combine only independent p-values, which makes it unreliable for combining dependent p-values. To overcome this limitation, a number of methods have been developed to extend its utility.
Fisher showed that minus twice the log-sum of k independent p-values follows a χ²-distribution with 2k degrees of freedom:[1] [2]
X = -2\sum_{i=1}^{k} \log_e(p_i) \sim \chi^2(2k).
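As a concrete illustration, the following is a minimal sketch of Fisher's method in Python (the function name and the use of NumPy/SciPy are choices of this example, not part of the method):

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values):
    """Combine independent p-values with Fisher's method."""
    p = np.asarray(p_values, dtype=float)
    x = -2.0 * np.sum(np.log(p))       # X = -2 * sum_i log(p_i)
    return chi2.sf(x, df=2 * p.size)   # P[chi2(2k) > X]

print(fisher_combine([0.01, 0.20, 0.30]))  # combined p-value
```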
In the case that these p-values are not independent, Brown proposed approximating X using a scaled χ²-distribution, cχ²(k′), with k′ degrees of freedom.
The mean and variance of this scaled χ² variable are:

\operatorname{E}[c\chi^2(k')] = ck', \qquad \operatorname{Var}[c\chi^2(k')] = 2c^2 k',

where

c = \operatorname{Var}(X) / (2\operatorname{E}[X]) \quad \text{and} \quad k' = 2(\operatorname{E}[X])^2 / \operatorname{Var}(X).
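The following is a minimal sketch of Brown's moment-matching step, assuming Var(X) has already been derived from the known covariance structure of the tests (that derivation is omitted, and `var_X` is simply taken as an input here):

```python
import numpy as np
from scipy.stats import chi2

def brown_combine(p_values, var_X):
    """Two-moment (c, k') matching for X = -2 * sum_i log(p_i)."""
    p = np.asarray(p_values, dtype=float)
    x = -2.0 * np.sum(np.log(p))
    mean_X = 2.0 * p.size               # E[X] = 2k; each -2*log(p_i) is chi2(2) marginally
    c = var_X / (2.0 * mean_X)          # c = Var(X) / (2 E[X])
    k_prime = 2.0 * mean_X**2 / var_X   # k' = 2 (E[X])^2 / Var(X)
    return chi2.sf(x / c, df=k_prime)   # P[c * chi2(k') > X] = P[chi2(k') > X/c]
```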
See main article: harmonic mean p-value. The harmonic mean p-value offers an alternative to Fisher's method for combining p-values when the dependency structure is unknown but the tests cannot be assumed to be independent.[3] [4]
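As a rough sketch, the unweighted harmonic mean of k p-values can be computed as below; note that turning this statistic into a calibrated significance level requires the asymptotic null distribution discussed in the main article, which is not shown here:

```python
import numpy as np

def harmonic_mean_p(p_values):
    """Unweighted harmonic mean of k p-values: k / sum_i (1 / p_i)."""
    p = np.asarray(p_values, dtype=float)
    return p.size / np.sum(1.0 / p)

print(harmonic_mean_p([0.01, 0.20, 0.30]))  # harmonic mean statistic
```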
Kost's method, a related t approximation, requires the test statistics' covariance structure to be known only up to a scalar multiplicative constant.
The Cauchy combination test is conceptually similar to Fisher's method: it computes a sum of transformed p-values. Unlike Fisher's method, which uses a log transformation to obtain a test statistic that has a chi-squared distribution under the null, the Cauchy combination test uses a tangent transformation to obtain a test statistic whose tail is asymptotic to that of a Cauchy distribution under the null. The test statistic is:
X = \sum_{i=1}^{k} \omega_i \tan[(0.5 - p_i)\pi],

where the weights \omega_i are non-negative and satisfy \sum_{i=1}^{k} \omega_i = 1. Under the null hypothesis each p_i is uniformly distributed, so each \tan[(0.5 - p_i)\pi] follows a standard Cauchy distribution. Writing W for a standard Cauchy random variable, the tail of the distribution of X is asymptotic to the Cauchy tail whatever the dependency structure of the p_i:

\lim_{t \to \infty} \frac{P[X > t]}{P[W > t]} = 1.
This leads to a combined hypothesis test, in which X is compared to the quantiles of the Cauchy distribution.[5]
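A minimal sketch of the Cauchy combination test with equal weights, using the standard Cauchy survival function from SciPy (the equal-weight default is a choice of this example):

```python
import numpy as np
from scipy.stats import cauchy

def cauchy_combine(p_values, weights=None):
    """Cauchy combination test: X = sum_i w_i * tan((0.5 - p_i) * pi)."""
    p = np.asarray(p_values, dtype=float)
    w = np.full(p.size, 1.0 / p.size) if weights is None else np.asarray(weights, dtype=float)
    x = np.sum(w * np.tan((0.5 - p) * np.pi))
    return cauchy.sf(x)                 # tail probability P[W > X] for standard Cauchy W

print(cauchy_combine([0.01, 0.20, 0.30]))  # combined p-value
```

Because the tail approximation holds regardless of the dependency structure, no covariance information about the individual tests is needed as input.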