In statistics, the Holm–Bonferroni method,[1] also called the Holm method or Bonferroni–Holm method, is used to counteract the problem of multiple comparisons. It is intended to control the family-wise error rate (FWER) and offers a simple test uniformly more powerful than the Bonferroni correction. It is named after Sture Holm, who codified the method, and Carlo Emilio Bonferroni.
When considering several hypotheses, the problem of multiplicity arises: the more hypotheses are tested, the higher the probability of obtaining Type I errors (false positives). The Holm–Bonferroni method is one of many approaches for controlling the FWER, i.e., the probability that one or more Type I errors will occur, by adjusting the rejection criterion for each of the individual hypotheses.
The method is as follows:

Suppose you have m p-values, sorted into order lowest-to-highest P_1, \ldots, P_m, and their corresponding hypotheses H_1, \ldots, H_m (null hypotheses). You want the FWER to be no higher than a certain pre-specified significance level \alpha.

Is P_1 \leq \alpha/m? If so, reject H_1 and continue to the next step; otherwise stop.
Is P_2 \leq \alpha/(m-1)? If so, reject H_2 also, and continue to the next step; otherwise stop.
And so on: for each p-value, test whether P_k \leq \alpha/(m+1-k). If so, reject H_k and continue to examine the larger p-values; otherwise stop.

This method ensures that the FWER is at most \alpha.
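The step-down procedure above can be sketched in Python; this is a minimal illustrative implementation, not from the original source, and the function name and signature are assumptions:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: True where the corresponding
    null hypothesis is rejected by the Holm-Bonferroni step-down test."""
    m = len(p_values)
    # Sort indices by p-value, smallest first.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Step k = rank + 1 compares P_(k) against alpha / (m + 1 - k).
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # stop: all remaining (larger) p-values are not rejected
    return reject
```

Note that once a p-value fails its threshold, all larger p-values are automatically not rejected, which is what the early `break` implements.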
The simple Bonferroni correction rejects only null hypotheses with p-value less than or equal to \alpha/m, in order to ensure that the risk of rejecting one or more true null hypotheses (i.e., of committing one or more type I errors) is at most \alpha. The Holm–Bonferroni method also controls the FWER at level \alpha, but with a smaller increase in type II error risk than the classical Bonferroni method: it sorts the p-values from lowest to highest and compares them to nominal alpha levels of \alpha/m, \alpha/(m-1), \ldots, \alpha/2, \alpha/1 (respectively).

The index k identifies the first p-value that is not low enough to validate rejection. Therefore, the null hypotheses H_{(1)}, \ldots, H_{(k-1)} are rejected, while the null hypotheses H_{(k)}, \ldots, H_{(m)} are not rejected. If k = 1, then no null hypotheses are rejected; if no such index k exists, then all null hypotheses are rejected.
Let H_{(1)}, \ldots, H_{(m)} be a family of hypotheses, and let P_{(1)} \leq P_{(2)} \leq \cdots \leq P_{(m)} be the corresponding sorted p-values. Let I_0 be the set of indices of the true null hypotheses, having m_0 members.

Claim: If we wrongly reject some true hypothesis, there is a true hypothesis H_{(\ell)} for which P_{(\ell)} is at most \alpha/m_0.

Proof: First note that, in this case, there is at least one true hypothesis, so m_0 \geq 1. Let \ell be such that H_{(\ell)} is the first rejected true hypothesis. Then H_{(1)}, \ldots, H_{(\ell-1)} are all rejected false hypotheses. It follows that \ell - 1 \leq m - m_0 and, hence, 1/(m-\ell+1) \leq 1/m_0 (1). Since H_{(\ell)} is rejected, it must be that P_{(\ell)} \leq \alpha/(m-\ell+1) by definition of the testing procedure. Using (1), we conclude that P_{(\ell)} \leq \alpha/m_0, as desired.
So let us define the random event A = \bigcup_{i \in I_0} \left\{ P_i \leq \alpha/m_0 \right\}. Note that, for i \in I_0 (i.e., H_i is a true null hypothesis), P_i is uniformly distributed, so \Pr\left(\left\{ P_i \leq \alpha/m_0 \right\}\right) = \alpha/m_0. By Boole's inequality,

\Pr(A) \leq \sum_{i \in I_0} \Pr\left(\left\{ P_i \leq \alpha/m_0 \right\}\right) = \sum_{i \in I_0} \alpha/m_0 = \alpha.

By the claim, the probability of wrongly rejecting at least one true hypothesis is at most \Pr(A) \leq \alpha.
The Holm–Bonferroni method can be viewed as a closed testing procedure,[2] with the Bonferroni correction applied locally on each of the intersections of null hypotheses. The closure principle states that a hypothesis H_i in a family of hypotheses H_1, \ldots, H_m is rejected, while controlling the FWER at level \alpha, if and only if all the sub-families of intersections that contain H_i are rejected at level \alpha.

The Holm–Bonferroni method is a shortcut procedure, since it makes m or fewer comparisons, while the number of all intersections of null hypotheses to be tested is of order 2^m.

In the Holm–Bonferroni procedure, we first test H_{(1)}. If it is not rejected, then the intersection of all null hypotheses \bigcap_{i=1}^{m} H_i is not rejected either, so that there exists at least one non-rejected intersection hypothesis for each of the elementary hypotheses H_1, \ldots, H_m, and thus none of them is rejected.

If H_{(1)} is rejected at level \alpha/m, then all the intersection sub-families that contain it are rejected too, and thus H_{(1)} is rejected. This is because P_{(1)} is the smallest p-value in each of those intersection sub-families, and the size of each sub-family is at most m, so that its Bonferroni threshold is at least \alpha/m.

The same rationale applies for H_{(2)}. However, since H_{(1)} has already been rejected, it suffices to reject all the intersection sub-families of H_{(2)} that do not contain H_{(1)}. Once P_{(2)} \leq \alpha/(m-1) holds, all those intersections are rejected.

The same applies for each 1 \leq i \leq m.
Consider four null hypotheses H_1, \ldots, H_4 with unadjusted p-values p_1 = 0.01, p_2 = 0.04, p_3 = 0.03 and p_4 = 0.005, to be tested at significance level \alpha = 0.05. Since the procedure is step-down, we first test H_4 = H_{(1)}, the hypothesis with the smallest p-value p_4 = p_{(1)} = 0.005. This is smaller than \alpha/4 = 0.0125, so H_4 is rejected. Next we test H_1 = H_{(2)}: since p_1 = p_{(2)} = 0.01 < 0.0167 = \alpha/3, H_1 is also rejected. Next we test H_3 = H_{(3)}: since p_3 = p_{(3)} = 0.03 > 0.025 = \alpha/2, the procedure stops here. Consequently, H_1 and H_4 are rejected, while H_2 and H_3 are not. Note that even though p_2 = p_{(4)} = 0.04 < 0.05 = \alpha, H_2 is not rejected, because the testing already stopped at H_3.
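The worked example can be checked with a short Python script; the hypothesis labels and dictionary layout are illustrative, not from the source:

```python
# Step-by-step run of the Holm-Bonferroni procedure on the example above
# (alpha = 0.05, m = 4).
alpha, m = 0.05, 4
pvals = {"H1": 0.01, "H2": 0.04, "H3": 0.03, "H4": 0.005}

rejected = []
# Visit hypotheses in increasing order of p-value.
for k, (name, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1]), start=1):
    threshold = alpha / (m + 1 - k)
    if p <= threshold:
        rejected.append(name)
    else:
        break  # procedure stops; remaining hypotheses are not rejected

print(rejected)  # ['H4', 'H1']
```

H2 never gets compared against its nominal level \alpha because the procedure halts at H3, matching the discussion above.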
When the hypothesis tests are not negatively dependent, it is possible to replace the thresholds \alpha/m, \alpha/(m-1), \ldots, \alpha/1 with 1-(1-\alpha)^{1/m}, 1-(1-\alpha)^{1/(m-1)}, \ldots, 1-(1-\alpha)^{1/1}, resulting in a slightly more powerful test, the Holm–Šidák method.
Let P_{(1)}, \ldots, P_{(m)} be the ordered unadjusted p-values, and let H_{(i)}, with weight 0 \leq w_{(i)}, correspond to P_{(i)}. Reject H_{(i)} as long as

P_{(j)} \leq \frac{w_{(j)}}{\sum_{k=j}^{m} w_{(k)}} \alpha, \qquad j = 1, \ldots, i.
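A minimal Python sketch of the weighted rejection rule above (the function name and return convention are assumptions for illustration):

```python
def weighted_holm(p_sorted, weights, alpha=0.05):
    """p_sorted: p-values already sorted ascending; weights[i] >= 0 pairs
    with p_sorted[i]. Returns the number of leading hypotheses rejected."""
    m = len(p_sorted)
    rejected = 0
    for i in range(m):
        # H_(i+1) is rejected only if every j <= i passes its weighted threshold.
        ok = all(
            p_sorted[j] <= weights[j] / sum(weights[j:]) * alpha
            for j in range(i + 1)
        )
        if not ok:
            break
        rejected += 1
    return rejected
```

With equal weights the threshold w_{(j)} / \sum_{k=j}^{m} w_{(k)} \cdot \alpha reduces to \alpha/(m-j+1), recovering the unweighted Holm–Bonferroni procedure.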
The adjusted p-values for the Holm–Bonferroni method are

\widetilde{p}_{(i)} = \max_{j \leq i} \left\{ (m-j+1) p_{(j)} \right\}_{1}, \quad where \{x\}_{1} \equiv \min(x, 1).

In the earlier example, the adjusted p-values are \widetilde{p}_1 = 0.03, \widetilde{p}_2 = 0.06, \widetilde{p}_3 = 0.06 and \widetilde{p}_4 = 0.02. Only hypotheses H_1 and H_4 are rejected at level \alpha = 0.05.
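The adjusted p-value formula is a running maximum over the sorted p-values; a minimal Python sketch (function name illustrative) reproduces the values in the example:

```python
def holm_adjusted(p_values):
    """Holm-Bonferroni adjusted p-values, returned in the input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # (m - j + 1) * p_(j) with j = rank + 1, kept monotone via the max.
        running_max = max(running_max, (m - rank) * p_values[i])
        adjusted[i] = min(running_max, 1.0)  # the {x}_1 = min(x, 1) truncation
    return adjusted

print([round(p, 6) for p in holm_adjusted([0.01, 0.04, 0.03, 0.005])])
# [0.03, 0.06, 0.06, 0.02]
```

A hypothesis can then be rejected by comparing its adjusted p-value directly against \alpha.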
Similar adjusted p-values for the Holm–Šidák method can be defined recursively as \widetilde{p}_{(i)} = \max\left\{ \widetilde{p}_{(i-1)}, 1-(1-p_{(i)})^{m-i+1} \right\}, where \widetilde{p}_{(1)} = 1-(1-p_{(1)})^{m}. Due to the inequality 1-(1-\alpha)^{1/n} > \alpha/n for n \geq 2, the Holm–Šidák method is more powerful than the Holm–Bonferroni method.
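The recursive definition above translates directly into a short loop; this Python sketch (function name illustrative) assumes the p-values are already sorted ascending:

```python
def holm_sidak_adjusted(p_sorted):
    """Holm-Sidak adjusted p-values for p-values sorted in ascending order."""
    adjusted = []
    m = len(p_sorted)
    for i, p in enumerate(p_sorted):  # i = 0 corresponds to p_(1)
        candidate = 1 - (1 - p) ** (m - i)  # exponent m - i = m - (i+1) + 1
        prev = adjusted[-1] if adjusted else 0.0
        adjusted.append(max(prev, candidate))  # enforce monotonicity recursively
    return adjusted
```

Each Holm–Šidák adjusted p-value is no larger than the corresponding Holm–Bonferroni one, reflecting the power comparison stated above.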
The weighted adjusted p-values are

\widetilde{p}_{(i)} = \max_{j \leq i} \left\{ \frac{\sum_{k=j}^{m} w_{(k)}}{w_{(j)}} p_{(j)} \right\}_{1}, \quad where \{x\}_{1} \equiv \min(x, 1).
The Holm–Bonferroni method is "uniformly" more powerful than the classic Bonferroni correction, meaning that it is always at least as powerful.
There are other methods for controlling the FWER that are more powerful than Holm–Bonferroni. For instance, in the Hochberg procedure, rejection of H_{(1)}, \ldots, H_{(k)} is made after finding the maximal index k such that P_{(k)} \leq \alpha/(m+1-k). Thus, the Hochberg procedure is uniformly more powerful than the Holm procedure; however, unlike Holm–Bonferroni, it requires the hypothesis tests to be independent or to satisfy certain forms of positive dependence.
Carlo Emilio Bonferroni did not take part in inventing the method described here. Holm originally called the method the "sequentially rejective Bonferroni test", and it became known as Holm–Bonferroni only after some time. Holm's motives for naming his method after Bonferroni are explained in the original paper: "The use of the Boole inequality within multiple inference theory is usually called the Bonferroni technique, and for this reason we will call our test the sequentially rejective Bonferroni test."