Šidák correction for t-test explained

One of the applications of Student's t-test is to test the location of a single sequence of independent and identically distributed random variables. If we want to test the locations of multiple sequences of such variables, the Šidák correction should be applied in order to calibrate the level of the Student's t-test. Moreover, if we want to test the locations of a number of sequences that grows without bound, then the Šidák correction should be used, but with caution. More specifically, the validity of the Šidák correction depends on how fast the number of sequences goes to infinity.

Introduction

Suppose we are interested in m different hypotheses, H_1, \ldots, H_m, and would like to check if all of them are true. The hypothesis testing scheme becomes

H_\text{null}: all of H_1, \ldots, H_m are true;

H_\text{alternative}: at least one of H_1, \ldots, H_m is false.

Let \alpha be the level of this test (the type-I error rate), that is, the probability that we falsely reject H_\text{null} when it is true. We aim to design a test with a given level \alpha.

Suppose that, when testing each hypothesis H_i, the test statistic we use is t_i. If these t_i's are independent, then a test for H_\text{null} can be developed by the following procedure, known as the Šidák correction.

Step 1: we test each of the m null hypotheses at level 1 - (1-\alpha)^{1/m}.

Step 2: if any of these null hypotheses is rejected, we reject H_\text{null}.
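As a concrete illustration, here is a minimal Python sketch of this two-step procedure, assuming each individual hypothesis has already been reduced to a p-value; the function name sidak_test and the example p-values are hypothetical, chosen only for this illustration.

    import numpy as np

    def sidak_test(p_values, alpha=0.05):
        """Šidák correction: test H_null that all m individual hypotheses are true.

        p_values : p-values of the m individual tests, assumed independent.
        Returns True if H_null is rejected, i.e. if any p-value falls below
        the per-test level 1 - (1 - alpha)**(1/m).
        """
        p_values = np.asarray(p_values, dtype=float)
        m = p_values.size
        per_test_level = 1 - (1 - alpha) ** (1 / m)      # Step 1: adjusted level
        return bool(np.any(p_values < per_test_level))   # Step 2: reject if any rejects

    # Example with three independent p-values and overall level alpha = 0.05:
    # the per-test level is 1 - 0.95**(1/3), roughly 0.0170.
    print(sidak_test([0.30, 0.012, 0.45]))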

Finite case

For finitely many t-tests, suppose

Y_{ij} = \mu_i + \epsilon_{ij}, \quad i = 1, \ldots, N, \; j = 1, \ldots, n,

where for each i, \epsilon_{i1}, \ldots, \epsilon_{in} are independent and identically distributed; for each j, \epsilon_{1j}, \ldots, \epsilon_{Nj} are independent but not necessarily identically distributed; and each \epsilon_{ij} has finite fourth moment.

Our goal is to design a test for

H_\text{null}: \mu_i = 0, \quad \forall i = 1, \ldots, N

with level \alpha. This test can be based on the t-statistic of each sequence, that is,

t_i = \frac{\sqrt{n}\,\bar{Y}_i}{S_i},

where

\bar{Y}_i = \frac{1}{n} \sum_{j=1}^{n} Y_{ij}, \qquad S_i^2 = \frac{1}{n} \sum_{j=1}^{n} (Y_{ij} - \bar{Y}_i)^2.

Using the Šidák correction, we reject H_\text{null} if any of the t-tests based on the t-statistics above rejects at level 1 - (1-\alpha)^{1/N}. More specifically, we reject H_\text{null} when

\exists i \in \{1, \ldots, N\}: |t_i| > \zeta_{\alpha,N},

where \zeta_{\alpha,N} satisfies

P(|Z| > \zeta_{\alpha,N}) = 1 - (1-\alpha)^{1/N}, \qquad Z \sim N(0,1).
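The finite-case test can be written out directly. The following Python sketch is one possible implementation under the setup above, assuming the data are stored in an N-by-n array; the function name sidak_t_test is hypothetical, scipy.stats.norm is used only to obtain the standard normal quantile \zeta_{\alpha,N}, and the variance estimate uses the 1/n normalization given above.

    import numpy as np
    from scipy.stats import norm

    def sidak_t_test(Y, alpha=0.05):
        """Šidák-corrected t-test of H_null: mu_i = 0 for all i = 1, ..., N.

        Y : array of shape (N, n); row i holds the n observations of sequence i.
        Returns True if H_null is rejected at (asymptotic) level alpha.
        """
        N, n = Y.shape
        Y_bar = Y.mean(axis=1)              # sample mean of sequence i
        S = Y.std(axis=1)                   # S_i, using the 1/n normalization above
        t = np.sqrt(n) * Y_bar / S          # t_i = sqrt(n) * mean_i / S_i
        # zeta_{alpha,N} solves P(|Z| > zeta) = 1 - (1 - alpha)**(1/N), Z ~ N(0, 1)
        zeta = norm.isf((1 - (1 - alpha) ** (1 / N)) / 2)
        return bool(np.any(np.abs(t) > zeta))

    # Example: N = 20 sequences of length n = 50 drawn under the null
    rng = np.random.default_rng(0)
    print(sidak_t_test(rng.normal(size=(20, 50))))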

The test defined above has asymptotic level \alpha, because

\begin{align}
\text{level} &= P_\text{null}\left(\text{reject } H_\text{null}\right)\\
&= P_\text{null}\left(\exists i \in \{1, \ldots, N\}: |t_i| > \zeta_{\alpha,N}\right)\\
&= 1 - P_\text{null}\left(\forall i \in \{1, \ldots, N\}: |t_i| \leq \zeta_{\alpha,N}\right)\\
&= 1 - \prod_{i=1}^{N} P_\text{null}\left(|t_i| \leq \zeta_{\alpha,N}\right)\\
&\to 1 - \prod_{i=1}^{N} P\left(|Z_i| \leq \zeta_{\alpha,N}\right) && Z_i \sim N(0,1)\\
&= 1 - \left((1-\alpha)^{1/N}\right)^{N} = \alpha
\end{align}
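As a sanity check on this asymptotic level claim, the following Monte Carlo sketch estimates the true level under H_\text{null} with skewed (centered exponential) errors; the sample sizes, the error distribution and the number of replications are arbitrary choices made only for illustration.

    import numpy as np
    from scipy.stats import norm

    # Monte Carlo estimate of the true level of the Šidák-corrected t-test
    # under H_null, with mean-zero but skewed errors.
    rng = np.random.default_rng(1)
    N, n, alpha, reps = 10, 100, 0.05, 2000
    # Critical value zeta_{alpha,N}: P(|Z| > zeta) = 1 - (1 - alpha)**(1/N)
    zeta = norm.isf((1 - (1 - alpha) ** (1 / N)) / 2)
    rejections = 0
    for _ in range(reps):
        eps = rng.exponential(size=(N, n)) - 1.0   # E[eps_ij] = 0, so H_null holds
        t = np.sqrt(n) * eps.mean(axis=1) / eps.std(axis=1)
        if np.any(np.abs(t) > zeta):
            rejections += 1
    print("empirical level:", rejections / reps, "nominal level:", alpha)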

Infinite case

In some cases, the number of sequences, N, increases as the data size of each sequence, n, increases. In particular, suppose N(n) \to \infty as n \to \infty. If this is true, then we will need to test a null including infinitely many hypotheses, that is,

H_\text{null}: all of H_i are true, \quad i = 1, 2, \ldots

To design a test, the Šidák correction may be applied, as in the case of finitely many t-tests. However, when N(n) \to \infty as n \to \infty, the Šidák correction for the t-test may not achieve the level we want; that is, the true level of the test may not converge to the nominal level \alpha as n goes to infinity. This result is related to high-dimensional statistics and has been proven in that literature. Specifically, if we want the true level of the test to converge to the nominal level \alpha, then we need a restraint on how fast N(n) \to \infty. Indeed:

If the \epsilon_{ij} have distributions symmetric about zero, then it is sufficient to require \log N = o(n^{1/3}) to guarantee that the true level converges to \alpha.

If the \epsilon_{ij} are asymmetric, then it is necessary to impose \log N = o(n^{1/2}) to ensure that the true level converges to \alpha.

In fact, \log N = o(n^{1/3}) is sufficient even if the \epsilon_{ij} have asymmetric distributions.

The results above are based on the central limit theorem. According to the central limit theorem, each of our t-statistics t_i possesses an asymptotic standard normal distribution, so the difference between the distribution of each t_i and the standard normal distribution is asymptotically negligible. The question is: if we aggregate the differences between the distribution of each t_i and the standard normal distribution over all i, is this aggregation of differences still asymptotically negligible?

When we have finitely many t_i, the answer is yes. But when we have infinitely many t_i, the answer can become no. This is because in the latter case we are summing up infinitely many infinitesimal terms. If the number of terms goes to infinity too fast, that is, if N(n) \to \infty too fast, then the sum may not vanish, the distribution of the t-statistics can no longer be adequately approximated by the standard normal distribution, the true level does not converge to the nominal level \alpha, and the Šidák correction fails.
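Heuristically, the rate conditions above can be traced to how fast the Šidák critical value grows with N. The following back-of-the-envelope calculation is only a sketch, not part of the formal results cited above; it uses the standard normal tail approximation together with the fact that, for skewed errors, the leading relative error of the normal approximation to the tail of t_i at a point x is of order x^3/\sqrt{n}:

\begin{align}
1 - (1-\alpha)^{1/N} &\approx \frac{-\log(1-\alpha)}{N} \quad \text{for large } N,\\
P(|Z| > \zeta_{\alpha,N}) &= 1 - (1-\alpha)^{1/N} \quad \Longrightarrow \quad \zeta_{\alpha,N} \approx \sqrt{2\log N},\\
\frac{\zeta_{\alpha,N}^{3}}{\sqrt{n}} &\approx \frac{(2\log N)^{3/2}}{\sqrt{n}} \to 0 \quad \Longleftrightarrow \quad \log N = o(n^{1/3}).
\end{align}

In words, the aggregated approximation error stays negligible at the critical value \zeta_{\alpha,N} precisely when (\log N)^{3/2}/\sqrt{n} \to 0, which is the condition \log N = o(n^{1/3}) appearing above.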

See also