Multivariate Behrens–Fisher problem explained

In statistics, the multivariate Behrens–Fisher problem is the problem of testing for the equality of means from two multivariate normal distributions when the covariance matrices are unknown and possibly not equal. Since this is a generalization of the univariate Behrens–Fisher problem, it inherits all of the difficulties that arise in the univariate problem.

Notation and problem formulation

Let

X_{ij} \sim \mathcal{N}_p(\mu_i, \Sigma_i) \qquad (j=1,\ldots,n_i;\; i=1,2)

be independent random samples from two p-variate normal distributions with unknown mean vectors \mu_i and unknown dispersion matrices \Sigma_i. The index i refers to the first or second population, and the j-th observation from the i-th population is X_{ij}.

The multivariate Behrens–Fisher problem is to test the null hypothesis H_0 that the means are equal versus the alternative H_1 of non-equality:

H_0 : \mu_1 = \mu_2 \quad \text{vs} \quad H_1 : \mu_1 \neq \mu_2.

Define some statistics, which are used in the various attempts to solve the multivariate Behrens–Fisher problem, by

\begin{align}
\bar{X}_i &= \frac{1}{n_i}\sum_{j=1}^{n_i} X_{ij}, \\
A_i &= \sum_{j=1}^{n_i} (X_{ij}-\bar{X}_i)(X_{ij}-\bar{X}_i)', \\
S_i &= \frac{1}{n_i-1} A_i, \\
\tilde{S}_i &= \frac{1}{n_i} S_i, \\
\tilde{S} &= \tilde{S}_1 + \tilde{S}_2, \text{ and} \\
T^2 &= (\bar{X}_1-\bar{X}_2)'\,\tilde{S}^{-1}(\bar{X}_1-\bar{X}_2).
\end{align}

The sample means \bar{X}_i and sum-of-squares matrices A_i are sufficient for the multivariate normal parameters \mu_i, \Sigma_i (i=1,2), so it suffices to base inference on these statistics alone. The distributions of \bar{X}_i and A_i are independent and are, respectively, multivariate normal and Wishart:

\begin{align}
\bar{X}_i &\sim \mathcal{N}_p\left(\mu_i, \Sigma_i/n_i\right), \\
A_i &\sim W_p(\Sigma_i, n_i-1).
\end{align}
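As an illustration, the statistics defined above can be computed directly from the two data matrices. The following NumPy sketch (function and variable names are illustrative, not taken from the literature) returns T^2 together with the intermediate quantities used by the approximations below.

import numpy as np

def bf_statistics(X1, X2):
    # X1, X2 are (n_i x p) data matrices, one row per observation.
    n1, n2 = X1.shape[0], X2.shape[0]
    xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
    # Sum-of-squares matrices A_i and sample covariances S_i = A_i / (n_i - 1)
    A1 = (X1 - xbar1).T @ (X1 - xbar1)
    A2 = (X2 - xbar2).T @ (X2 - xbar2)
    S1, S2 = A1 / (n1 - 1), A2 / (n2 - 1)
    # Scaled covariances tilde{S}_i = S_i / n_i and their sum tilde{S}
    S1t, S2t = S1 / n1, S2 / n2
    St = S1t + S2t
    # Mean difference and T^2 = d' tilde{S}^{-1} d
    d = xbar1 - xbar2
    T2 = float(d @ np.linalg.solve(St, d))
    return T2, d, S1t, S2t, St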

Background

In the case where the dispersion matrices are equal, the distribution of the T^2 statistic is known to be an F distribution under the null and a noncentral F-distribution under the alternative.

The main problem is that when the true values of the dispersion matrices are unknown, then under the null hypothesis the probability of rejecting H_0 via a T^2 test depends on these unknown dispersion matrices. In practice, this dependency harms inference when the dispersion matrices are far from each other or when the sample sizes are not large enough to estimate them accurately.

The mean vectors are independently and normally distributed,

\bar{X}_i \sim \mathcal{N}_p\left(\mu_i, \Sigma_i/n_i\right),

but the sum A_1 + A_2 does not follow the Wishart distribution, which makes inference more difficult.

Proposed solutions

Proposed solutions are based on a few main strategies. A prominent strategy, followed by the approaches below, is to compute statistics that mimic the T^2 statistic and have an approximate F distribution with estimated degrees of freedom (df).

Approaches using the T^2 statistic with approximate degrees of freedom

Below, tr indicates the trace operator.

Yao (1965)


T^2 \sim \frac{\nu p}{\nu-p+1} F_{p,\,\nu-p+1},

where

\begin{align}
\nu &= \left[ \frac{1}{n_1}\left( \frac{\bar{X}_d'\tilde{S}^{-1}\tilde{S}_1\tilde{S}^{-1}\bar{X}_d}{\bar{X}_d'\tilde{S}^{-1}\bar{X}_d} \right)^2 + \frac{1}{n_2}\left( \frac{\bar{X}_d'\tilde{S}^{-1}\tilde{S}_2\tilde{S}^{-1}\bar{X}_d}{\bar{X}_d'\tilde{S}^{-1}\bar{X}_d} \right)^2 \right]^{-1}, \\
\bar{X}_d &= \bar{X}_1 - \bar{X}_2.
\end{align}
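In code, Yao's approximate degrees of freedom can be obtained from the quantities \bar{X}_d, \tilde{S}_i and \tilde{S} computed above; the sketch below (names illustrative) follows the formula as stated.

import numpy as np

def yao_df(d, S1t, S2t, St, n1, n2):
    # d = X1bar - X2bar; S1t, S2t, St are tilde{S}_1, tilde{S}_2, tilde{S}.
    St_inv_d = np.linalg.solve(St, d)            # tilde{S}^{-1} d
    denom = d @ St_inv_d                         # d' tilde{S}^{-1} d
    r1 = (St_inv_d @ S1t @ St_inv_d) / denom     # ratio for population 1
    r2 = (St_inv_d @ S2t @ St_inv_d) / denom     # ratio for population 2
    return 1.0 / (r1**2 / n1 + r2**2 / n2)

Given the distribution above, the test refers T^2 (\nu-p+1)/(\nu p) to an F_{p,\,\nu-p+1} distribution.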

Johansen (1980)


T^2 \sim q\,F_{p,\,\nu},

where

\begin{align}
q &= p + 2D - \frac{6D}{p(p-1)+2}, \\
\nu &= \frac{p(p+2)}{3D},
\end{align}

and

\begin{align}
D = \frac{1}{2}\sum_{i=1}^{2}\frac{1}{n_i}\biggl\{ &\operatorname{tr}\left[\left(I-\left(\tilde{S}_1^{-1}+\tilde{S}_2^{-1}\right)^{-1}\tilde{S}_i^{-1}\right)^2\right] \\
&{} + \left[\operatorname{tr}\left(I-\left(\tilde{S}_1^{-1}+\tilde{S}_2^{-1}\right)^{-1}\tilde{S}_i^{-1}\right)\right]^2 \biggr\}.
\end{align}
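A sketch of Johansen's scale factor q and degrees of freedom \nu, again in terms of the scaled covariance matrices defined above (names illustrative):

import numpy as np

def johansen_q_nu(S1t, S2t, n1, n2, p):
    # S1t, S2t are tilde{S}_1 and tilde{S}_2; p is the dimension.
    S1t_inv, S2t_inv = np.linalg.inv(S1t), np.linalg.inv(S2t)
    W = np.linalg.inv(S1t_inv + S2t_inv)   # (tilde{S}_1^{-1} + tilde{S}_2^{-1})^{-1}
    I = np.eye(p)
    D = 0.0
    for n_i, Si_inv in ((n1, S1t_inv), (n2, S2t_inv)):
        M = I - W @ Si_inv
        D += (np.trace(M @ M) + np.trace(M)**2) / n_i
    D *= 0.5
    q = p + 2*D - 6*D / (p*(p - 1) + 2)
    nu = p*(p + 2) / (3*D)
    return q, nu

Since T^2 \sim q\,F_{p,\,\nu}, the ratio T^2/q is then compared with an F_{p,\,\nu} distribution.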

Nel and Van der Merwe (1986)

T^2 \sim \frac{\nu p}{\nu-p+1} F_{p,\,\nu-p+1},

where

\nu = \frac{\operatorname{tr}(\tilde{S}^2)+\left[\operatorname{tr}(\tilde{S})\right]^2}{\dfrac{1}{n_1}\left\{\operatorname{tr}(\tilde{S}_1^2)+\left[\operatorname{tr}(\tilde{S}_1)\right]^2\right\}+\dfrac{1}{n_2}\left\{\operatorname{tr}(\tilde{S}_2^2)+\left[\operatorname{tr}(\tilde{S}_2)\right]^2\right\}}.
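The corresponding degrees of freedom can be computed as follows (a sketch using the formula above; the helper name is illustrative):

import numpy as np

def nel_vdm_df(S1t, S2t, St, n1, n2):
    # f(M) = tr(M^2) + [tr(M)]^2, applied to tilde{S}, tilde{S}_1 and tilde{S}_2
    def f(M):
        return np.trace(M @ M) + np.trace(M)**2
    return f(St) / (f(S1t) / n1 + f(S2t) / n2)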

Comments on performance

Kim (1992) proposed a solution that is based on a variant of T^2. Although its power is high, the fact that it is not invariant makes it less attractive. Simulation studies by Subramaniam and Subramaniam (1973) show that the size of Yao's test is closer to the nominal level than that of James's. Christensen and Rencher (1997) performed numerical studies comparing several of these testing procedures and concluded that Kim's test and Nel and Van der Merwe's test had the highest power. However, these two procedures are not invariant.

Krishnamoorthy and Yu (2004)

Krishnamoorthy and Yu (2004) proposed a procedure that adjusts Nel and Van der Merwe's (1986) approximate df for the denominator of T^2 under the null distribution so as to make it invariant. They show that the approximate degrees of freedom lies in the interval

\left[\min\{n_1-1,\,n_2-1\},\; n_1+n_2-2\right],

which ensures that the degrees of freedom is never negative. They report numerical studies indicating that their procedure is as powerful as Nel and Van der Merwe's test for smaller dimensions and more powerful for larger dimensions. Overall, they claim that their procedure is better than the invariant procedures of Yao (1965) and Johansen (1980). Therefore, Krishnamoorthy and Yu's (2004) procedure has the best known size and power as of 2004.

The test statistic T^2 in Krishnamoorthy and Yu's procedure follows the distribution

T^2 \sim \frac{\nu p}{\nu-p+1} F_{p,\,\nu-p+1},

where

\nu = \frac{p+p^2}{\dfrac{1}{n_1-1}\left\{\operatorname{tr}\left[\left(\tilde{S}_1\tilde{S}^{-1}\right)^2\right]+\left[\operatorname{tr}\left(\tilde{S}_1\tilde{S}^{-1}\right)\right]^2\right\}+\dfrac{1}{n_2-1}\left\{\operatorname{tr}\left[\left(\tilde{S}_2\tilde{S}^{-1}\right)^2\right]+\left[\operatorname{tr}\left(\tilde{S}_2\tilde{S}^{-1}\right)\right]^2\right\}}.
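A self-contained sketch of the whole Krishnamoorthy–Yu test follows; the function name and the use of SciPy's F distribution are implementation choices, not part of the original paper.

import numpy as np
from scipy import stats

def krishnamoorthy_yu_test(X1, X2):
    # X1, X2 are (n_i x p) data matrices, one row per observation.
    n1, n2 = X1.shape[0], X2.shape[0]
    p = X1.shape[1]
    xbar1, xbar2 = X1.mean(axis=0), X2.mean(axis=0)
    # tilde{S}_i = S_i / n_i and tilde{S} = tilde{S}_1 + tilde{S}_2
    S1t = np.cov(X1, rowvar=False) / n1
    S2t = np.cov(X2, rowvar=False) / n2
    St = S1t + S2t
    d = xbar1 - xbar2
    T2 = float(d @ np.linalg.solve(St, d))
    # Invariant approximate degrees of freedom
    St_inv = np.linalg.inv(St)
    def f(M):                                   # tr(M^2) + [tr(M)]^2
        return np.trace(M @ M) + np.trace(M)**2
    nu = (p + p**2) / (f(S1t @ St_inv) / (n1 - 1) + f(S2t @ St_inv) / (n2 - 1))
    # T^2 (nu - p + 1) / (nu p) is referred to an F(p, nu - p + 1) distribution
    p_value = stats.f.sf(T2 * (nu - p + 1) / (nu * p), p, nu - p + 1)
    return T2, nu, p_value

Calling krishnamoorthy_yu_test(X1, X2) with arrays of shape (n_1, p) and (n_2, p) returns the test statistic, the estimated degrees of freedom and an approximate p-value.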

References