Hotelling's T-squared distribution

In statistics, particularly in hypothesis testing, Hotelling's T-squared distribution (T^2), proposed by Harold Hotelling, is a multivariate probability distribution that is tightly related to the F-distribution and is most notable for arising as the distribution of a set of sample statistics that are natural generalizations of the statistics underlying Student's t-distribution. The Hotelling's t-squared statistic (t^2) is a generalization of Student's t-statistic that is used in multivariate hypothesis testing.[1]

Motivation

The distribution arises in multivariate statistics in undertaking tests of the differences between the (multivariate) means of different populations, where tests for univariate problems would make use of a t-test. The distribution is named for Harold Hotelling, who developed it as a generalization of Student's t-distribution.[2]

Definition

If the vector d is Gaussian multivariate-distributed with zero mean and unit covariance matrix N(0_p, I_p), and M is a p × p random matrix with a Wishart distribution W(I_p, m) with unit scale matrix and m degrees of freedom, and d and M are independent of each other, then the quadratic form X has a Hotelling distribution (with parameters p and m):[3]

X = m\,d^{\mathrm{T}} M^{-1} d \sim T^2(p, m).

It can be shown that if a random variable X has Hotelling's T-squared distribution, X \sim T^2_{p,m}, then:[2]

\frac{m - p + 1}{p m} X \sim F_{p,\, m - p + 1},

where F_{p, m-p+1} is the F-distribution with parameters p and m − p + 1.
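
This relationship can be checked by simulation. Below is a minimal sketch (not from the source) using NumPy and SciPy: it draws d from N(0_p, I_p) and M from W(I_p, m) (built as G'G for a Gaussian matrix G), forms X = m d' M^{-1} d, and compares the rescaled samples against F(p, m − p + 1) with a Kolmogorov–Smirnov test; all parameter values are illustrative.

```python
# Hedged simulation sketch: the dimensions and sample counts below are
# arbitrary illustrative choices, not from the source.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p, m = 3, 20            # dimension and Wishart degrees of freedom
n_draws = 20_000

samples = np.empty(n_draws)
for i in range(n_draws):
    d = rng.standard_normal(p)          # d ~ N(0_p, I_p)
    G = rng.standard_normal((m, p))
    M = G.T @ G                         # M = G'G ~ W(I_p, m)
    samples[i] = m * d @ np.linalg.solve(M, d)

# If X ~ T^2(p, m), then (m - p + 1) / (p m) * X ~ F(p, m - p + 1).
rescaled = (m - p + 1) / (p * m) * samples
print(stats.kstest(rescaled, stats.f(p, m - p + 1).cdf))  # expect a large p-value
```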

Hotelling t-squared statistic

Let \hat{\Sigma} be the sample covariance:

\hat{\Sigma} = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \overline{x})(x_i - \overline{x})',

where we denote transpose by an apostrophe. It can be shown that \hat{\Sigma} is a positive (semi) definite matrix and (n-1)\hat{\Sigma} follows a p-variate Wishart distribution with n − 1 degrees of freedom.[4] The sample covariance matrix of the mean reads \hat{\Sigma}_{\overline{x}} = \hat{\Sigma}/n.[5]

The Hotelling's t-squared statistic is then defined as:[6]

t^2 = (\overline{x} - \boldsymbol{\mu})' \hat{\Sigma}_{\overline{x}}^{-1} (\overline{x} - \boldsymbol{\mu}) = n (\overline{x} - \boldsymbol{\mu})' \hat{\Sigma}^{-1} (\overline{x} - \boldsymbol{\mu}),

which is proportional to the Mahalanobis distance between the sample mean and \boldsymbol{\mu}. Because of this, one should expect the statistic to assume low values if \overline{x} is close to \boldsymbol{\mu}, and high values if they are different.

From the distribution,

t^2 \sim T^2_{p,\, n-1} = \frac{p(n-1)}{n-p} F_{p,\, n-p},

where F_{p, n-p} is the F-distribution with parameters p and n − p.

In order to calculate a p-value (unrelated to the variable p here), note that the distribution of t^2 equivalently implies that

\frac{n-p}{p(n-1)}\, t^2 \sim F_{p,\, n-p}.

Then, use the quantity on the left-hand side to evaluate the p-value corresponding to the sample, which comes from the F-distribution. A confidence region may also be determined using similar logic.
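
As a concrete illustration, here is a hedged sketch of the one-sample test in Python; the function name hotelling_one_sample and the example data are illustrative assumptions, not part of the source.

```python
# One-sample Hotelling t-squared test: a minimal sketch, assuming the rows of
# `x` are the n observations and the columns are the p variables.
import numpy as np
from scipy import stats

def hotelling_one_sample(x, mu0):
    """Return (t2, p_value) for H0: E[x] = mu0."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    diff = x.mean(axis=0) - mu0
    sigma_hat = np.cov(x, rowvar=False)        # unbiased: divides by n - 1
    t2 = n * diff @ np.linalg.solve(sigma_hat, diff)
    f_stat = (n - p) / (p * (n - 1)) * t2      # ~ F(p, n - p) under H0
    return t2, stats.f.sf(f_stat, p, n - p)

rng = np.random.default_rng(1)
x = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=50)
print(hotelling_one_sample(x, mu0=np.zeros(2)))
```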

Motivation

Let \mathcal{N}_p(\boldsymbol{\mu}, \Sigma) denote a p-variate normal distribution with location \boldsymbol{\mu} and known covariance \Sigma. Let

x_1, \dots, x_n \sim \mathcal{N}_p(\boldsymbol{\mu}, \Sigma)

be n independent identically distributed (iid) random variables, which may be represented as p × 1 column vectors of real numbers. Define

\overline{x} = \frac{x_1 + \cdots + x_n}{n}

to be the sample mean with covariance \Sigma_{\overline{x}} = \Sigma/n. It can be shown that

(\overline{x} - \boldsymbol{\mu})' \Sigma_{\overline{x}}^{-1} (\overline{x} - \boldsymbol{\mu}) \sim \chi^2_p,

where \chi^2_p is the chi-squared distribution with p degrees of freedom.[7]
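
A small simulation can make this concrete. The sketch below (an illustration, not from the source) draws sample means directly from N(\boldsymbol{\mu}, \Sigma/n) for an arbitrary \boldsymbol{\mu} and \Sigma and checks the quadratic form against \chi^2_p with a Kolmogorov–Smirnov test.

```python
# Hedged sketch: mu, Sigma, and all sizes below are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
p, n, n_reps = 3, 40, 50_000
mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])

# The sample mean of n iid N(mu, Sigma) vectors is N(mu, Sigma / n).
xbars = rng.multivariate_normal(mu, Sigma / n, size=n_reps)
diffs = xbars - mu
Sigma_mean_inv = n * np.linalg.inv(Sigma)          # (Sigma / n)^{-1}
q = np.einsum('ij,jk,ik->i', diffs, Sigma_mean_inv, diffs)

print(stats.kstest(q, stats.chi2(p).cdf))          # expect a large p-value
```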

Alternatively, one can argue using density functions and characteristic functions.

Two-sample statistic

If x_1, \dots, x_{n_x} \sim N_p(\boldsymbol{\mu}, \Sigma) and y_1, \dots, y_{n_y} \sim N_p(\boldsymbol{\mu}, \Sigma), with the samples independently drawn from two independent multivariate normal distributions with the same mean and covariance, and we define

\overline{x} = \frac{1}{n_x} \sum_{i=1}^{n_x} x_i, \qquad \overline{y} = \frac{1}{n_y} \sum_{i=1}^{n_y} y_i

as the sample means, and

\hat{\Sigma}_x = \frac{1}{n_x - 1} \sum_{i=1}^{n_x} (x_i - \overline{x})(x_i - \overline{x})', \qquad \hat{\Sigma}_y = \frac{1}{n_y - 1} \sum_{i=1}^{n_y} (y_i - \overline{y})(y_i - \overline{y})'

as the respective sample covariance matrices. Then

\hat{\Sigma} = \frac{(n_x - 1)\hat{\Sigma}_x + (n_y - 1)\hat{\Sigma}_y}{n_x + n_y - 2}

is the unbiased pooled covariance matrix estimate (an extension of pooled variance).

Finally, the Hotelling's two-sample t-squared statistic is

t^2 = \frac{n_x n_y}{n_x + n_y} (\overline{x} - \overline{y})' \hat{\Sigma}^{-1} (\overline{x} - \overline{y}) \sim T^2(p, n_x + n_y - 2).


It can be related to the F-distribution by[4]

\frac{n_x + n_y - p - 1}{(n_x + n_y - 2)\, p}\, t^2 \sim F(p,\, n_x + n_y - 1 - p).
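
The two-sample test can then be sketched end to end; as before, the function name hotelling_two_sample and the example data are illustrative assumptions, not from the source.

```python
# Two-sample Hotelling t-squared test under a common covariance:
# a minimal sketch, not a definitive implementation.
import numpy as np
from scipy import stats

def hotelling_two_sample(x, y):
    """Return (t2, p_value) for H0: the two population means are equal."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    (nx, p), ny = x.shape, y.shape[0]
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Unbiased pooled covariance estimate.
    pooled = ((nx - 1) * np.cov(x, rowvar=False)
              + (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * diff @ np.linalg.solve(pooled, diff)
    f_stat = (nx + ny - p - 1) / ((nx + ny - 2) * p) * t2  # ~ F(p, nx+ny-1-p)
    return t2, stats.f.sf(f_stat, p, nx + ny - 1 - p)

rng = np.random.default_rng(3)
x = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=30)
y = rng.multivariate_normal([0.5, 0.0], np.eye(2), size=40)
print(hotelling_two_sample(x, y))
```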

The non-null distribution of this statistic is the noncentral F-distribution (the ratio of a non-central chi-squared random variable and an independent central chi-squared random variable),

\frac{n_x + n_y - p - 1}{(n_x + n_y - 2)\, p}\, t^2 \sim F(p,\, n_x + n_y - 1 - p;\, \delta),

with

\delta = \frac{n_x n_y}{n_x + n_y} \boldsymbol{d}' \Sigma^{-1} \boldsymbol{d},

where \boldsymbol{d} = \boldsymbol{\mu}_x - \boldsymbol{\mu}_y is the difference vector between the population means.
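
Since the non-null distribution is known, the power of the test can be computed directly from the noncentral F-distribution. The sketch below is an assumption-level example; the chosen covariance, sample sizes, and mean difference are arbitrary illustrations.

```python
# Power of the two-sample test via the noncentral F-distribution.
import numpy as np
from scipy import stats

p, nx, ny = 2, 30, 40
alpha = 0.05
Sigma = np.eye(2)
mean_diff = np.array([0.5, 0.0])            # assumed true d = mu_x - mu_y

# Noncentrality parameter delta = (nx ny)/(nx + ny) d' Sigma^{-1} d.
delta = (nx * ny) / (nx + ny) * mean_diff @ np.linalg.solve(Sigma, mean_diff)
dfn, dfd = p, nx + ny - 1 - p
f_crit = stats.f.isf(alpha, dfn, dfd)       # rejection threshold under H0
power = stats.ncf.sf(f_crit, dfn, dfd, delta)
print(power)
```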

In the two-variable case, the formula simplifies nicely, allowing appreciation of how the correlation, \rho, between the variables affects t^2. If we define

d_1 = \overline{x}_1 - \overline{y}_1, \qquad d_2 = \overline{x}_2 - \overline{y}_2

and

s_1 = \sqrt{\Sigma_{11}}, \qquad s_2 = \sqrt{\Sigma_{22}}, \qquad \rho = \Sigma_{12}/(s_1 s_2) = \Sigma_{21}/(s_1 s_2),

then

t^2 = \frac{n_x n_y}{(n_x + n_y)(1 - \rho^2)} \left[ \left(\frac{d_1}{s_1}\right)^2 + \left(\frac{d_2}{s_2}\right)^2 - 2\rho \left(\frac{d_1}{s_1}\right) \left(\frac{d_2}{s_2}\right) \right].
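
A quick numerical check (an assumption-level sketch with arbitrary numbers, not from the source) confirms that this scalar formula agrees with the matrix form of t^2:

```python
# Verify the two-variable expansion against the matrix formula.
import numpy as np

nx, ny = 25, 30
d = np.array([0.8, -0.3])                    # d1, d2: mean differences
Sigma = np.array([[1.5, 0.6],
                  [0.6, 2.0]])               # common covariance matrix
s1, s2 = np.sqrt(Sigma[0, 0]), np.sqrt(Sigma[1, 1])
rho = Sigma[0, 1] / (s1 * s2)

t2_matrix = (nx * ny) / (nx + ny) * d @ np.linalg.solve(Sigma, d)
t2_scalar = (nx * ny) / ((nx + ny) * (1 - rho**2)) * (
    (d[0] / s1) ** 2 + (d[1] / s2) ** 2
    - 2 * rho * (d[0] / s1) * (d[1] / s2)
)
print(np.isclose(t2_matrix, t2_scalar))      # True
```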

Thus, if the differences in the two rows of the vector d = \overline{x} - \overline{y} are of the same sign, in general, t^2 becomes smaller as \rho becomes more positive. If the differences are of opposite sign, t^2 becomes larger as \rho becomes more positive.

A univariate special case can be found in Welch's t-test.

More robust and powerful tests than Hotelling's two-sample test have been proposed in the literature; see, for example, the interpoint distance based tests, which can be applied also when the number of variables is comparable with, or even larger than, the number of subjects.[8][9]

Notes and References

  1. Johnson, R. A.; Wichern, D. W. (2002). Applied Multivariate Statistical Analysis (5th ed.). Prentice Hall. p. 8.
  2. Hotelling, H. (1931). "The generalization of Student's ratio". Annals of Mathematical Statistics. 2 (3): 360–378. doi:10.1214/aoms/1177732979.
  3. Weisstein, Eric W. MathWorld.
  4. Mardia, K. V.; Kent, J. T.; Bibby, J. M. (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8.
  5. Fogelmark, Karl; Lomholt, Michael; Irbäck, Anders; Ambjörnsson, Tobias (3 May 2018). "Fitting a function to time-dependent ensemble averaged data". Scientific Reports. 8 (1): 6984. doi:10.1038/s41598-018-24983-y. Retrieved 19 August 2024.
  6. "6.5.4.3. Hotelling's T squared". NIST/SEMATECH e-Handbook of Statistical Methods.
  7. End of chapter 4.2 of Mardia, Kent & Bibby (1979).
  8. Marozzi, M. (2016). "Multivariate tests based on interpoint distances with application to magnetic resonance imaging". Statistical Methods in Medical Research. 25 (6): 2593–2610. doi:10.1177/0962280214529104. PMID 24740998.
  9. Marozzi, M. (2015). "Multivariate multidistance tests for high-dimensional low sample size case-control studies". Statistics in Medicine. 34 (9): 1511–1526. doi:10.1002/sim.6418. PMID 25630579.