In statistics, the matrix t-distribution (or matrix variate t-distribution) is the generalization of the multivariate t-distribution from vectors to matrices.[1][2]
The matrix t-distribution shares the same relationship with the multivariate t-distribution that the matrix normal distribution shares with the multivariate normal distribution: If the matrix has only one row, or only one column, the distributions become equivalent to the corresponding (vector-)multivariate distribution. The matrix t-distribution is the compound distribution that results from an infinite mixture of a matrix normal distribution with an inverse Wishart distribution placed over either of its covariance matrices, and the multivariate t-distribution can be generated in a similar way.
In a Bayesian analysis of a multivariate linear regression model based on the matrix normal distribution, the matrix t-distribution is the posterior predictive distribution.
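As an illustration of the compounding construction above, the following Python sketch draws from {\rm T}_{n,p}(\nu,M,\boldsymbol\Sigma,\boldsymbol\Omega) by first sampling an inverse Wishart row covariance and then a matrix normal conditional on it. The degrees-of-freedom offset \nu+n-1 used for the inverse Wishart is one common parameterization and is an assumption here, as is the helper name sample_matrix_t.

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

def sample_matrix_t(nu, M, Sigma, Omega, rng=None):
    """Draw one n x p sample from T_{n,p}(nu, M, Sigma, Omega) by compounding:
    first draw a row covariance from an inverse Wishart, then draw a matrix
    normal conditional on it (assumed parameterization, see lead-in)."""
    rng = np.random.default_rng(rng)
    n, p = M.shape
    # Assumed mixing law: row covariance ~ InverseWishart(df = nu + n - 1, scale = Sigma)
    row_cov = invwishart.rvs(df=nu + n - 1, scale=Sigma, random_state=rng)
    # Conditional on the drawn row covariance, X is matrix normal with column scale Omega
    return matrix_normal.rvs(mean=M, rowcov=row_cov, colcov=Omega, random_state=rng)

# Example: one 3 x 2 draw with nu = 5
X = sample_matrix_t(nu=5.0, M=np.zeros((3, 2)), Sigma=np.eye(3), Omega=np.eye(2), rng=0)
```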
For a matrix t-distribution, the probability density function at the point X of an n\times p space is

f(X;\nu,M,\boldsymbol\Sigma,\boldsymbol\Omega) = K \times \left|\mathbf{I}_n + \boldsymbol\Sigma^{-1}(X-M)\boldsymbol\Omega^{-1}(X-M)^{\rm T}\right|^{-\frac{\nu+n+p-1}{2}},
where the constant of integration K is given by
K = \frac{\Gamma_p\left(\frac{\nu+n+p-1}{2}\right)}{\pi^{\frac{np}{2}}\,\Gamma_p\left(\frac{\nu+p-1}{2}\right)}\,|\boldsymbol\Omega|^{-\frac{n}{2}}\,|\boldsymbol\Sigma|^{-\frac{p}{2}}.
Here \Gamma_p is the multivariate gamma function.
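As a sketch of how this density can be evaluated numerically (assuming the parameterization above; matrix_t_logpdf is an illustrative helper, not a library routine), the log of the multivariate gamma function \Gamma_p is available as scipy.special.multigammaln:

```python
import numpy as np
from scipy.special import multigammaln  # log of the multivariate gamma function

def matrix_t_logpdf(X, nu, M, Sigma, Omega):
    """Log-density of T_{n,p}(nu, M, Sigma, Omega) at the n x p matrix X,
    following the pdf and normalizing constant K given above."""
    n, p = X.shape
    D = X - M
    # Sigma^{-1} (X - M) Omega^{-1} (X - M)^T, computed via solves rather than explicit inverses
    core = np.linalg.solve(Sigma, D) @ np.linalg.solve(Omega, D.T)
    log_det_core = np.linalg.slogdet(np.eye(n) + core)[1]
    log_K = (multigammaln((nu + n + p - 1) / 2, p)
             - (n * p / 2) * np.log(np.pi)
             - multigammaln((nu + p - 1) / 2, p)
             - (n / 2) * np.linalg.slogdet(Omega)[1]
             - (p / 2) * np.linalg.slogdet(Sigma)[1])
    return log_K - (nu + n + p - 1) / 2 * log_det_core
```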
The characteristic function and various other properties can be derived from the generalized matrix t-distribution (see below).
The generalized matrix t-distribution is a generalization of the matrix t-distribution with two parameters \alpha and \beta in place of \nu.
This reduces to the standard matrix t-distribution with \beta=2,\ \alpha=\frac{\nu+p-1}{2}.
The generalized matrix t-distribution is the compound distribution that results from an infinite mixture of a matrix normal distribution with an inverse multivariate gamma distribution placed over either of its covariance matrices.
If X\sim{\rm T}_{n,p}(\alpha,\beta,M,\boldsymbol\Sigma,\boldsymbol\Omega), then X^{\rm T}\sim{\rm T}_{p,n}(\alpha,\beta,M^{\rm T},\boldsymbol\Omega,\boldsymbol\Sigma).
The property above comes from Sylvester's determinant theorem:
\det\left(\mathbf{I}_n + \frac{\beta}{2}\boldsymbol\Sigma^{-1}(X-M)\boldsymbol\Omega^{-1}(X-M)^{\rm T}\right) = \det\left(\mathbf{I}_p + \frac{\beta}{2}\boldsymbol\Omega^{-1}(X^{\rm T}-M^{\rm T})\boldsymbol\Sigma^{-1}(X^{\rm T}-M^{\rm T})^{\rm T}\right).
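A quick numerical spot-check of this determinant identity, with arbitrary illustrative sizes n=4, p=2 and randomly generated positive-definite scale matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, beta = 4, 2, 2.0
X = rng.standard_normal((n, p))
M = rng.standard_normal((n, p))
Sigma = np.cov(rng.standard_normal((n, 3 * n)))   # random SPD n x n row scale
Omega = np.cov(rng.standard_normal((p, 3 * p)))   # random SPD p x p column scale
D = X - M

lhs = np.linalg.det(np.eye(n) + (beta / 2) * np.linalg.solve(Sigma, D) @ np.linalg.solve(Omega, D.T))
rhs = np.linalg.det(np.eye(p) + (beta / 2) * np.linalg.solve(Omega, D.T) @ np.linalg.solve(Sigma, D))
assert np.isclose(lhs, rhs)  # det(I_n + c*A*B) == det(I_p + c*B*A), Sylvester's determinant theorem
```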
If X\sim{\rm T}_{n,p}(\alpha,\beta,M,\boldsymbol\Sigma,\boldsymbol\Omega) and A (n\times n) and B (p\times p) are nonsingular matrices, then

AXB\sim{\rm T}_{n,p}(\alpha,\beta,AMB,A\boldsymbol\Sigma A^{\rm T},B^{\rm T}\boldsymbol\Omega B).
The characteristic function is[3]
\phi_T(Z) = \frac{\exp({\rm tr}(iZ'M))\,|\boldsymbol\Omega|^{\alpha}}{\Gamma_p(\alpha)(2\beta)^{\alpha p}}\,|Z'\boldsymbol\Sigma Z|^{\alpha}\,B_{\alpha}\left(\frac{1}{2\beta}Z'\boldsymbol\Sigma Z\boldsymbol\Omega\right),
where
B_{\delta}(WZ)=|W|^{-\delta}\int_{S>0}\exp\left({\rm tr}(-SW-S^{-1}Z)\right)|S|^{-\delta-\frac{1}{2}(p+1)}\,dS,
and where B_\delta is the type-two Bessel function of Herz of a matrix argument.