In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.
The probability density function for the random matrix X (n × p) that follows the matrix normal distribution \mathcal{MN}_{n\times p}(M, U, V) has the form

p(X \mid M, U, V) = \frac{\exp\left(-\tfrac{1}{2}\operatorname{tr}\left[V^{-1}(X - M)^{T}U^{-1}(X - M)\right]\right)}{(2\pi)^{np/2}\,|V|^{n/2}\,|U|^{p/2}},

where \operatorname{tr} denotes the trace, M is n × p, U is n × n and V is p × p, and the density is understood as the probability density with respect to the standard Lebesgue measure in \mathbb{R}^{n\times p}, i.e. the measure corresponding to integration with respect to dx_{11}\,dx_{21}\ldots dx_{n1}\,dx_{12}\ldots dx_{n2}\ldots dx_{np}.
The matrix normal is related to the multivariate normal distribution in the following way:
X \sim \mathcal{MN}_{n\times p}(M, U, V)

if and only if

\operatorname{vec}(X) \sim \mathcal{N}_{np}(\operatorname{vec}(M),\, V \otimes U),

where \otimes denotes the Kronecker product and \operatorname{vec}(M) denotes the vectorization of M.
The equivalence between the above matrix normal and multivariate normal density functions can be shown using several properties of the trace and Kronecker product, as follows. We start with the argument of the exponent of the matrix normal PDF:
\begin{align}
&-\tfrac{1}{2}\operatorname{tr}\left[V^{-1}(X - M)^{T}U^{-1}(X - M)\right]\\
&= -\tfrac{1}{2}\operatorname{vec}(X - M)^{T}\operatorname{vec}\left(U^{-1}(X - M)V^{-1}\right)\\
&= -\tfrac{1}{2}\operatorname{vec}(X - M)^{T}\left(V^{-1}\otimes U^{-1}\right)\operatorname{vec}(X - M)\\
&= -\tfrac{1}{2}\left[\operatorname{vec}(X) - \operatorname{vec}(M)\right]^{T}\left(V\otimes U\right)^{-1}\left[\operatorname{vec}(X) - \operatorname{vec}(M)\right]
\end{align}

which is the exponent of the multivariate normal PDF with respect to Lebesgue measure in \mathbb{R}^{np}. The proof is completed by using the determinant property

|V \otimes U| = |V|^{n}|U|^{p}.
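As a numerical illustration of this equivalence, the following minimal Python sketch (assuming NumPy and SciPy; the helper name matrix_normal_logpdf, the matrix sizes and the seed are illustrative choices, not part of the article) evaluates the matrix normal log-density through the trace form and compares it with the multivariate normal log-density of vec(X) with covariance V ⊗ U:

    import numpy as np
    from scipy.stats import multivariate_normal

    def matrix_normal_logpdf(X, M, U, V):
        """Log-density of MN_{n x p}(M, U, V) via the trace form of the exponent."""
        n, p = X.shape
        D = X - M
        quad = np.trace(np.linalg.solve(V, D.T) @ np.linalg.solve(U, D))   # tr[V^{-1} D^T U^{-1} D]
        _, logdet_U = np.linalg.slogdet(U)
        _, logdet_V = np.linalg.slogdet(V)
        return -0.5 * quad - 0.5 * n * p * np.log(2 * np.pi) \
               - 0.5 * n * logdet_V - 0.5 * p * logdet_U

    rng = np.random.default_rng(0)
    n, p = 3, 2
    M = rng.standard_normal((n, p))
    Ru = rng.standard_normal((n, n)); U = Ru @ Ru.T + n * np.eye(n)   # SPD among-row covariance
    Rv = rng.standard_normal((p, p)); V = Rv @ Rv.T + p * np.eye(p)   # SPD among-column covariance
    X = rng.standard_normal((n, p))

    # Column-major (Fortran-order) flattening matches the vec(.), V (x) U convention.
    lhs = matrix_normal_logpdf(X, M, U, V)
    rhs = multivariate_normal.logpdf(X.flatten(order="F"),
                                     mean=M.flatten(order="F"),
                                     cov=np.kron(V, U))
    print(np.isclose(lhs, rhs))   # True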
If

X \sim \mathcal{MN}_{n\times p}(M, U, V),

then the mean, or expected value, is

E[X] = M,

and we have the following second-order expectations:

E[(X - M)(X - M)^{T}] = U\operatorname{tr}(V)

E[(X - M)^{T}(X - M)] = V\operatorname{tr}(U),

where \operatorname{tr} denotes the trace.
More generally, for appropriately dimensioned matrices A,B,C:
\begin{align}
E[XAX^{T}] &= U\operatorname{tr}(A^{T}V) + MAM^{T}\\
E[X^{T}BX] &= V\operatorname{tr}(UB^{T}) + M^{T}BM\\
E[XCX] &= UC^{T}V + MCM
\end{align}
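A minimal Monte Carlo sketch of the first and third identities (assuming NumPy; the sizes, seed and sample count are arbitrary, and samples are drawn with the Cholesky-based construction described at the end of this section):

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, N = 3, 2, 200_000
    M = rng.standard_normal((n, p))
    Ru = rng.standard_normal((n, n)); U = Ru @ Ru.T + n * np.eye(n)   # SPD among-row covariance
    Rv = rng.standard_normal((p, p)); V = Rv @ Rv.T + p * np.eye(p)   # SPD among-column covariance
    A = rng.standard_normal((p, p))   # for E[X A X^T]
    C = rng.standard_normal((p, n))   # for E[X C X]

    # Draw N matrices X = M + L_u Z L_v^T with Z having i.i.d. standard normal entries.
    Lu, Lv = np.linalg.cholesky(U), np.linalg.cholesky(V)
    X = M + Lu @ rng.standard_normal((N, n, p)) @ Lv.T
    Xt = np.swapaxes(X, -1, -2)

    mc1 = np.mean(X @ A @ Xt, axis=0)
    exact1 = U * np.trace(A.T @ V) + M @ A @ M.T
    mc3 = np.mean(X @ C @ X, axis=0)
    exact3 = U @ C.T @ V + M @ C @ M
    print(np.abs(mc1 - exact1).max(), np.abs(mc3 - exact3).max())   # both small (Monte Carlo error)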
Transpose transform:
X^{T} \sim \mathcal{MN}_{p\times n}(M^{T}, V, U)
Linear transform: let D (r-by-n) be of full rank r ≤ n and C (p-by-s) be of full rank s ≤ p; then

DXC \sim \mathcal{MN}_{r\times s}(DMC, DUD^{T}, C^{T}VC)
Let's imagine a sample of n independent p-dimensional random variables identically distributed according to a multivariate normal distribution:
Y_i \sim \mathcal{N}_p(\boldsymbol\mu, \boldsymbol\Sigma) \quad \text{with } i \in \{1, \ldots, n\}.

When defining the n × p matrix X for which the ith row is Y_i^{T}, we obtain

X \sim \mathcal{MN}_{n\times p}(M, U, V),

where each row of M is the mean vector \boldsymbol\mu^{T}, that is M = \mathbf{1}_n\,\boldsymbol\mu^{T} (with \mathbf{1}_n the n-dimensional vector of ones), U is the n × n identity matrix, reflecting the independence of the rows of X, and V = \boldsymbol\Sigma.
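This correspondence can be checked numerically. In the minimal sketch below (assuming NumPy and SciPy; sizes and seed are arbitrary), the sum of the row-wise multivariate normal log-densities equals the log-density of vec(X) under N(vec(M), Σ ⊗ I_n), i.e. the matrix normal with U = I_n and V = Σ:

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(2)
    n, p = 4, 3
    mu = rng.standard_normal(p)
    S = rng.standard_normal((p, p)); Sigma = S @ S.T + p * np.eye(p)   # SPD covariance

    # n i.i.d. rows Y_i ~ N_p(mu, Sigma), stacked into the n x p matrix X.
    X = rng.multivariate_normal(mu, Sigma, size=n)

    # Joint log-density of the independent rows...
    rowwise = multivariate_normal.logpdf(X, mean=mu, cov=Sigma).sum()

    # ...equals the matrix normal log-density MN(1_n mu^T, I_n, Sigma),
    # written through vec(X) ~ N(vec(M), V kron U) with U = I_n, V = Sigma.
    M = np.ones((n, 1)) @ mu[None, :]          # each row of M is mu^T
    joint = multivariate_normal.logpdf(X.flatten(order="F"),
                                       mean=M.flatten(order="F"),
                                       cov=np.kron(Sigma, np.eye(n)))
    print(np.isclose(rowwise, joint))          # True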
Given k matrices, each of size n × p, denoted
X_1, X_2, \ldots, X_k,

which we assume to have been sampled i.i.d. from a matrix normal distribution, the maximum likelihood estimate of the parameters can be obtained by maximizing

\prod_{i=1}^{k} \mathcal{MN}_{n\times p}(X_i \mid M, U, V).

The solution for the mean has a closed form, namely

M = \frac{1}{k}\sum_{i=1}^{k} X_i,

but the covariance parameters do not. However, these parameters can be iteratively maximized by zeroing their gradients at

U = \frac{1}{kp}\sum_{i=1}^{k} (X_i - M)V^{-1}(X_i - M)^{T}

and

V = \frac{1}{kn}\sum_{i=1}^{k} (X_i - M)^{T}U^{-1}(X_i - M).

Note that the covariance parameters are only identifiable up to a positive scale factor s, since

\mathcal{MN}_{n\times p}(X \mid M, U, V) = \mathcal{MN}_{n\times p}(X \mid M, sU, \tfrac{1}{s}V).
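The coupled updates above are often iterated to a fixed point (sometimes called a flip-flop scheme in the literature). A minimal NumPy sketch under that reading (the function name, initialization, iteration count and the trace-based normalization that resolves the scale ambiguity are illustrative choices, not from the article):

    import numpy as np

    def matrix_normal_mle(Xs, n_iter=50):
        """Flip-flop ML estimation of (M, U, V) from k stacked n x p matrices Xs[i]."""
        Xs = np.asarray(Xs)                          # shape (k, n, p)
        k, n, p = Xs.shape
        M = Xs.mean(axis=0)                          # closed-form estimate of the mean
        D = Xs - M                                   # centered observations
        U, V = np.eye(n), np.eye(p)                  # arbitrary SPD starting values
        for _ in range(n_iter):
            Vinv = np.linalg.inv(V)
            U = sum(Di @ Vinv @ Di.T for Di in D) / (k * p)
            Uinv = np.linalg.inv(U)
            V = sum(Di.T @ Uinv @ Di for Di in D) / (k * n)
            s = np.trace(U) / n                      # fix the (sU, V/s) scale ambiguity
            U, V = U / s, V * s
        return M, U, V

    # Example: recover parameters from simulated data (sampling as in the next paragraph).
    rng = np.random.default_rng(3)
    k, n, p = 2000, 5, 4
    A = rng.standard_normal((n, n)); B = rng.standard_normal((p, p))
    M_true = rng.standard_normal((n, p))
    Xs = M_true + A @ rng.standard_normal((k, n, p)) @ B      # MN(M_true, A A^T, B^T B)
    M_hat, U_hat, V_hat = matrix_normal_mle(Xs)
    print(np.abs(M_hat - M_true).max())   # small; U_hat, V_hat match A A^T, B^T B only up to the scale s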
Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let
X be an n × p matrix whose np entries are independent samples from the standard normal distribution, so that

X \sim \mathcal{MN}_{n\times p}(\mathbf{0}, I_n, I_p).

Then let

Y = M + AXB,

so that

Y \sim \mathcal{MN}_{n\times p}(M, AA^{T}, B^{T}B),

where A and B can be chosen by Cholesky decomposition or a similar matrix square-root operation satisfying AA^{T} = U and B^{T}B = V.
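A minimal NumPy sketch of this construction, taking A and B from Cholesky factorizations of U and V (sizes, seed and sample count are arbitrary); the empirical mean and covariance of vec(Y) are compared against vec(M) and V ⊗ U from the relation stated earlier:

    import numpy as np

    rng = np.random.default_rng(4)
    n, p, N = 3, 2, 300_000
    M = rng.standard_normal((n, p))
    Ru = rng.standard_normal((n, n)); U = Ru @ Ru.T + n * np.eye(n)
    Rv = rng.standard_normal((p, p)); V = Rv @ Rv.T + p * np.eye(p)

    A = np.linalg.cholesky(U)        # A A^T = U
    B = np.linalg.cholesky(V).T      # B^T B = V

    # X ~ MN(0, I, I): n x p matrices with i.i.d. standard normal entries.
    X = rng.standard_normal((N, n, p))
    Y = M + A @ X @ B                # Y ~ MN(M, U, V)

    # vec(Y) (column-stacking) should have mean vec(M) and covariance V kron U.
    vecY = Y.transpose(0, 2, 1).reshape(N, n * p)
    print(np.abs(vecY.mean(axis=0) - M.flatten(order="F")).max())     # small
    print(np.abs(np.cov(vecY, rowvar=False) - np.kron(V, U)).max())   # small Monte Carlo error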
Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions, including the Wishart distribution, inverse-Wishart distribution and matrix t-distribution, but uses different notation from that employed here.