In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928.[1] Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles"), or Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), or LOE, LUE, LSE (in analogy with GOE, GUE, GSE).
It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance matrix of a multivariate normal random vector.[2]
Suppose $G$ is a $p \times n$ matrix, each column of which is independently drawn from a $p$-variate normal distribution with zero mean:

$$G = (g_1, \ldots, g_n) \sim \mathcal{N}_p(0, V).$$
Then the Wishart distribution is the probability distribution of the $p \times p$ random matrix[3]

$$S = G G^{\mathrm T} = \sum_{i=1}^{n} g_i g_i^{\mathrm T},$$

known as the scatter matrix. One indicates that $S$ has that probability distribution by writing

$$S \sim W_p(V, n).$$
The positive integer $n$ is the number of degrees of freedom. Sometimes this is written $W(V, p, n)$. For $n \ge p$ the matrix $S$ is invertible with probability $1$ if $V$ is invertible.

If $p = V = 1$ then this distribution is a chi-squared distribution with $n$ degrees of freedom.
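The construction above translates directly into code. The following sketch (illustrative values of $p$, $n$, and $V$; the helper name is ours) draws the columns of $G$ from $\mathcal{N}_p(0, V)$, forms the scatter matrix, and checks the known fact that $\operatorname{E}[S] = nV$:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 10                      # dimension and degrees of freedom (illustrative)
V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])   # scale matrix; must be positive definite

def wishart_sample(rng, V, n):
    """One draw S = G G^T, where the n columns of G are i.i.d. N_p(0, V)."""
    G = rng.multivariate_normal(np.zeros(len(V)), V, size=n).T  # shape (p, n)
    return G @ G.T

# Sanity check: E[S] = n V, so the average of many draws divided by n
# should approach V.
mean_S = np.mean([wishart_sample(rng, V, n) for _ in range(20000)], axis=0)
print(mean_S / n)   # approximately V
```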
The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices and in multidimensional Bayesian analysis.[4] It is also encountered in wireless communications, when analyzing the performance of Rayleigh fading MIMO wireless channels.[5]
The Wishart distribution can be characterized by its probability density function as follows:
Let $X$ be a $p \times p$ symmetric matrix of random variables that is positive semi-definite. Let $V$ be a (fixed) symmetric positive definite matrix of size $p \times p$.

Then, if $n \ge p$, $X$ has a Wishart distribution with $n$ degrees of freedom if it has the probability density function
$$f_X(X) = \frac{1}{2^{np/2} \left|V\right|^{n/2} \Gamma_p\left(\frac{n}{2}\right)} \left|X\right|^{(n-p-1)/2} e^{-\frac{1}{2}\operatorname{tr}(V^{-1}X)}$$
where $\left|X\right|$ denotes the determinant of $X$ and $\Gamma_p$ is the multivariate gamma function, defined as

$$\Gamma_p\left(\frac{n}{2}\right) = \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\left(\frac{n}{2} - \frac{j-1}{2}\right).$$
The density above is not the joint density of all $p^2$ elements of the random matrix $X$ (such a density does not exist because of the symmetry constraints $X_{ij} = X_{ji}$); rather, it is the joint density of the $p(p+1)/2$ elements $X_{ij}$ for $i \le j$. Also, the density formula above applies only to positive definite matrices $x$; for other matrices the density is equal to zero.
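The formula is easy to transcribe and check numerically. Below is a minimal sketch (the helper name and the example values of $X$, $V$, $n$ are ours) that evaluates the log-density directly from the expression above and cross-checks it against SciPy's implementation:

```python
import numpy as np
from scipy.special import multigammaln   # log of the multivariate gamma function
from scipy.stats import wishart

def wishart_logpdf(X, V, n):
    """Log-density of W_p(V, n) at a positive definite matrix X,
    transcribed directly from the density formula above."""
    p = X.shape[0]
    logdet_x = np.linalg.slogdet(X)[1]
    logdet_v = np.linalg.slogdet(V)[1]
    trace_term = np.trace(np.linalg.solve(V, X))   # tr(V^{-1} X)
    return ((n - p - 1) / 2 * logdet_x
            - trace_term / 2
            - n * p / 2 * np.log(2)
            - n / 2 * logdet_v
            - multigammaln(n / 2, p))

V = np.array([[1.0, 0.3], [0.3, 2.0]])
X = np.array([[2.0, 0.1], [0.1, 1.5]])
n = 5
print(wishart_logpdf(X, V, n), wishart.logpdf(X, df=n, scale=V))  # should match
```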
The joint-eigenvalue density for the eigenvalues $\lambda_1, \ldots, \lambda_p \ge 0$ of a random matrix $X \sim W_p(I, n)$ is

$$c_{n,p}\, e^{-\frac{1}{2}\sum_i \lambda_i} \prod_i \lambda_i^{(n-p-1)/2} \prod_{i<j} \left|\lambda_i - \lambda_j\right|,$$

where $c_{n,p}$ is a normalizing constant.
In fact the above definition can be extended to any real $n > p - 1$. If $n \le p - 1$, then the Wishart no longer has a density; instead it represents a singular distribution that takes values in a lower-dimension subspace of the space of $p \times p$ matrices.[7]
In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix $\Omega = \Sigma^{-1}$, where $\Sigma$ is the covariance matrix.[8]
The least informative, proper Wishart prior is obtained by setting $n = p$.

The prior mean of $W_p(V, n)$ is $nV$, suggesting that a reasonable choice for $V$ would be $n^{-1}\Sigma_0^{-1}$, where $\Sigma_0$ is some prior guess for the covariance matrix.
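As a sketch of how this conjugacy is used: for zero-mean multivariate normal data with unknown precision $\Lambda \sim W_p(V, n)$ a priori, the posterior is again Wishart, with the scale updated through the scatter matrix of the data. The function name and the illustrative data below are ours, not from the source:

```python
import numpy as np

def wishart_posterior(V, n, data):
    """Conjugate update for the precision matrix Lambda of a zero-mean
    multivariate normal, with prior Lambda ~ W_p(V, n).
    data has shape (N, p); returns posterior parameters (V_post, n_post)."""
    S = data.T @ data                               # scatter matrix sum_i x_i x_i^T
    V_post = np.linalg.inv(np.linalg.inv(V) + S)    # (V^{-1} + S)^{-1}
    n_post = n + data.shape[0]
    return V_post, n_post

# Illustrative usage with the least informative proper prior, n = p.
rng = np.random.default_rng(0)
p = 2
true_cov = np.array([[1.0, 0.6], [0.6, 2.0]])
data = rng.multivariate_normal(np.zeros(p), true_cov, size=5000)
V_post, n_post = wishart_posterior(np.eye(p), p, data)
# The posterior mean of the precision is n_post * V_post; its inverse
# should be close to the true covariance for large N.
print(np.linalg.inv(n_post * V_post))
```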
The following formula plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution. From equation (2.63),[9]

$$\operatorname{E}[\ln\left|X\right|] = \psi_p\left(\frac{n}{2}\right) + p \ln(2) + \ln|V|,$$

where $\psi_p$ is the multivariate digamma function (the derivative of the log of the multivariate gamma function).
The following variance computation could be of help in Bayesian statistics:

$$\operatorname{Var}\left[\ln\left|X\right|\right] = \sum_{i=1}^{p} \psi_1\left(\frac{n+1-i}{2}\right),$$

where $\psi_1$ is the trigamma function.
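Both identities are easy to verify numerically. A minimal sketch (illustrative $V$ and $n$; helper names ours) evaluates them with SciPy's digamma and trigamma functions and compares against Monte Carlo draws:

```python
import numpy as np
from scipy.special import digamma, polygamma
from scipy.stats import wishart

def mean_logdet(V, n):
    """E[ln|X|] = psi_p(n/2) + p ln 2 + ln|V| for X ~ W_p(V, n)."""
    p = len(V)
    psi_p = sum(digamma(n / 2 + (1 - j) / 2) for j in range(1, p + 1))
    return psi_p + p * np.log(2) + np.linalg.slogdet(V)[1]

def var_logdet(p, n):
    """Var[ln|X|] = sum_i psi_1((n + 1 - i)/2); note it does not depend on V."""
    return sum(polygamma(1, (n + 1 - i) / 2) for i in range(1, p + 1))

V = np.array([[1.0, 0.2], [0.2, 1.5]])
n = 7
logdets = np.linalg.slogdet(wishart.rvs(df=n, scale=V, size=50000))[1]
print(mean_logdet(V, n), logdets.mean())        # should roughly agree
print(var_logdet(len(V), n), logdets.var())     # should roughly agree
```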
The information entropy of the distribution has the following formula:
$$\operatorname{H}\left[X\right] = -\ln\left(B(V,n)\right) - \frac{n-p-1}{2} \operatorname{E}\left[\ln\left|X\right|\right] + \frac{np}{2},$$

where $B(V, n)$ is the normalizing constant of the distribution:

$$B(V,n) = \frac{1}{\left|V\right|^{n/2}\, 2^{np/2}\, \Gamma_p\left(\frac{n}{2}\right)}.$$
This can be expanded as follows:
\begin{align}
\operatorname{H}\left[X\right] &= \frac{n}{2}\ln\left|V\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\operatorname{E}\left[\ln\left|X\right|\right] + \frac{np}{2} \\[8pt]
&= \frac{n}{2}\ln\left|V\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\left(\psi_p\left(\frac{n}{2}\right) + p\ln 2 + \ln\left|V\right|\right) + \frac{np}{2} \\[8pt]
&= \frac{n}{2}\ln\left|V\right| + \frac{np}{2}\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\psi_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\left(p\ln 2 + \ln\left|V\right|\right) + \frac{np}{2} \\[8pt]
&= \frac{p+1}{2}\ln\left|V\right| + \frac{1}{2}p(p+1)\ln 2 + \ln\Gamma_p\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\psi_p\left(\frac{n}{2}\right) + \frac{np}{2}
\end{align}
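The final line can be checked against SciPy, whose wishart object exposes an entropy() method (illustrative parameters below; multigammaln computes $\ln\Gamma_p$):

```python
import numpy as np
from scipy.special import multigammaln, digamma
from scipy.stats import wishart

def wishart_entropy(V, n):
    """H[X] evaluated from the last line of the expansion above."""
    p = len(V)
    psi_p = sum(digamma(n / 2 + (1 - j) / 2) for j in range(1, p + 1))
    return ((p + 1) / 2 * np.linalg.slogdet(V)[1]
            + p * (p + 1) / 2 * np.log(2)
            + multigammaln(n / 2, p)
            - (n - p - 1) / 2 * psi_p
            + n * p / 2)

V = np.array([[1.0, 0.3], [0.3, 2.0]])
n = 6
print(wishart_entropy(V, n), wishart(df=n, scale=V).entropy())  # should match
```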
The cross-entropy of two Wishart distributions $p_0$ and $p_1$ (of the same dimension $p$), with parameters $n_0, V_0$ and $n_1, V_1$ respectively, is

\begin{align}
H(p_0, p_1) &= \operatorname{E}_{p_0}[-\log p_1] \\[8pt]
&= \operatorname{E}_{p_0}\left[-\log \frac{\left|X\right|^{(n_1-p-1)/2}\, e^{-\operatorname{tr}(V_1^{-1}X)/2}}{2^{n_1 p/2} \left|V_1\right|^{n_1/2} \Gamma_p\left(\frac{n_1}{2}\right)}\right].
\end{align}
Note that when $p_0 = p_1$ and $n_0 = n_1$ we recover the entropy, $H(p_0, p_0) = H(p_0)$.
The Kullback–Leibler divergence of $p_1$ from $p_0$ is

\begin{align}
D_{KL}(p_0 \| p_1) &= H(p_0, p_1) - H(p_0) \\[6pt]
&= -\frac{n_1}{2} \log\left|V_1^{-1} V_0\right| + \frac{n_0}{2}\left(\operatorname{tr}(V_1^{-1} V_0) - p\right) + \log\frac{\Gamma_p\left(\frac{n_1}{2}\right)}{\Gamma_p\left(\frac{n_0}{2}\right)} + \tfrac{n_0 - n_1}{2}\,\psi_p\left(\frac{n_0}{2}\right).
\end{align}
The characteristic function of the Wishart distribution is
$$\Theta \mapsto \operatorname{E}\left[\exp\left(i\operatorname{tr}\left(X\Theta\right)\right)\right] = \left|1 - 2i\Theta V\right|^{-n/2},$$

where $\operatorname{E}[\cdot]$ denotes expectation. (Here $\Theta$ is any matrix with the same dimensions as $V$, $1$ indicates the identity matrix, and $i$ is a square root of $-1$).[10] Properly interpreting this formula requires a little care, because noninteger complex powers are multivalued; when $n$ is noninteger, the correct branch must be determined via analytic continuation.[11]
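For integer $n$ and a small symmetric $\Theta$ (where the principal branch of the complex power is the correct one), the identity can be checked by Monte Carlo; the parameter values below are illustrative:

```python
import numpy as np
from scipy.stats import wishart

p, n = 2, 5
V = np.array([[1.0, 0.3], [0.3, 1.0]])
Theta = np.array([[0.1, 0.05], [0.05, 0.2]])   # small symmetric Theta

# Monte Carlo estimate of E[exp(i tr(X Theta))]
X = wishart.rvs(df=n, scale=V, size=200000)
mc = np.mean(np.exp(1j * np.trace(X @ Theta, axis1=1, axis2=2)))

# Closed form |1 - 2i Theta V|^{-n/2} (principal branch)
cf = np.linalg.det(np.eye(p) - 2j * Theta @ V) ** (-n / 2)
print(mc, cf)   # should roughly agree
```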
If a $p \times p$ random matrix $X$ has a Wishart distribution with $m$ degrees of freedom and variance matrix $V$ (written $X \sim W_p(V, m)$) and $C$ is a $q \times p$ matrix of rank $q$, then

$$C X C^{\mathrm T} \sim W_q\left(C V C^{\mathrm T}, m\right).$$
If $z$ is a nonzero $p \times 1$ constant vector, then:[12]

$$\sigma_z^{-2}\, z^{\mathrm T} X z \sim \chi_m^2.$$
In this case, $\chi_m^2$ is the chi-squared distribution and $\sigma_z^2 = z^{\mathrm T} V z$ (note that $\sigma_z^2$ is a constant; it is positive because $V$ is positive definite).
Consider the case where $z^{\mathrm T} = (0, \ldots, 0, 1, 0, \ldots, 0)$ (that is, the $j$-th element is one and all others zero). Then corollary 1 above shows that

$$\sigma_{jj}^{-1}\, w_{jj} \sim \chi_m^2$$

gives the marginal distribution of each of the elements on the matrix's diagonal.
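This marginal is straightforward to confirm by simulation (illustrative $V$ and $m$ below): the scaled diagonal element should match the first two moments of $\chi_m^2$:

```python
import numpy as np
from scipy.stats import wishart, chi2

m = 8
V = np.array([[2.0, 0.6], [0.6, 1.0]])
draws = wishart.rvs(df=m, scale=V, size=100000)

# The (j, j) diagonal element, scaled by 1/V[j, j], should be chi2 with m dof.
j = 0
scaled = draws[:, j, j] / V[j, j]
print(scaled.mean(), chi2(m).mean())   # both approximately m
print(scaled.var(), chi2(m).var())     # both approximately 2m
```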
George Seber points out that the Wishart distribution is not called the “multivariate chi-squared distribution” because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family.[13]
The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution.[14] A derivation of the MLE uses the spectral theorem.
The Bartlett decomposition of a matrix $X$ from a $p$-variate Wishart distribution with scale matrix $V$ and $n$ degrees of freedom is the factorization:

$$X = L A A^{\mathrm T} L^{\mathrm T},$$

where $L$ is the Cholesky factor of $V$, and:

$$A = \begin{pmatrix} c_1 & 0 & 0 & \cdots & 0 \\ n_{21} & c_2 & 0 & \cdots & 0 \\ n_{31} & n_{32} & c_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ n_{p1} & n_{p2} & n_{p3} & \cdots & c_p \end{pmatrix}$$
where $c_i^2 \sim \chi_{n-i+1}^2$ and the $n_{ij} \sim N(0, 1)$ independently. This provides a useful method for obtaining random samples from a Wishart distribution.
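The decomposition gives a direct sampler: fill $A$ as above, compute $L$ once, and form $LAA^{\mathrm T}L^{\mathrm T}$. A minimal sketch (function name ours, illustrative parameters):

```python
import numpy as np

def wishart_bartlett(V, n, rng):
    """Draw one sample from W_p(V, n) via the Bartlett decomposition:
    X = L A A^T L^T with L the Cholesky factor of V."""
    p = len(V)
    L = np.linalg.cholesky(V)
    A = np.zeros((p, p))
    for i in range(p):
        # c_i^2 ~ chi2_{n-i+1} for 1-based i; with 0-based i the dof is n - i.
        A[i, i] = np.sqrt(rng.chisquare(n - i))
        A[i, :i] = rng.standard_normal(i)   # n_ij ~ N(0, 1) below the diagonal
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(3)
V = np.array([[1.0, 0.4], [0.4, 2.0]])
n = 6
samples = [wishart_bartlett(V, n, rng) for _ in range(20000)]
print(np.mean(samples, axis=0) / n)   # should be close to V, since E[X] = n V
```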
Let $V$ be a $2 \times 2$ variance matrix characterized by correlation coefficient $\rho$, and let $L$ be its lower Cholesky factor:

$$V = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}, \qquad L = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sqrt{1-\rho^2}\,\sigma_2 \end{pmatrix}.$$
Multiplying through the Bartlett decomposition above, we find that a random sample from the Wishart distribution is
$$X = \begin{pmatrix} \sigma_1^2 c_1^2 & \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\, c_1 n_{21}\right) \\ \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\, c_1 n_{21}\right) & \sigma_2^2\left(\left(1-\rho^2\right) c_2^2 + \left(\sqrt{1-\rho^2}\, n_{21} + \rho c_1\right)^2\right) \end{pmatrix}.$$
The diagonal elements, most evidently in the first element, follow the $\chi^2$ distribution with $n$ degrees of freedom (scaled by $\sigma^2$) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a $\chi^2$ distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution
$$f(x_{12}) = \frac{\left|x_{12}\right|^{(n-1)/2}}{\Gamma\left(\frac{n}{2}\right)\sqrt{2^{n-1}\pi\left(1-\rho^2\right)\left(\sigma_1\sigma_2\right)^{n+1}}}\, K_{(n-1)/2}\!\left(\frac{\left|x_{12}\right|}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right) \exp\left(\frac{\rho x_{12}}{\sigma_1\sigma_2\left(1-\rho^2\right)}\right),
where $K_\nu(z)$ is the modified Bessel function of the second kind.[17] Similar results may be found for higher dimensions. In general, if $X$ follows a Wishart distribution with parameters $\Sigma, n$, then for $i \neq j$ the off-diagonal elements satisfy

$$X_{ij} \sim \mathrm{VG}\left(n, \Sigma_{ij}, \left(\Sigma_{ii}\Sigma_{jj} - \Sigma_{ij}^2\right)^{1/2}, 0\right).$$
It is also possible to write down the moment-generating function even in the noncentral case (essentially the $n$th power of Craig (1936)[19] equation 10), although the probability density becomes an infinite sum of Bessel functions.
It can be shown[20] that the Wishart distribution can be defined if and only if the shape parameter $n$ belongs to the set

$$\Lambda_p := \{0, \ldots, p-1\} \cup \left(p-1, \infty\right).$$
This set is named after Gindikin, who introduced it[21] in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely

$$\Lambda_p^* := \{0, \ldots, p-1\},$$

the corresponding Wishart distribution has no Lebesgue density.
The Wishart distribution is related to the inverse-Wishart distribution, denoted by $W_p^{-1}$, as follows: if $X \sim W_p(V, n)$ and $C = X^{-1}$, then $C \sim W_p^{-1}(V^{-1}, n)$.
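A minimal Monte Carlo check of this relationship (illustrative parameters; it uses the standard inverse-Wishart mean formula $\operatorname{E}[C] = V^{-1}/(n - p - 1)$, valid for $n > p + 1$):

```python
import numpy as np
from scipy.stats import wishart

p, n = 2, 10
V = np.array([[1.0, 0.3], [0.3, 1.5]])

# If X ~ W_p(V, n), then X^{-1} ~ W_p^{-1}(V^{-1}, n).
X = wishart.rvs(df=n, scale=V, size=100000)
inv_draws = np.linalg.inv(X)

# Empirical mean of X^{-1} vs. the inverse-Wishart mean V^{-1} / (n - p - 1).
print(inv_draws.mean(axis=0))
print(np.linalg.inv(V) / (n - p - 1))   # should roughly agree
```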