Inverse-Wishart distribution explained
In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution.
We say that a p × p positive definite matrix X follows an inverse Wishart distribution, denoted X ~ W^{-1}(Ψ, ν), if its inverse X^{-1} has a Wishart distribution W(Ψ^{-1}, ν). Important identities have been derived for the inverse-Wishart distribution.[1]

Density
The probability density function of the inverse Wishart is:[2]

f_{X}(X; \Psi, \nu) = \frac{\left|\Psi\right|^{\nu/2}}{2^{\nu p/2}\,\Gamma_p(\nu/2)} \left|X\right|^{-(\nu+p+1)/2} e^{-\frac{1}{2}\operatorname{tr}(\Psi X^{-1})}
where X and Ψ are p × p positive definite matrices, |·| is the determinant, and Γ_p(·) is the multivariate gamma function.
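For readers who want to compute with it, the density above can be coded directly. The following is a minimal sketch using only NumPy and the standard library (the helper names `multigammaln` and `invwishart_logpdf` are my own); it also checks the univariate case against the inverse-gamma density that the distribution should reduce to when p = 1:

```python
import numpy as np
from math import lgamma, log, pi

def multigammaln(a, p):
    """Log of the multivariate gamma function Gamma_p(a)."""
    return p * (p - 1) / 4 * log(pi) + sum(lgamma(a + (1 - j) / 2)
                                           for j in range(1, p + 1))

def invwishart_logpdf(X, Psi, nu):
    """Log-density of W^{-1}(Psi, nu) at the positive definite matrix X."""
    p = X.shape[0]
    _, logdet_X = np.linalg.slogdet(X)
    _, logdet_Psi = np.linalg.slogdet(Psi)
    return (0.5 * nu * logdet_Psi
            - 0.5 * nu * p * log(2.0)
            - multigammaln(nu / 2.0, p)
            - 0.5 * (nu + p + 1) * logdet_X
            - 0.5 * np.trace(Psi @ np.linalg.inv(X)))

# p = 1 check: W^{-1}(psi, nu) reduces to Inv-Gamma(alpha = nu/2, beta = psi/2).
x, psi, nu = 0.7, 2.0, 5.0
alpha, beta = nu / 2, psi / 2
inv_gamma = alpha * log(beta) - lgamma(alpha) - (alpha + 1) * log(x) - beta / x
ours = invwishart_logpdf(np.array([[x]]), np.array([[psi]]), nu)
print(abs(float(ours) - inv_gamma))  # ~0
```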
Theorems
Distribution of the inverse of a Wishart-distributed matrix
If X ~ W(Σ, ν) and X is of size p × p, then A = X^{-1} has an inverse Wishart distribution A ~ W^{-1}(Σ^{-1}, ν).[3]

Marginal and conditional distributions from an inverse Wishart-distributed matrix
Suppose A ~ W^{-1}(Ψ, ν) has an inverse Wishart distribution. Partition the matrices A and Ψ conformably with each other,

A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad \Psi = \begin{bmatrix} \Psi_{11} & \Psi_{12} \\ \Psi_{21} & \Psi_{22} \end{bmatrix},

where A_{ij} and Ψ_{ij} are p_i × p_j matrices; then we have:

- A_{11} is independent of A_{11}^{-1} A_{12} and A_{22·1}, where A_{22·1} = A_{22} - A_{21} A_{11}^{-1} A_{12} is the Schur complement of A_{11} in A;
- A_{11} ~ W^{-1}(Ψ_{11}, ν - p_2);
- A_{11}^{-1} A_{12} | A_{22·1} ~ MN_{p_1 × p_2}(Ψ_{11}^{-1} Ψ_{12}, A_{22·1} ⊗ Ψ_{11}^{-1}), where MN_{p×q}(·, ·) is a matrix normal distribution;
- A_{22·1} ~ W^{-1}(Ψ_{22·1}, ν), where Ψ_{22·1} = Ψ_{22} - Ψ_{21} Ψ_{11}^{-1} Ψ_{12}.
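The reduced degrees of freedom in the marginal can be checked numerically. The sketch below (NumPy only; the Bartlett-decomposition sampler is a standard construction, not something from the source, and the specific Ψ and seed are arbitrary choices of mine) verifies A_{11} ~ W^{-1}(Ψ_{11}, ν − p_2) for a scalar A_{11}, using the fact that 1/A_{11} then has mean (ν − p_2)/ψ_{11}:

```python
import numpy as np

def sample_wishart(rng, df, scale):
    """One draw from W_p(scale, df) via the Bartlett decomposition."""
    p = scale.shape[0]
    L = np.linalg.cholesky(scale)
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(df - i))  # chi^2 with df - i d.o.f.
        A[i, :i] = rng.standard_normal(i)
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(1)
p, nu = 3, 10
p1, p2 = 1, 2                     # partition sizes, p1 + p2 = p
Psi = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 0.5]])

# Draw A ~ W^{-1}(Psi, nu) by inverting W ~ W(Psi^{-1}, nu).
Psi_inv = np.linalg.inv(Psi)
inv_a11 = [1.0 / np.linalg.inv(sample_wishart(rng, nu, Psi_inv))[0, 0]
           for _ in range(20000)]

# A11 ~ W^{-1}_1(psi_11, nu - p2) = Inv-Gamma((nu - p2)/2, psi_11/2),
# so E[1/A11] = (nu - p2)/psi_11 -- the reduced degrees of freedom matter:
expected = (nu - p2) / Psi[0, 0]  # 4.0 here, not nu/psi_11 = 5.0
print(np.mean(inv_a11), expected)
```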
Conjugate distribution
Suppose we wish to make inference about a covariance matrix Σ whose prior has a W^{-1}(Ψ, ν) distribution. If the observations X = [x_1, …, x_n] are independent p-variate Gaussian variables drawn from a N(0, Σ) distribution, then the conditional distribution Σ | X has a W^{-1}(A + Ψ, n + ν) distribution, where

A = X X^T.
Because the prior and posterior distributions are in the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian.
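Concretely, the conjugate update only adds the scatter matrix of the observations to Ψ and the sample count to ν. A small sketch (assuming zero-mean observations as above; the function name is my own):

```python
import numpy as np

def invwishart_posterior(Psi, nu, X):
    """Conjugate update for Sigma ~ W^{-1}(Psi, nu) given the columns of X
    drawn i.i.d. from N(0, Sigma).  X has shape (p, n)."""
    A = X @ X.T                    # scatter matrix of the observations
    n = X.shape[1]
    return Psi + A, nu + n         # posterior is W^{-1}(Psi + A, nu + n)

# Example: a p = 2 prior updated with n = 5 zero-mean observations.
rng = np.random.default_rng(0)
Psi0, nu0 = np.eye(2), 4.0
X = rng.standard_normal((2, 5))
Psi_n, nu_n = invwishart_posterior(Psi0, nu0, X)
print(nu_n)  # 9.0
```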
Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter Σ, using the formula

p(x) = \frac{p(x \mid \Sigma)\, p(\Sigma)}{p(\Sigma \mid x)}

and the linear algebra identity v^T \Omega v = \operatorname{tr}(\Omega v v^T):

f_{X \mid \Psi, \nu}(x) = \int f_{X \mid \Sigma = \sigma}(x)\, f_{\Sigma \mid \Psi, \nu}(\sigma)\, d\sigma = \frac{\left|\Psi\right|^{\nu/2}\, \Gamma_p\!\left(\frac{\nu + n}{2}\right)}{\pi^{np/2} \left|\Psi + A\right|^{(\nu+n)/2}\, \Gamma_p\!\left(\frac{\nu}{2}\right)}
(this is useful because the variance matrix Σ is not known in practice, but Ψ is known a priori, and A can be obtained from the data, so the right hand side can be evaluated directly). The inverse-Wishart distribution as a prior can also be constructed from existing transferred prior knowledge.[4]

Moments
The following is based on Press, S. J. (1982) "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.
Let W ~ W(Ψ^{-1}, ν) with ν ≥ p and X ≐ W^{-1}, so that X ~ W^{-1}(Ψ, ν).
The mean:[3]

\operatorname{E}[X] = \frac{\Psi}{\nu - p - 1}, \qquad \text{for } \nu > p + 1.
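A quick Monte Carlo check of the mean (a sketch of mine, not from the source): sample W ~ W(Ψ^{-1}, ν) via the standard Bartlett decomposition, invert to obtain draws of X ~ W^{-1}(Ψ, ν), and compare the empirical mean with Ψ/(ν − p − 1):

```python
import numpy as np

def sample_wishart(rng, df, scale):
    """One draw from W_p(scale, df) via the Bartlett decomposition."""
    p = scale.shape[0]
    L = np.linalg.cholesky(scale)
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(df - i))  # chi^2 with df - i d.o.f.
        A[i, :i] = rng.standard_normal(i)
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(0)
p, nu = 3, 10
Psi = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.2],
                [0.0, 0.2, 0.5]])

# X ~ W^{-1}(Psi, nu) is obtained by inverting W ~ W(Psi^{-1}, nu).
Psi_inv = np.linalg.inv(Psi)
emp_mean = np.mean([np.linalg.inv(sample_wishart(rng, nu, Psi_inv))
                    for _ in range(20000)], axis=0)
expected = Psi / (nu - p - 1)     # E[X] = Psi / (nu - p - 1)
print(np.max(np.abs(emp_mean - expected)))  # small sampling error
```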
The variance of each element of X:

\operatorname{Var}(x_{ij}) = \frac{(\nu - p + 1)\psi_{ij}^2 + (\nu - p - 1)\psi_{ii}\psi_{jj}}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)}
The variance of the diagonal uses the same formula as above with i = j, which simplifies to:

\operatorname{Var}(x_{ii}) = \frac{2\psi_{ii}^2}{(\nu - p - 1)^2(\nu - p - 3)}.
The covariance of elements of X is given by:

\operatorname{Cov}(x_{ij}, x_{k\ell}) = \frac{2\psi_{ij}\psi_{k\ell} + (\nu - p - 1)(\psi_{ik}\psi_{j\ell} + \psi_{i\ell}\psi_{kj})}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)}
The same results are expressed in Kronecker product form by von Rosen[5] as follows:

\begin{align}
\operatorname{E}\left(W^{-1} \otimes W^{-1}\right) &= c_1 \Psi \otimes \Psi + c_2 \operatorname{vec}(\Psi)\operatorname{vec}(\Psi)^T + c_2 K_{pp}\, \Psi \otimes \Psi \\
\operatorname{Cov}_\otimes\left(W^{-1}, W^{-1}\right) &= (c_1 - c_3)\, \Psi \otimes \Psi + c_2 \operatorname{vec}(\Psi)\operatorname{vec}(\Psi)^T + c_2 K_{pp}\, \Psi \otimes \Psi
\end{align}

where

\begin{align}
c_2 &= \left[(\nu - p)(\nu - p - 1)(\nu - p - 3)\right]^{-1} \\
c_1 &= (\nu - p - 2)\, c_2 \\
c_3 &= (\nu - p - 1)^{-2},
\end{align}

K_{pp} is the commutation matrix, and

\operatorname{Cov}_\otimes\left(W^{-1}, W^{-1}\right) = \operatorname{E}\left(W^{-1} \otimes W^{-1}\right) - \operatorname{E}\left(W^{-1}\right) \otimes \operatorname{E}\left(W^{-1}\right).
There appears to be a typo in the paper whereby the coefficient of K_{pp}\, \Psi \otimes \Psi is given as c_1 rather than c_2, and the expression for the mean square inverse Wishart, Corollary 3.1, should read

\operatorname{E}\left[W^{-1} W^{-1}\right] = (c_1 + c_2)\, \Sigma^{-1} \Sigma^{-1} + c_2\, \Sigma^{-1} \operatorname{tr}(\Sigma^{-1}).
To show how the interacting terms become sparse when the covariance is diagonal, let Ψ = I_3 and introduce arbitrary parameters u, v, w:

\operatorname{E}\left(W^{-1} \otimes W^{-1}\right) = u\, \Psi \otimes \Psi + v \operatorname{vec}(\Psi)\operatorname{vec}(\Psi)^T + w K_{pp}\, \Psi \otimes \Psi,

where \operatorname{vec} denotes the matrix vectorization operator. Then the second moment matrix becomes
\operatorname{E}\left(W^{-1} \otimes W^{-1}\right) = \begin{bmatrix}
u+v+w & \cdot & \cdot & \cdot & v & \cdot & \cdot & \cdot & v \\
\cdot & u & \cdot & w & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & u & \cdot & \cdot & \cdot & w & \cdot & \cdot \\
\cdot & w & \cdot & u & \cdot & \cdot & \cdot & \cdot & \cdot \\
v & \cdot & \cdot & \cdot & u+v+w & \cdot & \cdot & \cdot & v \\
\cdot & \cdot & \cdot & \cdot & \cdot & u & \cdot & w & \cdot \\
\cdot & \cdot & w & \cdot & \cdot & \cdot & u & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & w & \cdot & u & \cdot \\
v & \cdot & \cdot & \cdot & v & \cdot & \cdot & \cdot & u+v+w \\
\end{bmatrix}
which is non-zero only for entries involving the correlations of diagonal elements of W^{-1}; all other elements are mutually uncorrelated, though not necessarily statistically independent. The variances of the Wishart product are also obtained by Cook et al.[6] in the singular case and, by extension, in the full rank case.
Muirhead[7] shows in Theorem 3.2.8 that if A ~ W_p(ν, Σ) and V is an arbitrary p-vector, independent of A, then V^T A V ~ W_1(ν, V^T Σ V) and

\frac{V^T \Sigma^{-1} V}{V^T A^{-1} V} \sim \chi^2_{\nu - p + 1},

one degree of freedom being relinquished by estimation of the sample mean in the latter. Similarly, Bodnar et al. further find that V^T A^{-1} V / V^T \Sigma^{-1} V has an inverse chi-squared distribution, and setting V = (1, 0, \ldots, 0)^T the marginal distribution of the leading diagonal element is thus

\frac{[A^{-1}]_{1,1}}{[\Sigma^{-1}]_{1,1}} \sim \frac{x^{-k/2 - 1} e^{-1/(2x)}}{2^{k/2}\, \Gamma_1(k/2)}, \qquad k = \nu - p + 1,

and by rotating V end-around a similar result applies to all diagonal elements [A^{-1}]_{i,i}.
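The χ² result for the quadratic-form ratio lends itself to a direct simulation (a NumPy sketch with a standard Bartlett-decomposition Wishart sampler; the specific Σ, V, and seed are arbitrary choices of mine). The ratio should average ν − p + 1 rather than ν:

```python
import numpy as np

def sample_wishart(rng, df, scale):
    """One draw from W_p(scale, df) via the Bartlett decomposition."""
    p = scale.shape[0]
    L = np.linalg.cholesky(scale)
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(df - i))  # chi^2 with df - i d.o.f.
        A[i, :i] = rng.standard_normal(i)
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(2)
p, nu = 3, 10
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])
V = np.array([1.0, -2.0, 0.5])    # arbitrary fixed vector, independent of A

num = V @ np.linalg.inv(Sigma) @ V
ratios = [num / (V @ np.linalg.inv(sample_wishart(rng, nu, Sigma)) @ V)
          for _ in range(20000)]

# V^T Sigma^{-1} V / V^T A^{-1} V ~ chi^2 with nu - p + 1 = 8 d.o.f.
print(np.mean(ratios))            # close to 8, not nu = 10
```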
A corresponding result in the complex Wishart case was shown by Brennan and Reed,[8] and the uncorrelated inverse complex Wishart W^{-1}(I, ν, p) was shown by Shaman[9] to have a diagonal statistical structure in which the leading diagonal elements are correlated, while all other elements are uncorrelated.
Related distributions
- For p = 1 (i.e. univariate) and with α = ν/2, β = Ψ/2, and x = X, the probability density function of the inverse-Wishart distribution becomes

p(x \mid \alpha, \beta) = \frac{\beta^\alpha x^{-\alpha - 1} \exp(-\beta/x)}{\Gamma_1(\alpha)},

i.e., the inverse-gamma distribution with shape parameter α and scale parameter β, where Γ_1(·) is the ordinary Gamma function.
- Another generalization has been termed the generalized inverse Wishart distribution, GW^{-1}(Ψ, ν, S). A p × p positive definite matrix X is said to be distributed as GW^{-1}(Ψ, ν, S) if Y = X^{1/2} S^{-1} X^{1/2} is distributed as W^{-1}(Ψ, ν). Here X^{1/2} denotes the symmetric matrix square root of X, the parameters Ψ and S are p × p positive definite matrices, and the parameter ν is a positive scalar larger than 2p. Note that when S is equal to an identity matrix, GW^{-1}(Ψ, ν, S) = W^{-1}(Ψ, ν). This generalized inverse Wishart distribution has been applied to estimating the distributions of multivariate autoregressive processes.[10]
- When the scale matrix is an identity matrix, Ψ = I, and Θ is an arbitrary orthogonal matrix, replacement of X by ΘXΘ^T does not change the pdf of X, so W^{-1}(I, ν) belongs to the family of spherically invariant random processes (SIRPs) in some sense. Thus, an arbitrary p-vector V with unit length V^T V = 1 can be rotated into the vector ΘV = (1, 0, \ldots, 0)^T without changing the pdf of V^T X V; moreover, Θ can be a permutation matrix which exchanges diagonal elements. It follows that the diagonal elements of X are identically inverse chi squared distributed, with pdf f_{x_{11}} given in the previous section, though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al.,[11] where it is expressed in the inverse form.
- As is the case with the Wishart distribution, linear transformations of the distribution yield a modified inverse Wishart distribution. If X ~ W^{-1}_p(Ψ, ν) and Θ is a p × p matrix of full rank, then[12]

\Theta X \Theta^T \sim \mathcal{W}^{-1}_p\left(\Theta \Psi \Theta^T, \nu\right).

If X ~ W^{-1}_p(Ψ, ν) and Θ is a q × p matrix of full rank q, then, consistently with the marginal distribution of the leading block given above,

\Theta X \Theta^T \sim \mathcal{W}^{-1}_q\left(\Theta \Psi \Theta^T, \nu - p + q\right).
Notes and References
1. Haff, L. R. (1979). "An identity for the Wishart distribution with applications". Journal of Multivariate Analysis. 9 (4): 531–544. doi:10.1016/0047-259x(79)90056-3.
2. Gelman, A.; Carlin, J. B.; Stern, H. S.; Dunson, D. B.; Vehtari, A.; Rubin, D. B. (2013). Bayesian Data Analysis (3rd ed.). Boca Raton: Chapman and Hall/CRC. ISBN 9781439840955.
3. Mardia, K. V.; Kent, J. T.; Bibby, J. M. (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8.
4. Shahrokh Esfahani, M.; Dougherty, E. (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE Transactions on Bioinformatics and Computational Biology. 11 (1): 202–218. doi:10.1109/tcbb.2013.143.
5. von Rosen, D. (1988). "Moments for the Inverted Wishart Distribution". Scandinavian Journal of Statistics. 15: 97–109.
6. Cook, R. D.; Forzani, L. (2019). "On the mean and variance of the generalized inverse of a singular Wishart matrix". Electronic Journal of Statistics. 5.
7. Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley. p. 93. ISBN 0-471-76985-1.
8. Brennan, L. E.; Reed, I. S. (1982). "An Adaptive Array Signal Processing Algorithm for Communications". IEEE Transactions on Aerospace and Electronic Systems. 18 (1): 120–130. doi:10.1109/TAES.1982.309212.
9. Shaman, P. (1980). "The Inverted Complex Wishart Distribution and Its Application to Spectral Estimation". Journal of Multivariate Analysis. 10: 51–59. doi:10.1016/0047-259X(80)90081-0.
10. Triantafyllopoulos, K. (2011). "Real-time covariance estimation for the local level model". Journal of Time Series Analysis. 32 (2): 93–107. doi:10.1111/j.1467-9892.2010.00686.x.
11. Bodnar, T.; Mazur, S.; Podgórski, K. (2015). "Singular Inverse Wishart Distribution with Application to Portfolio Theory". Working Papers in Statistics No. 2, Department of Statistics, Lund University. pp. 1–17.
12. Bodnar, T.; Mazur, S.; Podgórski, K. (2015). "Singular Inverse Wishart Distribution with Application to Portfolio Theory". Journal of Multivariate Analysis. 143: 314–326.