In mathematics, the multivariate gamma function Γp is a generalization of the gamma function. It is useful in multivariate statistics, appearing in the probability density function of the Wishart and inverse Wishart distributions, and the matrix variate beta distribution.[1]
It has two equivalent definitions. One is given as the following integral over the positive-definite real p × p matrices:

\Gamma_p(a) = \int_{S>0} \exp\left(-\operatorname{tr}(S)\right) \left|S\right|^{a-\frac{p+1}{2}} \, dS,

where |S| denotes the determinant of S.
The other, more useful in practice, is

\Gamma_p(a) = \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\left(a + \tfrac{1-j}{2}\right).

In both definitions, a is a complex number whose real part satisfies \Re(a) > (p-1)/2. Note that \Gamma_1(a) reduces to the ordinary gamma function. The second definition allows one to directly obtain the recursive relationship for p \ge 2:
\Gamma_p(a) = \pi^{(p-1)/2} \Gamma(a) \Gamma_{p-1}\left(a - \tfrac{1}{2}\right) = \pi^{(p-1)/2} \Gamma_{p-1}(a) \Gamma\left(a + \tfrac{1-p}{2}\right).
Thus
\Gamma_2(a) = \pi^{1/2} \Gamma(a) \Gamma\left(a - \tfrac{1}{2}\right)
\Gamma_3(a) = \pi^{3/2} \Gamma(a) \Gamma\left(a - \tfrac{1}{2}\right) \Gamma(a - 1)
and so on.
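The product formula and the recursion above can be checked numerically. The following is a minimal sketch using only the Python standard library; the function name `multigamma` is our own choice, not from the article.

```python
# Sketch: the product definition of the multivariate gamma function,
# Gamma_p(a) = pi^{p(p-1)/4} * prod_{j=1}^{p} Gamma(a + (1-j)/2),
# checked against Gamma_1 = Gamma and the recursion
# Gamma_p(a) = pi^{(p-1)/2} Gamma(a) Gamma_{p-1}(a - 1/2).
import math

def multigamma(p, a):
    """Multivariate gamma function via the finite product formula."""
    prefactor = math.pi ** (p * (p - 1) / 4)
    return prefactor * math.prod(
        math.gamma(a + (1 - j) / 2) for j in range(1, p + 1))

a = 3.0
# Gamma_1 reduces to the ordinary gamma function.
assert math.isclose(multigamma(1, a), math.gamma(a))
# The recursive relationship for p >= 2.
for p in (2, 3, 4):
    lhs = multigamma(p, a)
    rhs = math.pi ** ((p - 1) / 2) * math.gamma(a) * multigamma(p - 1, a - 0.5)
    assert math.isclose(lhs, rhs)
```

Working with `math.lgamma` (log-gamma) instead of `math.gamma` is advisable for large p or a, since the product overflows quickly.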
This can also be extended to non-integer values of p with the expression

\Gamma_p(a) = \pi^{p(p-1)/4} \frac{G\left(a + \tfrac{1}{2}\right) G(a+1)}{G\left(a + \tfrac{2-p}{2}\right) G\left(a + \tfrac{1-p}{2}\right)},

where G is the Barnes G-function, the indefinite product of the gamma function.
The function was derived by Anderson[2] from first principles; he also cites earlier work by Wishart, Mahalanobis, and others.
There also exists a version of the multivariate gamma function which, instead of a single complex number, takes a p-dimensional vector of complex numbers as its argument.
We may define the multivariate digamma function as

\psi_p(a) = \frac{\partial \log \Gamma_p(a)}{\partial a} = \sum_{i=1}^{p} \psi\left(a + \tfrac{1-i}{2}\right),

and the general multivariate polygamma function as

\psi_p^{(n)}(a) = \frac{\partial^n \log \Gamma_p(a)}{\partial a^n} = \sum_{i=1}^{p} \psi^{(n)}\left(a + \tfrac{1-i}{2}\right).
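The multivariate digamma function is a plain sum of ordinary digamma values, which makes it easy to sketch. Python's standard library has no digamma, so the sketch below approximates \psi by a central difference of `math.lgamma` (in practice one would use `scipy.special.psi`); the helper names are our own.

```python
# Sketch: psi_p(a) = sum_{i=1}^{p} psi(a + (1-i)/2), with psi approximated
# numerically since the standard library lacks a digamma function.
import math

def digamma(x, h=1e-6):
    """Approximate psi(x) = d/dx log Gamma(x) by a central difference."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def multidigamma(p, a):
    """Multivariate digamma function as a sum of ordinary digamma values."""
    return sum(digamma(a + (1 - i) / 2) for i in range(1, p + 1))

def log_multigamma(p, a):
    """log Gamma_p(a) from the product formula, in log space."""
    return (p * (p - 1) / 4) * math.log(math.pi) + sum(
        math.lgamma(a + (1 - j) / 2) for j in range(1, p + 1))

# Consistency check: psi_p(a) should match the derivative of log Gamma_p(a).
p, a, h = 3, 4.0, 1e-6
numeric = (log_multigamma(p, a + h) - log_multigamma(p, a - h)) / (2 * h)
assert math.isclose(multidigamma(p, a), numeric, rel_tol=1e-4)
```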
Since

\Gamma_p(a) = \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\left(a + \tfrac{1-j}{2}\right),
it follows that
\frac{\partial \Gamma_p(a)}{\partial a} = \pi^{p(p-1)/4} \sum_{i=1}^{p} \frac{\partial \Gamma\left(a + \tfrac{1-i}{2}\right)}{\partial a} \prod_{j=1,\, j \neq i}^{p} \Gamma\left(a + \tfrac{1-j}{2}\right).
Since

\frac{\partial \Gamma\left(a + \tfrac{1-i}{2}\right)}{\partial a} = \psi\left(a + \tfrac{1-i}{2}\right) \Gamma\left(a + \tfrac{1-i}{2}\right),
it follows that
\begin{align}
\frac{\partial \Gamma_p(a)}{\partial a} &= \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\left(a + \tfrac{1-j}{2}\right) \sum_{i=1}^{p} \psi\left(a + \tfrac{1-i}{2}\right) \\[4pt]
&= \Gamma_p(a) \sum_{i=1}^{p} \psi\left(a + \tfrac{1-i}{2}\right).
\end{align}
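The derivative identity just derived, \partial \Gamma_p / \partial a = \Gamma_p(a) \sum_i \psi(a + (1-i)/2), can be verified numerically. A minimal sketch, again standard-library only with our own helper names and a finite-difference digamma:

```python
# Sketch: check dGamma_p/da = Gamma_p(a) * sum_i psi(a + (1-i)/2)
# by comparing a central difference of Gamma_p with the closed form.
import math

def multigamma(p, a):
    """Gamma_p(a) via the product formula."""
    return math.pi ** (p * (p - 1) / 4) * math.prod(
        math.gamma(a + (1 - j) / 2) for j in range(1, p + 1))

def psi(x, h=1e-6):
    """Central-difference approximation to the digamma function."""
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

p, a, h = 3, 4.0, 1e-6
# Left side: numerical derivative of Gamma_p at a.
lhs = (multigamma(p, a + h) - multigamma(p, a - h)) / (2 * h)
# Right side: Gamma_p(a) times the multivariate digamma sum.
rhs = multigamma(p, a) * sum(psi(a + (1 - i) / 2) for i in range(1, p + 1))
assert math.isclose(lhs, rhs, rel_tol=1e-4)
```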