Multivariate gamma function

In mathematics, the multivariate gamma function Γp is a generalization of the gamma function. It is useful in multivariate statistics, appearing in the probability density function of the Wishart and inverse Wishart distributions, and the matrix variate beta distribution.[1]

It has two equivalent definitions. One is given as the following integral over the p \times p positive-definite real matrices:

\Gamma_p(a) = \int_{S>0} \exp\left(-{\rm tr}(S)\right) \left|S\right|^{a-(p+1)/2} \, dS,

where |S| denotes the determinant of S.
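For concreteness, the integral can be checked numerically in low dimension. The following sketch (assuming SciPy; the truncation of the domain at 30 and the choice a = 3, p = 2 are illustrative, and the triple quadrature may take a few seconds) integrates over the distinct entries (s_{11}, s_{22}, s_{12}) of a positive-definite 2 \times 2 matrix and compares the result with the closed form \Gamma_2(3) = \pi^{1/2}\Gamma(3)\Gamma(5/2) = 3\pi/2.

    import numpy as np
    from scipy.integrate import tplquad
    from scipy.special import gamma

    a, p = 3.0, 2  # illustrative values; the integral needs Re(a) > (p - 1)/2

    def integrand(s12, s22, s11):
        # exp(-tr(S)) * det(S)^(a - (p + 1)/2) on the positive-definite cone
        det = s11 * s22 - s12 ** 2
        return np.exp(-(s11 + s22)) * det ** (a - (p + 1) / 2)

    # dS is Lebesgue measure on (s11, s22, s12); S > 0 means s11, s22 > 0 and
    # s12^2 < s11*s22.  The tail beyond 30 is negligible because of exp(-tr(S)).
    value, _ = tplquad(integrand,
                       0.0, 30.0,                              # s11 range
                       0.0, 30.0,                              # s22 range
                       lambda s11, s22: -np.sqrt(s11 * s22),   # s12 lower limit
                       lambda s11, s22: np.sqrt(s11 * s22))    # s12 upper limit

    closed_form = np.pi ** 0.5 * gamma(a) * gamma(a - 0.5)     # Gamma_2(3) = 3*pi/2
    print(value, closed_form)                                  # both approx 4.7124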

The other definition, more useful for obtaining a numerical result, is:
\Gamma_p(a) = \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma\left(a + \frac{1-j}{2}\right).
In both definitions, a is a complex number whose real part satisfies \Re(a) > (p-1)/2. Note that \Gamma_1(a) reduces to the ordinary gamma function. The second of the above definitions allows one to obtain directly the recursive relationships for p \ge 2:

\Gamma_p(a) = \pi^{(p-1)/2} \Gamma(a) \Gamma_{p-1}\left(a - \tfrac{1}{2}\right) = \pi^{(p-1)/2} \Gamma_{p-1}(a) \Gamma\left(a + \tfrac{1-p}{2}\right).

Thus

\Gamma_2(a) = \pi^{1/2} \Gamma(a) \Gamma\left(a - \tfrac{1}{2}\right),

\Gamma_3(a) = \pi^{3/2} \Gamma(a) \Gamma\left(a - \tfrac{1}{2}\right) \Gamma(a - 1),

and so on.
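As a quick numerical illustration (a sketch assuming SciPy, whose scipy.special.multigammaln returns \log\Gamma_p(a); the values a = 4, p = 3 are arbitrary), both the product formula and the first recursion can be checked in log space:

    import math
    from scipy.special import multigammaln

    def log_mvgamma(a, p):
        # log Gamma_p(a) from the product formula
        return (p * (p - 1) / 4) * math.log(math.pi) + sum(
            math.lgamma(a + (1 - j) / 2) for j in range(1, p + 1))

    a, p = 4.0, 3
    print(log_mvgamma(a, p), multigammaln(a, p))   # product formula vs. SciPy

    # recursion: Gamma_p(a) = pi^((p-1)/2) * Gamma(a) * Gamma_{p-1}(a - 1/2)
    lhs = multigammaln(a, p)
    rhs = ((p - 1) / 2) * math.log(math.pi) + math.lgamma(a) + multigammaln(a - 0.5, p - 1)
    print(lhs, rhs)                                # should agree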

This can also be extended to non-integer values of p with the expression

\Gamma_p(a) = \pi^{p(p-1)/4} \frac{G\left(a + \tfrac{1}{2}\right) G(a+1)}{G\left(a + \tfrac{1-p}{2}\right) G\left(a + 1 - \tfrac{p}{2}\right)},

where G is the Barnes G-function, the indefinite product of the gamma function.
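A sketch of this extension, assuming mpmath (whose barnesg implements the Barnes G-function); the test values are illustrative. For integer p it should reproduce the product formula, while non-integer p is handled directly:

    import mpmath as mp

    def mvgamma_barnes(a, p):
        # Gamma_p(a) via the Barnes G expression; p may be non-integer
        return (mp.pi ** (p * (p - 1) / 4)
                * mp.barnesg(a + mp.mpf(1) / 2) * mp.barnesg(a + 1)
                / (mp.barnesg(a + (1 - p) / 2) * mp.barnesg(a + 1 - p / 2)))

    def mvgamma_product(a, p):
        # Gamma_p(a) via the finite product; integer p only
        return mp.pi ** (p * (p - 1) / 4) * mp.fprod(
            mp.gamma(a + mp.mpf(1 - j) / 2) for j in range(1, p + 1))

    a = mp.mpf(3)
    print(mvgamma_barnes(a, 3), mvgamma_product(a, 3))   # should agree
    print(mvgamma_barnes(a, 2.5))                        # non-integer p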

Anderson[2] derives the function from first principles, also citing earlier work by Wishart, Mahalanobis, and others.

There also exists a version of the multivariate gamma function which, instead of a single complex number, takes a p-dimensional vector of complex numbers as its argument. It generalizes the multivariate gamma function defined above insofar as the latter is obtained by a particular choice of multivariate argument of the former.[3]

Derivatives

We may define the multivariate digamma function as

\psi_p(a) = \frac{\partial \log \Gamma_p(a)}{\partial a} = \sum_{i=1}^p \psi(a + (1-i)/2),
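As a sketch (assuming SciPy; the test values a = 4, p = 3 and the step h are arbitrary), the sum of ordinary digamma values can be compared with a numerical derivative of scipy.special.multigammaln:

    import numpy as np
    from scipy.special import digamma, multigammaln

    def multidigamma(a, p):
        # psi_p(a) = sum_{i=1}^p psi(a + (1 - i)/2)
        i = np.arange(1, p + 1)
        return digamma(a + (1 - i) / 2).sum()

    a, p, h = 4.0, 3, 1e-6
    numeric = (multigammaln(a + h, p) - multigammaln(a - h, p)) / (2 * h)
    print(multidigamma(a, p), numeric)   # should agree to high precision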

and the general polygamma function as
\psi_p^{(n)}(a) = \frac{\partial^n \log \Gamma_p(a)}{\partial a^n} = \sum_{i=1}^p \psi^{(n)}(a + (1-i)/2).
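Similarly, a sketch with scipy.special.polygamma (note that SciPy's polygamma(k, x) is the k-th derivative of the digamma function, i.e. the (k+1)-th derivative of \log\Gamma, hence the n - 1 below; the test values are arbitrary):

    import numpy as np
    from scipy.special import polygamma

    def multipolygamma(n, a, p):
        # n-th derivative of log Gamma_p(a) with respect to a, for n >= 1
        i = np.arange(1, p + 1)
        return polygamma(n - 1, a + (1 - i) / 2).sum()

    # n = 1 recovers the multivariate digamma of the previous example
    print(multipolygamma(1, 4.0, 3))
    print(multipolygamma(2, 4.0, 3))   # second derivative of log Gamma_3 at a = 4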

Calculation steps

Since

\Gamma_p(a) = \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma\left(a + \frac{1-j}{2}\right),

it follows that

\frac{\partial \Gamma_p(a)}{\partial a} = \pi^{p(p-1)/4} \sum_{i=1}^p \frac{\partial \Gamma\left(a + \frac{1-i}{2}\right)}{\partial a} \prod_{j=1,\, j \neq i}^p \Gamma\left(a + \frac{1-j}{2}\right).

Since

\frac{\partial \Gamma(a + (1-i)/2)}{\partial a} = \psi(a + (1-i)/2)\, \Gamma(a + (1-i)/2),

it follows that

\begin{align}
\frac{\partial \Gamma_p(a)}{\partial a} &= \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma(a + (1-j)/2) \sum_{i=1}^p \psi(a + (1-i)/2) \\[4pt]
&= \Gamma_p(a) \sum_{i=1}^p \psi(a + (1-i)/2).
\end{align}
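A quick numerical check of this identity (a sketch assuming SciPy; the values a = 4, p = 3 and the finite-difference step are arbitrary):

    import numpy as np
    from scipy.special import gamma, digamma

    def mvgamma(a, p):
        # Gamma_p(a) from the product formula
        j = np.arange(1, p + 1)
        return np.pi ** (p * (p - 1) / 4) * np.prod(gamma(a + (1 - j) / 2))

    a, p, h = 4.0, 3, 1e-6
    lhs = (mvgamma(a + h, p) - mvgamma(a - h, p)) / (2 * h)   # finite-difference derivative
    i = np.arange(1, p + 1)
    rhs = mvgamma(a, p) * digamma(a + (1 - i) / 2).sum()
    print(lhs, rhs)   # should agree to several significant digits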

References


  1. James, Alan T. (June 1964). "Distributions of Matrix Variates and Latent Roots Derived from Normal Samples". The Annals of Mathematical Statistics 35 (2): 475–501. doi:10.1214/aoms/1177703550. ISSN 0003-4851.
  2. Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis. New York: John Wiley and Sons. Ch. 7. ISBN 0-471-88987-3.
  3. Richards, D. St. P. (n.d.). "Chapter 35: Functions of Matrix Argument". Digital Library of Mathematical Functions. Retrieved 23 May 2022.