Matrix variate Dirichlet distribution explained

In statistics, the matrix variate Dirichlet distribution is a generalization of the matrix variate beta distribution and of the Dirichlet distribution.

Suppose $U_1,\ldots,U_r$ are $p\times p$ positive definite matrices with $I_p-\sum_{i=1}^{r}U_i$ also positive definite, where $I_p$ is the $p\times p$ identity matrix. Then we say that the $U_i$ have a matrix variate Dirichlet distribution,

$$\left(U_1,\ldots,U_r\right)\sim D_p\left(a_1,\ldots,a_r;a_{r+1}\right),$$

if their joint probability density function is

$$\left\{\beta_p\left(a_1,\ldots,a_r,a_{r+1}\right)\right\}^{-1}\prod_{i=1}^{r}\det\left(U_i\right)^{a_i-(p+1)/2}\,\det\left(I_p-\sum_{i=1}^{r}U_i\right)^{a_{r+1}-(p+1)/2},$$

where $a_i>(p-1)/2$ for $i=1,\ldots,r+1$, and $\beta_p\left(\cdot\right)$ is the multivariate beta function.
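For reference (a standard identity, stated here for convenience rather than taken from the source), the multivariate beta function in the normalizing constant can be written in terms of the multivariate gamma function $\Gamma_p$:

$$\beta_p\left(a_1,\ldots,a_{r+1}\right)=\frac{\prod_{i=1}^{r+1}\Gamma_p\left(a_i\right)}{\Gamma_p\left(\sum_{i=1}^{r+1}a_i\right)},\qquad \Gamma_p(a)=\pi^{p(p-1)/4}\prod_{j=1}^{p}\Gamma\left(a-\frac{j-1}{2}\right).$$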

If we write $U_{r+1}=I_p-\sum_{i=1}^{r}U_i$ then the PDF takes the simpler form

$$\left\{\beta_p\left(a_1,\ldots,a_{r+1}\right)\right\}^{-1}\prod_{i=1}^{r+1}\det\left(U_i\right)^{a_i-(p+1)/2},$$

on the understanding that $\sum_{i=1}^{r+1}U_i=I_p$.
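As a minimal numerical illustration of this simpler form, the sketch below evaluates the log-density in Python; the function name matrix_dirichlet_logpdf and the use of scipy.special.multigammaln to assemble $\beta_p$ are choices made here for illustration, not part of the source.

    import numpy as np
    from scipy.special import multigammaln

    def matrix_dirichlet_logpdf(U, a):
        """Log-density of (U_1, ..., U_r) under D_p(a_1, ..., a_{r+1}),
        using the simpler form with U_{r+1} = I_p - sum_i U_i.

        U : list of r symmetric positive definite p x p arrays
        a : sequence of r+1 shape parameters, each > (p - 1)/2
        """
        p = U[0].shape[0]
        a = np.asarray(a, dtype=float)
        U_full = list(U) + [np.eye(p) - sum(U)]          # append U_{r+1}
        # log beta_p(a) = sum_i log Gamma_p(a_i) - log Gamma_p(sum_i a_i)
        log_beta = sum(multigammaln(ai, p) for ai in a) - multigammaln(a.sum(), p)
        log_dets = np.array([np.linalg.slogdet(Ui)[1] for Ui in U_full])
        return float(np.dot(a - (p + 1) / 2.0, log_dets) - log_beta)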

Theorems

Generalization of chi square-Dirichlet result

Suppose $S_i\sim W_p\left(n_i,\Sigma\right)$, $i=1,\ldots,r+1$, are independently distributed Wishart $p\times p$ positive definite matrices. Then, defining $U_i=S^{-1/2}S_i\left(S^{-1/2}\right)^T$ (where $S=\sum_{i=1}^{r+1}S_i$ is the sum of the matrices and $S^{1/2}\left(S^{1/2}\right)^T$ is any reasonable factorization of $S$), we have

$$\left(U_1,\ldots,U_r\right)\sim D_p\left(n_1/2,\ldots,n_{r+1}/2\right).$$
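This construction suggests a direct way to simulate from the distribution. The following Python sketch takes the Cholesky factor as the "reasonable factorization" and uses scipy.stats.wishart; the function name sample_matrix_dirichlet and these implementation choices are illustrative assumptions, not prescribed by the source.

    import numpy as np
    from scipy.stats import wishart

    def sample_matrix_dirichlet(n, Sigma, rng=None):
        """Draw (U_1, ..., U_r) ~ D_p(n_1/2, ..., n_{r+1}/2) via the Wishart
        construction U_i = S^{-1/2} S_i (S^{-1/2})^T with S = sum_i S_i.

        n     : sequence of r+1 degrees of freedom (each n_i > p - 1)
        Sigma : common p x p positive definite scale matrix
        """
        rng = np.random.default_rng(rng)
        S_list = [wishart.rvs(df=ni, scale=Sigma, random_state=rng) for ni in n]
        S = sum(S_list)
        # Take S^{1/2} = L from the Cholesky factorization S = L L^T,
        # so S^{-1/2} = L^{-1} and the normalized matrices sum to I_p.
        L_inv = np.linalg.inv(np.linalg.cholesky(S))
        U_list = [L_inv @ Si @ L_inv.T for Si in S_list]
        return U_list[:-1]      # U_{r+1} = I_p - (U_1 + ... + U_r)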

Marginal distribution

If $\left(U_1,\ldots,U_r\right)\sim D_p\left(a_1,\ldots,a_{r+1}\right)$, and if $s\leq r$, then:

$$\left(U_1,\ldots,U_s\right)\sim D_p\left(a_1,\ldots,a_s,\sum_{i=s+1}^{r+1}a_i\right).$$
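In particular, taking $s=1$ recovers the link with the matrix variate beta distribution mentioned in the introduction (a worked special case added here for illustration): writing $b=\sum_{i=2}^{r+1}a_i$, the single matrix $U_1$ has density

$$\left\{\beta_p\left(a_1,b\right)\right\}^{-1}\det\left(U_1\right)^{a_1-(p+1)/2}\det\left(I_p-U_1\right)^{b-(p+1)/2},$$

which is the density of a matrix variate beta (type I) distribution with parameters $(a_1,b)$.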

Conditional distribution

Also, with the same notation as above, the density of $\left(U_{s+1},\ldots,U_r\right)\left|\left(U_1,\ldots,U_s\right)\right.$ is given by

$$\frac{\prod_{i=s+1}^{r+1}\det\left(U_i\right)^{a_i-(p+1)/2}}{\beta_p\left(a_{s+1},\ldots,a_{r+1}\right)\det\left(I_p-\sum_{i=1}^{s}U_i\right)^{\sum_{i=s+1}^{r+1}a_i-(p+1)/2}},$$

where we write $U_{r+1}=I_p-\sum_{i=1}^{r}U_i$.
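A direct numerical transcription of this conditional density, offered as a sketch only (the function name conditional_logpdf and its argument layout are illustrative assumptions), could look as follows:

    import numpy as np
    from scipy.special import multigammaln

    def conditional_logpdf(U_rest, U_given, a_rest):
        """Log-density of (U_{s+1}, ..., U_r) given (U_1, ..., U_s).

        U_rest  : list of U_{s+1}, ..., U_r (p x p arrays)
        U_given : list of U_1, ..., U_s (p x p arrays)
        a_rest  : sequence of a_{s+1}, ..., a_{r+1}
        """
        p = U_given[0].shape[0]
        a_rest = np.asarray(a_rest, dtype=float)
        remaining = np.eye(p) - sum(U_given)                 # I_p - sum_{i<=s} U_i
        U_full = list(U_rest) + [remaining - sum(U_rest)]    # append U_{r+1}
        log_beta = (sum(multigammaln(ai, p) for ai in a_rest)
                    - multigammaln(a_rest.sum(), p))
        log_dets = np.array([np.linalg.slogdet(Ui)[1] for Ui in U_full])
        numer = np.dot(a_rest - (p + 1) / 2.0, log_dets)
        denom = log_beta + (a_rest.sum() - (p + 1) / 2.0) * np.linalg.slogdet(remaining)[1]
        return float(numer - denom)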

Partitioned distribution

Suppose $\left(U_1,\ldots,U_r\right)\sim D_p\left(a_1,\ldots,a_{r+1}\right)$ and suppose that $S_1,\ldots,S_t$ is a partition of $\left[r+1\right]=\left\{1,\ldots,r+1\right\}$ (that is, $\cup_{j=1}^{t}S_j=\left[r+1\right]$ and $S_i\cap S_j=\emptyset$ if $i\neq j$). Then, writing $U_{(j)}=\sum_{i\in S_j}U_i$ and $a_{(j)}=\sum_{i\in S_j}a_i$ (with $U_{r+1}=I_p-\sum_{i=1}^{r}U_i$), we have:

$$\left(U_{(1)},\ldots,U_{(t)}\right)\sim D_p\left(a_{(1)},\ldots,a_{(t)}\right).$$
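For example (a small illustrative case, not taken from the source), with $r=2$ and the partition $S_1=\left\{1,2\right\}$, $S_2=\left\{3\right\}$, the theorem gives

$$\left(U_1+U_2,\,U_3\right)\sim D_p\left(a_1+a_2,\,a_3\right),$$

so the aggregated matrix $U_1+U_2$ again follows a matrix variate beta-type law with parameters $(a_1+a_2,a_3)$, where $U_3=I_p-U_1-U_2$.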

Partitions

Suppose $\left(U_1,\ldots,U_r\right)\sim D_p\left(a_1,\ldots,a_{r+1}\right)$. Define

$$U_i=\left(\begin{array}{cc} U_{11(i)} & U_{12(i)}\\ U_{21(i)} & U_{22(i)}\end{array}\right),\qquad i=1,\ldots,r,$$

where $U_{11(i)}$ is $p_1\times p_1$ and $U_{22(i)}$ is $p_2\times p_2$. Writing the Schur complement $U_{22\cdot 1(i)}=U_{22(i)}-U_{21(i)}U_{11(i)}^{-1}U_{12(i)}$, we have

$$\left(U_{11(1)},\ldots,U_{11(r)}\right)\sim D_{p_1}\left(a_1,\ldots,a_{r+1}\right)$$

and

$$\left(U_{22\cdot 1(1)},\ldots,U_{22\cdot 1(r)}\right)\sim D_{p_2}\left(a_1-p_1/2,\ldots,a_r-p_1/2,a_{r+1}-p_1/2+p_1 r/2\right).$$
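The block extraction and Schur complements appearing in this result are mechanical to compute; a small numpy sketch (the function name schur_blocks is an illustrative choice) might look like this:

    import numpy as np

    def schur_blocks(U_list, p1):
        """Split each p x p matrix into blocks with U_11(i) of size p1 x p1 and
        return the (1,1) blocks together with the Schur complements
        U_{22.1}(i) = U_22(i) - U_21(i) U_11(i)^{-1} U_12(i)."""
        U11_list, U221_list = [], []
        for U in U_list:
            U11, U12 = U[:p1, :p1], U[:p1, p1:]
            U21, U22 = U[p1:, :p1], U[p1:, p1:]
            U11_list.append(U11)
            U221_list.append(U22 - U21 @ np.linalg.solve(U11, U12))
        return U11_list, U221_list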

