In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted \operatorname{Dir}(\boldsymbol\alpha), is a family of continuous multivariate probability distributions parameterized by a vector \boldsymbol\alpha of positive reals.
The infinite-dimensional generalization of the Dirichlet distribution is the Dirichlet process.
The Dirichlet distribution of order K ≥ 2 with parameters α_1, ..., α_K > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space \mathbb{R}^{K-1} given by

f\left(x_1,\ldots,x_K;\alpha_1,\ldots,\alpha_K\right) = \frac{1}{B(\boldsymbol\alpha)}\prod_{i=1}^{K} x_i^{\alpha_i-1}

where \{x_k\}_{k=1}^{k=K} belong to the standard (K − 1)-simplex, or in other words:

\sum_{i=1}^{K} x_i = 1 \quad\text{and}\quad x_i\in\left[0,1\right] \text{ for all } i\in\{1,\ldots,K\}.
The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:
B(\boldsymbol\alpha) = \frac{\prod_{i=1}^{K}\Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^{K}\alpha_i\right)}, \qquad \boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_K).
The support of the Dirichlet distribution is the set of K-dimensional vectors \boldsymbol x whose entries are real numbers in the interval [0,1] such that \|\boldsymbol x\|_1 = 1, i.e. the sum of the coordinates is equal to 1.
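As a concrete numerical check of the density above, the sketch below (assuming NumPy and SciPy are available; the helper name dirichlet_logpdf is illustrative) evaluates the log-density via the log of the multivariate beta function:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gammaln

def dirichlet_logpdf(x, alpha):
    """Log-density of Dir(alpha) at a point x in the interior of the (K-1)-simplex."""
    x, alpha = np.asarray(x, dtype=float), np.asarray(alpha, dtype=float)
    log_B = gammaln(alpha).sum() - gammaln(alpha.sum())  # log of the multivariate beta function
    return np.sum((alpha - 1.0) * np.log(x)) - log_B

print(dirichlet_logpdf([0.2, 0.3, 0.5], [1.0, 2.0, 3.0]))
</syntaxhighlight>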
A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector \boldsymbol\alpha have the same value α, called the concentration parameter. In this case the density simplifies to

f(x_1,\ldots,x_K;\alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^{K}}\prod_{i=1}^{K} x_i^{\alpha-1}.
When α = 1, the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (K − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 favor dense, evenly distributed variates, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 favor sparse variates, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of them.
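To illustrate this effect, the sketch below (assuming NumPy; the values of K and α are arbitrary) draws one symmetric Dirichlet sample at several concentration values:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
K = 10
for a in (0.1, 1.0, 10.0):
    sample = rng.dirichlet(np.full(K, a))
    # a < 1: most mass in a few components (sparse); a = 1: uniform over the simplex;
    # a > 1: components are nearly equal (dense).
    print(a, np.round(sample, 3))
</syntaxhighlight>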
More generally, the parameter vector is sometimes written as the product \alpha\boldsymbol{n} of a (scalar) concentration parameter α and a (vector) base measure \boldsymbol{n}=(n_1,\ldots,n_K), where \boldsymbol{n} lies within the (K − 1)-simplex (i.e. its coordinates n_i sum to one).
If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter K, the dimension of the distribution, is the uniform distribution on the (K − 1)-simplex.
Let X=(X_1,\ldots,X_K)\sim\operatorname{Dir}(\boldsymbol\alpha) and let \alpha_0=\sum_{i=1}^{K}\alpha_i. Then

\operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0},

\operatorname{Var}[X_i] = \frac{\alpha_i(\alpha_0-\alpha_i)}{\alpha_0^{2}(\alpha_0+1)}.

Furthermore, if i ≠ j,

\operatorname{Cov}[X_i,X_j] = \frac{-\alpha_i\alpha_j}{\alpha_0^{2}(\alpha_0+1)}.

The covariance matrix is thus singular.
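These formulas translate directly into code; the sketch below (assuming NumPy; the helper name dirichlet_mean_cov is illustrative) computes the mean vector and covariance matrix and checks that the covariance rows sum to zero, reflecting the singularity noted above:

<syntaxhighlight lang="python">
import numpy as np

def dirichlet_mean_cov(alpha):
    """Mean vector and covariance matrix of Dir(alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum()
    mean = alpha / a0
    # Cov[X_i, X_j] = (delta_ij * alpha_i * a0 - alpha_i * alpha_j) / (a0^2 (a0 + 1))
    cov = (np.diag(alpha) * a0 - np.outer(alpha, alpha)) / (a0**2 * (a0 + 1.0))
    return mean, cov

mean, cov = dirichlet_mean_cov([2.0, 3.0, 5.0])
print(mean)                               # [0.2 0.3 0.5]
print(np.allclose(cov.sum(axis=1), 0.0))  # rows sum to zero: singular covariance
</syntaxhighlight>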
More generally, moments of Dirichlet-distributed random variables can be expressed in the following way. For \boldsymbol{t}=(t_1,\ldots,t_K)\in\mathbb{R}^{K}, write \boldsymbol{t}^{\circ\ell}=(t_1^{\ell},\ldots,t_K^{\ell}) for the \ell-th entrywise (Hadamard) power. Then

\operatorname{E}\left[(\boldsymbol{t}\cdot\boldsymbol{X})^{n}\right] = \frac{n!\,\Gamma(\alpha_0)}{\Gamma(\alpha_0+n)}\sum\frac{t_1^{k_1}\cdots t_K^{k_K}}{k_1!\cdots k_K!}\prod_{i=1}^{K}\frac{\Gamma(\alpha_i+k_i)}{\Gamma(\alpha_i)},

where the sum is over non-negative integers k_1,\ldots,k_K with n=k_1+\cdots+k_K. Equivalently, this expectation equals \frac{n!\,\Gamma(\alpha_0)}{\Gamma(\alpha_0+n)}\,Z_n\!\left(\boldsymbol\alpha\cdot\boldsymbol{t}^{\circ 1},\ldots,\boldsymbol\alpha\cdot\boldsymbol{t}^{\circ n}\right), where Z_n denotes the cycle index polynomial of the symmetric group of degree n. The multivariate analogue, for vectors \boldsymbol{t}_1,\ldots,\boldsymbol{t}_q\in\mathbb{R}^{K} and exponents n_1,\ldots,n_q, gives the mixed moments \operatorname{E}\left[(\boldsymbol{t}_1\cdot\boldsymbol{X})^{n_1}\cdots(\boldsymbol{t}_q\cdot\boldsymbol{X})^{n_q}\right] in the same way.
Particular cases include the simple computation[8]

\operatorname{E}\left[\prod_{i=1}^{K} X_i^{\beta_i}\right] = \frac{B\left(\boldsymbol{\alpha}+\boldsymbol{\beta}\right)}{B\left(\boldsymbol{\alpha}\right)} = \frac{\Gamma\left(\sum_{i=1}^{K}\alpha_i\right)}{\Gamma\left(\sum_{i=1}^{K}(\alpha_i+\beta_i)\right)}\times\prod_{i=1}^{K}\frac{\Gamma(\alpha_i+\beta_i)}{\Gamma(\alpha_i)}.
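Evaluated in log space via the gamma function, this moment is numerically stable; a minimal sketch (assuming SciPy's gammaln; the helper name is illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import gammaln

def dirichlet_mixed_moment(alpha, beta):
    """E[prod_i X_i**beta_i] = B(alpha + beta) / B(alpha), computed in log space."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    log_B = lambda a: gammaln(a).sum() - gammaln(a.sum())
    return np.exp(log_B(alpha + beta) - log_B(alpha))

# e.g. E[X_1] for Dir(2, 3, 5) is 2/10
print(dirichlet_mixed_moment([2, 3, 5], [1, 0, 0]))  # 0.2
</syntaxhighlight>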
The mode of the distribution is[9] the vector (x_1, ..., x_K) with

x_i = \frac{\alpha_i-1}{\alpha_0-K}, \qquad \alpha_i>1.
The marginal distributions are beta distributions:[10]

X_i\sim\operatorname{Beta}(\alpha_i,\alpha_0-\alpha_i).
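A small sketch of both facts (assuming NumPy and SciPy; the parameter values are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

alpha = np.array([2.0, 3.0, 5.0])
a0 = alpha.sum()

mode = (alpha - 1.0) / (a0 - len(alpha))            # valid only when every alpha_i > 1
marginal_X1 = stats.beta(alpha[0], a0 - alpha[0])   # X_1 ~ Beta(alpha_1, alpha_0 - alpha_1)

print(mode)                # [1/7, 2/7, 4/7]
print(marginal_X1.mean())  # equals alpha_1 / alpha_0 = 0.2
</syntaxhighlight>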
The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we then can update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.
Formally, this can be expressed as follows. Given a model
\begin{array}{rcccl} \boldsymbol\alpha&=&\left(\alpha_1,\ldots,\alpha_K\right)&=&\text{concentration hyperparameter}\\ p\mid\boldsymbol\alpha&=&\left(p_1,\ldots,p_K\right)&\sim&\operatorname{Dir}(K,\boldsymbol\alpha)\\ X\mid p&=&\left(x_1,\ldots,x_K\right)&\sim&\operatorname{Cat}(K,p) \end{array}
then the following holds:
\begin{array}{rcccl} c&=&\left(c_1,\ldots,c_K\right)&=&\text{number of occurrences of category } i\\ p\mid X,\boldsymbol\alpha&\sim&\operatorname{Dir}(K,c+\boldsymbol\alpha)&=&\operatorname{Dir}\left(K,c_1+\alpha_1,\ldots,c_K+\alpha_K\right) \end{array}
This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of N samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
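A minimal sketch of this pseudocount update (assuming NumPy; the prior values and observed labels are hypothetical):

<syntaxhighlight lang="python">
import numpy as np

# Prior pseudocounts over K = 3 categories (hypothetical values).
alpha = np.array([1.0, 1.0, 1.0])

# Observed category labels (hypothetical data).
observations = [0, 2, 2, 1, 2]
counts = np.bincount(observations, minlength=len(alpha))

# Conjugacy: the posterior is Dirichlet with pseudocounts + observed counts.
posterior_alpha = alpha + counts
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(posterior_alpha, posterior_mean)
</syntaxhighlight>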
In Bayesian mixture models and other hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the categorical variables appearing in the models. See the section on applications below for more information.
In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.
If X is a \operatorname{Dir}(\boldsymbol\alpha) random variable, its differential entropy (in nats) is

h(\boldsymbol X) = \operatorname{E}[-\ln f(\boldsymbol X)] = \ln\operatorname{B}(\boldsymbol\alpha) + (\alpha_0-K)\psi(\alpha_0) - \sum_{j=1}^{K}(\alpha_j-1)\psi(\alpha_j)

where \psi is the digamma function.
The following formula for \operatorname{E}[\ln(X_i)] can be used to derive the differential entropy above. Since the functions \ln(X_i) are the sufficient statistics of the Dirichlet distribution, the exponential family differential identities can be used to obtain an analytic expression for the expectation of \ln(X_i) and its associated covariance matrix:

\operatorname{E}[\ln(X_i)] = \psi(\alpha_i) - \psi(\alpha_0)

and

\operatorname{Cov}[\ln(X_i),\ln(X_j)] = \psi'(\alpha_i)\,\delta_{ij} - \psi'(\alpha_0)

where \psi is the digamma function, \psi' is the trigamma function, and \delta_{ij} is the Kronecker delta.
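Both quantities follow directly from SciPy's digamma and polygamma functions; a minimal sketch with arbitrary parameters:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import digamma, polygamma

alpha = np.array([2.0, 3.0, 5.0])
a0 = alpha.sum()

E_log_X = digamma(alpha) - digamma(a0)
# Cov[ln X_i, ln X_j] = trigamma(alpha_i) * delta_ij - trigamma(alpha_0)
cov_log_X = np.diag(polygamma(1, alpha)) - polygamma(1, a0)

print(E_log_X)
print(cov_log_X)
</syntaxhighlight>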
The spectrum of Rényi information for values other than λ = 1 is given by

F_R(λ) = (1-λ)^{-1}\left(-λ\log B(\boldsymbol\alpha) + \sum_{i=1}^{K}\log\Gamma\left(λ(\alpha_i-1)+1\right) - \log\Gamma\left(λ(\alpha_0-K)+K\right)\right)

and the information entropy is the limit as λ goes to 1.
Another related interesting measure is the entropy of a discrete categorical (one-of-K binary) vector \boldsymbol Z with probability-mass distribution \boldsymbol X, i.e., P(Z_i=1, Z_{j\ne i}=0\mid\boldsymbol X) = X_i. The conditional information entropy of \boldsymbol Z given \boldsymbol X is

S(\boldsymbol X) = H(\boldsymbol Z\mid\boldsymbol X) = \operatorname{E}_{\boldsymbol Z}[-\log P(\boldsymbol Z\mid\boldsymbol X)] = \sum_{i=1}^{K} -X_i\log X_i.

This function of \boldsymbol X is a scalar random variable. If \boldsymbol X has a symmetric Dirichlet distribution with all \alpha_i=\alpha, the expected value of the entropy (in nats) is

\operatorname{E}[S(\boldsymbol X)] = \sum_{i=1}^{K}\operatorname{E}[-X_i\ln X_i] = \psi(K\alpha+1) - \psi(\alpha+1).
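This expectation can be checked by simulation; the sketch below (assuming NumPy and SciPy; the values of K, α and the sample size are arbitrary) compares the closed form against a Monte Carlo estimate:

<syntaxhighlight lang="python">
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
K, a = 5, 2.0

closed_form = digamma(K * a + 1) - digamma(a + 1)

samples = rng.dirichlet(np.full(K, a), size=200_000)
monte_carlo = np.mean(-np.sum(samples * np.log(samples), axis=1))

print(closed_form, monte_carlo)  # the two values should agree to a few decimals
</syntaxhighlight>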
If X=(X_1,\ldots,X_K)\sim\operatorname{Dir}(\alpha_1,\ldots,\alpha_K), then, if the random variables with subscripts i and j are dropped from the vector and replaced by their sum,

X' = (X_1,\ldots,X_i+X_j,\ldots,X_K)\sim\operatorname{Dir}(\alpha_1,\ldots,\alpha_i+\alpha_j,\ldots,\alpha_K).

This aggregation property may be used to derive the marginal distribution of X_i.
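The aggregation property is easy to check empirically; in the sketch below (assuming NumPy; the parameter values are arbitrary), summing two components of Dir(1, 2, 3, 4) samples reproduces the moments of Dir(1, 5, 4):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

samples = rng.dirichlet([1.0, 2.0, 3.0, 4.0], size=100_000)
# Summing the second and third components should behave like a draw from Dir(1, 2 + 3, 4).
aggregated = np.column_stack([samples[:, 0],
                              samples[:, 1] + samples[:, 2],
                              samples[:, 3]])
direct = rng.dirichlet([1.0, 5.0, 4.0], size=100_000)

print(aggregated.mean(axis=0), direct.mean(axis=0))  # means agree
print(aggregated.var(axis=0), direct.var(axis=0))    # variances agree up to Monte Carlo error
</syntaxhighlight>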
See main article: Neutral vector.
If X=(X_1,\ldots,X_K)\sim\operatorname{Dir}(\boldsymbol\alpha), then the vector X is said to be neutral, in the sense that X_K is independent of X^{(-K)}, where

X^{(-K)} = \left(\frac{X_1}{1-X_K},\frac{X_2}{1-X_K},\ldots,\frac{X_{K-1}}{1-X_K}\right),

and similarly for removing any of X_2,\ldots,X_{K-1}.

Combining this with the property of aggregation, it follows that X_j+\cdots+X_K is independent of

\left(\frac{X_1}{X_1+\cdots+X_{j-1}},\frac{X_2}{X_1+\cdots+X_{j-1}},\ldots,\frac{X_{j-1}}{X_1+\cdots+X_{j-1}}\right).

In fact, more is true for the Dirichlet distribution: for 3\le j\le K-1, the pair

\left(X_1+\cdots+X_{j-1},\,X_j+\cdots+X_K\right)

and the two vectors

\left(\frac{X_1}{X_1+\cdots+X_{j-1}},\frac{X_2}{X_1+\cdots+X_{j-1}},\ldots,\frac{X_{j-1}}{X_1+\cdots+X_{j-1}}\right) \quad\text{and}\quad \left(\frac{X_j}{X_j+\cdots+X_K},\frac{X_{j+1}}{X_j+\cdots+X_K},\ldots,\frac{X_K}{X_j+\cdots+X_K}\right)

are mutually independent.
The characteristic function of the Dirichlet distribution is a confluent form of the Lauricella hypergeometric series. It is given by Phillips as[17]
CF\left(s_1,\ldots,s_{K-1}\right) = \operatorname{E}\left(e^{i\left(s_1X_1+\cdots+s_{K-1}X_{K-1}\right)}\right) = \Psi^{\left[K-1\right]}(\alpha_1,\ldots,\alpha_{K-1};\alpha_0;is_1,\ldots,is_{K-1})

where

\Psi^{[m]}(a_1,\ldots,a_m;c;z_1,\ldots,z_m) = \sum\frac{(a_1)_{k_1}\cdots(a_m)_{k_m}\,z_1^{k_1}\cdots z_m^{k_m}}{(c)_{k}\,k_1!\cdots k_m!}.

The sum is over non-negative integers k_1,\ldots,k_m with k=k_1+\cdots+k_m. This confluent Lauricella function also admits the complex path-integral representation

\Psi^{[m]} = \frac{\Gamma(c)}{2\pi i}\int_L e^{t}\,t^{a_1+\cdots+a_m-c}\prod_{j=1}^{m}\left(t-z_j\right)^{-a_j}\,dt

where L denotes any path in the complex plane originating at -\infty, encircling in the positive direction all singularities of the integrand and returning to -\infty.
For K independently distributed Gamma distributions:

Y_1\sim\operatorname{Gamma}(\alpha_1,\theta),\ldots,Y_K\sim\operatorname{Gamma}(\alpha_K,\theta)

we have:[19]

V = \sum_{i=1}^{K} Y_i \sim \operatorname{Gamma}\left(\alpha_0,\theta\right),

X = (X_1,\ldots,X_K) = \left(\frac{Y_1}{V},\ldots,\frac{Y_K}{V}\right) \sim \operatorname{Dir}\left(\alpha_1,\ldots,\alpha_K\right).
Although the X_i are not independent of one another, they can be seen to be generated from a set of K independent gamma random variables.[19] Unfortunately, since the sum V is lost in forming X (in fact it can be shown that V is stochastically independent of X), it is not possible to recover the original gamma random variables from these values alone. Nevertheless, because independent random variables are simpler to work with, this reparametrization can still be useful for proofs about properties of the Dirichlet distribution.
Because the Dirichlet distribution is an exponential family distribution, it has a conjugate prior. The conjugate prior is of the form:[20]

\operatorname{CD}(\boldsymbol\alpha\mid\boldsymbol{v},\eta) \propto \left(\frac{1}{\operatorname{B}(\boldsymbol\alpha)}\right)^{\eta}\exp\left(-\sum_k v_k\alpha_k\right).
Here \boldsymbol{v} is a K-dimensional real vector and \eta is a scalar parameter. The domain of (\boldsymbol{v},\eta) is restricted so that the above unnormalized density can be normalized; the (necessary and sufficient) condition is

\forall k\;\; v_k>0 \;\;\text{and}\;\; \eta>-1 \;\;\text{and}\;\; \left(\eta\leq 0 \;\;\text{or}\;\; \sum_k \exp\!\left(-\frac{v_k}{\eta}\right) < 1\right).
The conjugation property can be expressed as
if [prior: \boldsymbol{\alpha}\sim\operatorname{CD}(\cdot\mid\boldsymbol{v},\eta)] and [observation: \boldsymbol{x}\mid\boldsymbol{\alpha}\sim\operatorname{Dirichlet}(\cdot\mid\boldsymbol{\alpha})] then [posterior: \boldsymbol{\alpha}\mid\boldsymbol{x}\sim\operatorname{CD}(\cdot\mid\boldsymbol{v}-\log\boldsymbol{x},\ \eta+1)].
In the published literature there is no practical algorithm to efficiently generate samples from \operatorname{CD}(\boldsymbol{\alpha}\mid\boldsymbol{v},\eta).
Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)
Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.
Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value α to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample from the Dirichlet distribution is likely to be: with a value much less than 1, the mass will be highly concentrated in a few components and all the rest will have almost no mass, while with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.
One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into K pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that
\alpha_0 = \sum_{i=1}^{K}\alpha_i.

The \alpha_i/\alpha_0 values specify the mean lengths of the cut pieces of string resulting from the distribution, and the variance around this mean varies inversely with \alpha_0.
Consider an urn containing balls of K different colors. Initially, the urn contains α1 balls of color 1, α2 balls of color 2, and so on. Now perform N draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as N approaches infinity, the proportions of different colored balls in the urn will be distributed as Dir(α1,...,αK).[22]
For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]K-valued martingale, hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.
Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
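A short simulation of this urn scheme (assuming NumPy; the initial counts and number of draws are arbitrary) produces one realization of the limiting proportions, i.e. approximately a single draw from Dir(α):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
counts = np.array([1.0, 2.0, 3.0])   # initial numbers of balls of each color (the alpha_i)

# Each draw returns the ball plus one extra ball of the same color.
for _ in range(20_000):
    color = rng.choice(len(counts), p=counts / counts.sum())
    counts[color] += 1.0

# One realization of the limiting proportions, approximately a single draw from Dir(1, 2, 3).
print(counts / counts.sum())
</syntaxhighlight>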
With a source of Gamma-distributed random variates, one can easily sample a random vector x=(x_1,\ldots,x_K) from the K-dimensional Dirichlet distribution with parameters (\alpha_1,\ldots,\alpha_K). First, draw K independent random samples y_1,\ldots,y_K from Gamma distributions, each with density

\operatorname{Gamma}(\alpha_i,1) = \frac{y_i^{\alpha_i-1}\,e^{-y_i}}{\Gamma(\alpha_i)},

and then set

x_i = \frac{y_i}{\sum_{j=1}^{K} y_j}.
The joint distribution of the independently sampled gamma variates, \{y_i\}, is given by the product:

e^{-\sum_i y_i}\prod_{i=1}^{K}\frac{y_i^{\alpha_i-1}}{\Gamma(\alpha_i)}

Next, one uses a change of variables, parametrising \{y_i\} in terms of y_1,y_2,\ldots,y_{K-1} and \sum_{i=1}^{K}y_i, and performs a change of variables from y\to x such that

\bar{x} = \sum_{i=1}^{K}y_i,\quad x_1=\frac{y_1}{\bar{x}},\quad x_2=\frac{y_2}{\bar{x}},\quad\ldots,\quad x_{K-1}=\frac{y_{K-1}}{\bar{x}}

with 0\leq x_1,x_2,\ldots,x_{K-1}\leq 1 and 0\leq\sum_{i=1}^{K-1}x_i\leq 1. One must then use the change of variables formula, P(x)=P(y(x))\left|\frac{\partial y}{\partial x}\right|, in which \left|\frac{\partial y}{\partial x}\right| is the Jacobian determinant of the transformation.
Writing y explicitly as a function of x, one obtains y_1=\bar{x}x_1,\; y_2=\bar{x}x_2,\;\ldots,\; y_{K-1}=\bar{x}x_{K-1},\; y_K=\bar{x}\left(1-\sum_{i=1}^{K-1}x_i\right). The Jacobian is then

\begin{vmatrix}\bar{x}&0&\ldots&x_1\\ 0&\bar{x}&\ldots&x_2\\ \vdots&\vdots&\ddots&\vdots\\ -\bar{x}&-\bar{x}&\ldots&1-\sum_{i=1}^{K-1}x_i\end{vmatrix}
The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first K-1 rows to the bottom row to obtain
\begin{vmatrix}\bar{x}&0&\ldots&x_1\\ 0&\bar{x}&\ldots&x_2\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&1\end{vmatrix}
which can be expanded about the bottom row to obtain the determinant value
\bar{x}^{K-1}.
Substituting x into the joint pdf and including the Jacobian determinant, one obtains:

\begin{align}
&\frac{(\bar{x}x_1)^{\alpha_1-1}\cdots(\bar{x}x_{K-1})^{\alpha_{K-1}-1}\left[\bar{x}\left(1-\sum_{i=1}^{K-1}x_i\right)\right]^{\alpha_K-1}}{\prod_{i=1}^{K}\Gamma(\alpha_i)}\;\bar{x}^{K-1}e^{-\bar{x}}\\
={}&\frac{x_1^{\alpha_1-1}\cdots x_{K-1}^{\alpha_{K-1}-1}\left(1-\sum_{i=1}^{K-1}x_i\right)^{\alpha_K-1}}{\prod_{i=1}^{K}\Gamma(\alpha_i)}\;\bar{x}^{\bar\alpha-1}e^{-\bar{x}}
\end{align}

where \bar\alpha=\sum_{i=1}^{K}\alpha_i. The dependence on \bar{x} factors out as (up to normalization) a \operatorname{Gamma}(\bar\alpha,1) density, so \bar{x} is independent of x_1,\ldots,x_{K-1}; integrating \bar{x} out gives

x_1,x_2,\ldots,x_{K-1}\sim\frac{x_1^{\alpha_1-1}\cdots x_{K-1}^{\alpha_{K-1}-1}\left(1-\sum_{i=1}^{K-1}x_i\right)^{\alpha_K-1}}{B(\boldsymbol{\alpha})},

which is equivalent to

\frac{\prod_{i=1}^{K}x_i^{\alpha_i-1}}{B(\boldsymbol{\alpha})}

on the support defined by \sum_{i=1}^{K}x_i=1.
Below is example Python code to draw the sample:
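The sketch below uses the standard library's random.gammavariate with unit scale (the function name sample_dirichlet is illustrative):

<syntaxhighlight lang="python">
import random

def sample_dirichlet(alpha):
    """Draw one sample from Dir(alpha) by normalizing independent Gamma(alpha_i, 1) variates."""
    y = [random.gammavariate(a, 1.0) for a in alpha]  # shape a, scale 1
    total = sum(y)
    return [v / total for v in y]

print(sample_dirichlet([1.0, 2.0, 3.0]))
</syntaxhighlight>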
This formulation is correct regardless of how the Gamma distributions are parameterized (shape/scale vs. shape/rate) because they are equivalent when scale and rate equal 1.0.
A less efficient algorithm[23] relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate x_1 from

\operatorname{Beta}\left(\alpha_1,\sum_{i=2}^{K}\alpha_i\right).

Then simulate x_2,\ldots,x_{K-1} in order, as follows. For j=2,\ldots,K-1, simulate \phi_j from

\operatorname{Beta}\left(\alpha_j,\sum_{i=j+1}^{K}\alpha_i\right),

and let

x_j = \left(1-\sum_{i=1}^{j-1}x_i\right)\phi_j.

Finally, set

x_K = 1-\sum_{i=1}^{K-1}x_i.
This iterative procedure corresponds closely to the "string cutting" intuition described above.
Below is example Python code to draw the sample:
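The sketch below implements this procedure with the standard library's random.betavariate (the function name is illustrative):

<syntaxhighlight lang="python">
import random

def sample_dirichlet_beta(alpha):
    """Draw one sample from Dir(alpha) using the marginal/conditional beta method."""
    K = len(alpha)
    x = [random.betavariate(alpha[0], sum(alpha[1:]))]
    for j in range(1, K - 1):
        phi = random.betavariate(alpha[j], sum(alpha[j + 1:]))
        x.append((1.0 - sum(x)) * phi)
    x.append(1.0 - sum(x))
    return x

print(sample_dirichlet_beta([1.0, 2.0, 3.0]))
</syntaxhighlight>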
When α = 1, a sample from the distribution can be found by randomly drawing a set of K − 1 values independently and uniformly from the interval (0, 1), adding the values 0 and 1 to the set to make it have K + 1 values, sorting the set, and computing the difference between each pair of order-adjacent values, to give x_1, ..., x_K.
When α = 1/2, a sample from the distribution can be found by randomly drawing K values independently from the standard normal distribution, squaring these values, and normalizing them by dividing by their sum, to give x_1, ..., x_K.
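Both special cases can be sketched with the standard library alone (the value of K is arbitrary):

<syntaxhighlight lang="python">
import random

K = 5

# alpha = 1: sort K - 1 uniform draws together with 0 and 1, then take successive differences.
u = sorted([0.0, 1.0] + [random.random() for _ in range(K - 1)])
flat_sample = [b - a for a, b in zip(u, u[1:])]

# alpha = 1/2: square K standard normal draws and normalize by their sum.
z2 = [random.gauss(0.0, 1.0) ** 2 for _ in range(K)]
half_sample = [v / sum(z2) for v in z2]

print(flat_sample, half_sample)
</syntaxhighlight>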
A point (x_1, ..., x_K) can be drawn uniformly at random from the (K − 1)-dimensional hypersphere (which is the surface of a K-dimensional hyperball) via a similar procedure: randomly draw K values independently from the standard normal distribution and normalize these coordinate values by dividing each by the square root of the sum of their squares.