In probability theory, the multinomial distribution is a generalization of the binomial distribution. For example, it models the probability of counts for each side of a k-sided die rolled n times. For n independent trials each of which leads to a success for exactly one of k categories, with each category having a given fixed success probability, the multinomial distribution gives the probability of any particular combination of numbers of successes for the various categories.
When k is 2 and n is 1, the multinomial distribution is the Bernoulli distribution. When k is 2 and n is bigger than 1, it is the binomial distribution. When k is bigger than 2 and n is 1, it is the categorical distribution. The term "multinoulli" is sometimes used for the categorical distribution to emphasize this four-way relationship: n determines the suffix ("-oulli" for n = 1, "-omial" for n > 1), and k the prefix ("bi-"/"Bern-" for k = 2, "multi-" for k > 2).
The Bernoulli distribution models the outcome of a single Bernoulli trial. In other words, it models whether flipping a (possibly biased) coin one time will result in either a success (obtaining a head) or failure (obtaining a tail). The binomial distribution generalizes this to the number of heads from performing n independent flips (Bernoulli trials) of the same coin. The multinomial distribution models the outcome of n experiments, where the outcome of each trial has a categorical distribution, such as rolling a k-sided die n times.
Let k be a fixed finite number. Mathematically, we have k possible mutually exclusive outcomes, with corresponding probabilities p1, ..., pk, and n independent trials. Since the k outcomes are mutually exclusive and one must occur we have pi ≥ 0 for i = 1, ..., k and
$$\sum_{i=1}^k p_i = 1.$$
Suppose one does an experiment of extracting n balls of k different colors from a bag, replacing the extracted balls after each draw. Balls of the same color are equivalent. Denote the variable which is the number of extracted balls of color i (i = 1, ..., k) as Xi, and denote as pi the probability that a given extraction will be in color i. The probability mass function of this multinomial distribution is:
$$\begin{align} f(x_1,\ldots,x_k;n,p_1,\ldots,p_k) &= \Pr(X_1 = x_1 \text{ and } \dots \text{ and } X_k = x_k)\\ &= \begin{cases} \dfrac{n!}{x_1!\cdots x_k!}\,p_1^{x_1}\times\cdots\times p_k^{x_k}, & \text{when } \sum_{i=1}^k x_i = n,\\ 0, & \text{otherwise,}\end{cases}\end{align}$$
for non-negative integers x1, ..., xk.
The probability mass function can be expressed using the gamma function as:
$$f(x_1,\ldots,x_k; p_1,\ldots,p_k) = \frac{\Gamma\left(\sum_i x_i + 1\right)}{\prod_i \Gamma(x_i + 1)}\prod_{i=1}^k p_i^{x_i}.$$
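As a quick illustration, the following is a minimal Python sketch of the probability mass function using the gamma-function form above via math.lgamma; the function name multinomial_pmf is our own choice for illustration.

```python
# Minimal sketch: multinomial PMF via the gamma-function form above.
import math

def multinomial_pmf(x, p):
    """PMF of the multinomial distribution for counts x and probabilities p."""
    if any(xi < 0 for xi in x) or not math.isclose(sum(p), 1.0):
        raise ValueError("counts must be non-negative and p must sum to 1")
    # log Gamma(sum_i x_i + 1) - sum_i log Gamma(x_i + 1) + sum_i x_i log p_i
    log_coef = math.lgamma(sum(x) + 1) - sum(math.lgamma(xi + 1) for xi in x)
    log_prob = sum(xi * math.log(pi) for xi, pi in zip(x, p) if xi > 0)
    return math.exp(log_coef + log_prob)

print(multinomial_pmf([1, 2, 3], [0.2, 0.3, 0.5]))  # ≈ 0.135
```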
This form shows its resemblance to the Dirichlet distribution, which is its conjugate prior.
Suppose that in a three-way election for a large country, candidate A received 20% of the votes, candidate B received 30% of the votes, and candidate C received 50% of the votes. If six voters are selected randomly, what is the probability that there will be exactly one supporter for candidate A, two supporters for candidate B and three supporters for candidate C in the sample?
Note: Since we’re assuming that the voting population is large, it is reasonable and permissible to think of the probabilities as unchanging once a voter is selected for the sample. Technically speaking this is sampling without replacement, so the correct distribution is the multivariate hypergeometric distribution, but the distributions converge as the population grows large in comparison to a fixed sample size.[1]
$$\Pr(A=1,B=2,C=3) = \frac{6!}{1!\,2!\,3!}\,(0.2^1)(0.3^2)(0.5^3) = 0.135.$$
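For readers who prefer to check this numerically, here is a short sketch using SciPy (assumed available; scipy.stats.multivariate_hypergeom requires SciPy 1.5 or later). It reproduces the value and also illustrates the note above that sampling without replacement from a large population gives nearly the same answer; the population of one million voters is an assumed figure for illustration only.

```python
# Sketch of the election example with SciPy (assumed available).
from scipy.stats import multinomial, multivariate_hypergeom

p = [0.2, 0.3, 0.5]                          # vote shares for A, B, C
print(multinomial.pmf([1, 2, 3], n=6, p=p))  # ≈ 0.135

# Sampling voters is really without replacement.  With a large (assumed)
# population of 1,000,000 voters, the multivariate hypergeometric
# probability is nearly identical to the multinomial one.
population = [200_000, 300_000, 500_000]     # supporters of A, B, C
print(multivariate_hypergeom.pmf([1, 2, 3], m=population, n=6))
```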
The multinomial distribution is normalized according to:
$$\sum_{x_1+\cdots+x_k=n} f(x_1,\ldots,x_k;n,p_1,\ldots,p_k)=1,$$
where the sum is over all combinations of non-negative integer counts $x_1,\ldots,x_k$ such that $\sum_{j=1}^k x_j = n$.
The expected number of times the outcome i was observed over n trials is
$$\operatorname{E}(X_i) = np_i.$$
The covariance matrix is as follows. Each diagonal entry is the variance of a binomially distributed random variable, and is therefore
$$\operatorname{Var}(X_i) = np_i(1-p_i).$$
The off-diagonal entries are the covariances:
$$\operatorname{Cov}(X_i,X_j) = -np_ip_j$$
for i, j distinct.
All covariances are negative because for fixed n, an increase in one component of a multinomial vector requires a decrease in another component.
When these expressions are combined into a matrix with $(i,j)$ element $\operatorname{cov}(X_i,X_j)$, the result is a $k\times k$ positive-semidefinite covariance matrix of rank $k-1$ (the rank deficiency reflects the constraint $\sum_i X_i = n$). The entries of the corresponding correlation matrix are
$$\rho(X_i,X_i)=1,$$
$$\rho(X_i,X_j)=\frac{\operatorname{Cov}(X_i,X_j)}{\sqrt{\operatorname{Var}(X_i)\operatorname{Var}(X_j)}}=\frac{-p_ip_j}{\sqrt{p_i(1-p_i)\,p_j(1-p_j)}}.$$
Note that the number of trials n drops out of this expression.
Each of the k components separately has a binomial distribution with parameters n and pi, for the appropriate value of the subscript i.
The support of the multinomial distribution is the set
$$\{(n_1,\ldots,n_k)\in\mathbb{N}^k \mid n_1+\cdots+n_k=n\}.$$
Its number of elements is
$$\binom{n+k-1}{k-1}.$$
In matrix notation,
$$\operatorname{E}(X)=np,$$
and
$$\operatorname{Var}(X)=n\{\operatorname{diag}(p)-pp^{\mathrm T}\},$$
with $p^{\mathrm T}$ the row vector transpose of the column vector $p$.
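The closed forms above are easy to check numerically; here is a minimal sketch with NumPy (assumed available) that compares them against a simulated sample.

```python
# Sketch: mean vector and covariance matrix of a multinomial, checked by
# simulation with NumPy.
import numpy as np

n, p = 100, np.array([0.2, 0.3, 0.5])
mean = n * p                                   # E(X) = n p
cov = n * (np.diag(p) - np.outer(p, p))        # Var(X) = n (diag(p) - p p^T)

rng = np.random.default_rng(0)
samples = rng.multinomial(n, p, size=200_000)
print(mean, samples.mean(axis=0))              # sample mean close to [20, 30, 50]
print(cov)
print(np.cov(samples, rowvar=False))           # sample covariance close to cov
```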
Just like one can interpret the binomial distribution as (normalized) one-dimensional (1D) slices of Pascal's triangle, so too can one interpret the multinomial distribution as 2D (triangular) slices of Pascal's pyramid, or 3D/4D/+ (pyramid-shaped) slices of higher-dimensional analogs of Pascal's triangle. This reveals an interpretation of the range of the distribution: discretized equilateral "pyramids" in arbitrary dimension—i.e. a simplex with a grid.
Similarly, just like one can interpret the binomial distribution as the (normalized) polynomial coefficients of $(p+q)^n$ when expanded, one can interpret the multinomial distribution as the coefficients of $(p_1+p_2+p_3+\cdots+p_k)^n$ when expanded.
See also: Sanov's theorem.
By Stirling's formula, in the limit of $n, x_1,\ldots,x_k\to\infty$,
$$\ln\binom{n}{x_1,\cdots,x_k}p_1^{x_1}\cdots p_k^{x_k} = -nD_{KL}(\hat p\|p) - \frac{k-1}{2}\ln(2\pi n) - \frac{1}{2}\sum_i\ln\hat p_i + o(1),$$
where $\hat p_i = x_i/n$ is the empirical frequency of category $i$, $\hat p = (\hat p_1,\ldots,\hat p_k)$ is the empirical distribution, and $D_{KL}$ is the Kullback–Leibler divergence.
This formula can be interpreted as follows. Consider $\Delta_k$, the space of all possible distributions over the categories $\{1,2,\ldots,k\}$; it is a simplex. After $n$ independent samples from the categorical distribution $p$ (which is how the multinomial distribution is constructed), we obtain an empirical distribution $\hat p$.
By the asymptotic formula, the probability that the empirical distribution $\hat p$ deviates from the actual distribution $p$ decays exponentially, at rate $nD_{KL}(\hat p\|p)$: the more experiments, and the more $\hat p$ differs from $p$, the less likely it is to observe such an empirical distribution.
If $A$ is a closed subset of $\Delta_k$, then by dividing $A$ into pieces and reasoning about the growth rate of $\Pr(\hat p\in A_\epsilon)$ on each piece $A_\epsilon$, one obtains Sanov's theorem, which states that
$$\lim_{n\to\infty}\frac{1}{n}\ln\Pr(\hat p\in A) = -\inf_{q\in A}D_{KL}(q\|p).$$
Due to the exponential decay, at large $n$ almost all of the probability mass is concentrated in a small neighborhood of $p$. In this small neighborhood, we can take the first non-zero term of the Taylor expansion of $D_{KL}$,
$$D_{KL}(\hat p\|p)\approx\frac{1}{2}\sum_{i=1}^k\frac{(\hat p_i-p_i)^2}{p_i},$$
so the dominant term of the log-probability is $-\frac{n}{2}\sum_i\frac{(\hat p_i-p_i)^2}{p_i}=-\frac{1}{2}\sum_i\frac{(x_i-np_i)^2}{np_i}$. This resembles a Gaussian density, which suggests the following theorem.
Theorem. In the $n\to\infty$ limit,
$$n\sum_{i=1}^k\frac{(\hat p_i-p_i)^2}{p_i} = \sum_{i=1}^k\frac{(x_i-np_i)^2}{np_i}$$
converges in distribution to the chi-squared distribution $\chi^2(k-1)$.
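A small Monte Carlo sketch (NumPy and SciPy assumed available) illustrating the theorem: the empirical quantiles of the Pearson statistic closely track the $\chi^2(k-1)$ quantiles.

```python
# Monte Carlo sketch: sum_i (x_i - n p_i)^2 / (n p_i) is approximately
# chi-squared with k - 1 degrees of freedom for large n.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
n, p = 500, np.array([0.2, 0.3, 0.5])
k = len(p)

counts = rng.multinomial(n, p, size=100_000)          # x ~ Multinomial(n, p)
stat = ((counts - n * p) ** 2 / (n * p)).sum(axis=1)  # Pearson statistic

# Compare empirical quantiles with chi-squared(k - 1) quantiles.
for q in (0.5, 0.9, 0.99):
    print(q, np.quantile(stat, q), chi2.ppf(q, df=k - 1))
```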
The space of all distributions over the categories $\{1,2,\ldots,k\}$ is a simplex,
$$\Delta_k=\left\{(y_1,\ldots,y_k)\colon y_1,\ldots,y_k\geq 0,\ \sum_i y_i=1\right\},$$
and the set of all possible empirical distributions after $n$ experiments is a subset of the simplex,
$$\Delta_{k,n}=\left\{(x_1/n,\ldots,x_k/n)\colon x_1,\ldots,x_k\in\mathbb{N},\ \sum_i x_i=n\right\},$$
that is, the intersection of $\Delta_k$ with the lattice $(\mathbb{Z}^k)/n$.
As $n$ increases, most of the probability mass of $\hat p$ concentrates within a distance of order $1/\sqrt{n}$ of $p$, while the spacing of the lattice points of $\Delta_{k,n}$ is $1/n$. Thus at large $n$ the probability mass on $\Delta_{k,n}$ is well approximated by a continuous density on $\Delta_k$ of Gaussian form,
$$\rho(\hat p)=Ce^{-\frac{n}{2}\sum_i\frac{(\hat p_i-p_i)^2}{p_i}},$$
where $C$ is a normalization constant.
Finally, since the simplex $\Delta_k$ is not all of $\mathbb{R}^k$ but only a subset of a $(k-1)$-dimensional affine subspace, the quadratic form has $k-1$ effective degrees of freedom, giving $\chi^2(k-1)$ rather than $\chi^2(k)$.
The above concentration phenomenon can be easily generalized to the case where we condition upon linear constraints. This is the theoretical justification for Pearson's chi-squared test.
Theorem. Given frequencies $x_i\in\mathbb{N}$ observed in a dataset with $n$ points, we impose $\ell+1$ independent linear constraints on the empirical distribution $\hat p_i = x_i/n$ (the first constraint being simply the requirement that the $\hat p_i$ sum to one). Let $q$ be the $I$-projection of the prior distribution $p$ onto the sub-region of the simplex allowed by the linear constraints. In the $n\to\infty$ limit, sampled counts $n\hat p_i$ from the multinomial distribution, conditional on the linear constraints, are governed by
$$2nD_{KL}(\hat p\|q) \approx n\sum_i\frac{(\hat p_i-q_i)^2}{q_i},$$
which converges in distribution to the chi-squared distribution $\chi^2(k-1-\ell)$.
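The following Monte Carlo sketch illustrates the degree-of-freedom reduction in the familiar setting of Pearson's test with one fitted parameter; the Hardy–Weinberg-style trinomial family and the maximum-likelihood estimator used here are our own choices for illustration, not constructions taken from the text.

```python
# Hedged sketch: fit one parameter (l = 1) inside a trinomial family (k = 3)
# and check that Pearson's statistic behaves like chi-squared with
# k - 1 - l = 1 degree of freedom.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, theta_true = 2000, 0.3
p_true = np.array([theta_true**2, 2*theta_true*(1-theta_true), (1-theta_true)**2])

stats = []
for x in rng.multinomial(n, p_true, size=20_000):
    theta_hat = (2*x[0] + x[1]) / (2*n)            # MLE of theta in this family
    q = np.array([theta_hat**2, 2*theta_hat*(1-theta_hat), (1-theta_hat)**2])
    stats.append(((x - n*q)**2 / (n*q)).sum())     # Pearson statistic at fitted q

for quant in (0.5, 0.9, 0.99):
    print(quant, np.quantile(stats, quant), chi2.ppf(quant, df=1))
```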
An analogous proof applies in this Diophantine problem of coupled linear equations in the count variables $n\hat p_i$, but this time $\Delta_{k,n}$ is the intersection of the lattice $(\mathbb{Z}^k)/n$ with $\Delta_k$ and with $\ell$ additional hyperplanes, all linearly independent, so the probability density $\rho(\hat p)$ is concentrated on a $(k-\ell-1)$-dimensional subset. Expanding $D_{KL}(\hat p\|p)$ around its constrained minimizer $q$, the $I$-projection of $p$ onto the constraint set, yields the quadratic statistic above in the counts $n\hat p_i$.
Notice that by definition, every one of $\hat p_1,\hat p_2,\ldots,\hat p_k$ must be a rational number, whereas $p_1,p_2,\ldots,p_k$ may be any real numbers in $[0,1]$; the statement is asymptotic, requiring $n\to\infty$ and the $\hat p_i$ to remain in the interior of $[0,1]$.
Away from empirically observed constraint values $b_1,\ldots,b_\ell$, an analogous statement holds when the constraints are only required to hold approximately:
Theorem. Given functions $f_1,\ldots,f_\ell$ such that, at $p$, the vectors $(1,1,\ldots,1),\nabla f_1(p),\ldots,\nabla f_\ell(p)$ are linearly independent, and given sequences $\epsilon_1(n),\ldots,\epsilon_\ell(n)$ such that $\frac{1}{n}\ll\epsilon_i(n)\ll\frac{1}{\sqrt{n}}$ for every $i\in\{1,\ldots,\ell\}$, then conditional on
$$f_1(\hat p)\in[f_1(p)-\epsilon_1(n),\,f_1(p)+\epsilon_1(n)],\ \ldots,\ f_\ell(\hat p)\in[f_\ell(p)-\epsilon_\ell(n),\,f_\ell(p)+\epsilon_\ell(n)],$$
the quantity
$$n\sum_i\frac{(\hat p_i-p_i)^2}{p_i}=\sum_i\frac{(x_i-np_i)^2}{np_i}$$
converges in distribution to $\chi^2(k-1-\ell)$ as $n\to\infty$.
In the case that all $\hat p_i$ are equal, the theorem reduces to the concentration of entropies around the maximum entropy.
In some fields such as natural language processing, categorical and multinomial distributions are synonymous and it is common to speak of a multinomial distribution when a categorical distribution is actually meant. This stems from the fact that it is sometimes convenient to express the outcome of a categorical distribution as a "1-of-k" vector (a vector with one element containing a 1 and all other elements containing a 0) rather than as an integer in the range
$1,\ldots,k$; in this form, a categorical distribution is equivalent to a multinomial distribution over a single trial.
The goal of equivalence testing is to establish the agreement between a theoretical multinomial distribution and observed counting frequencies. The theoretical distribution may be a fully specified multinomial distribution or a parametric family of multinomial distributions, such as the one-parameter trinomial family with category probabilities $(\theta^2,\,2\theta(1-\theta),\,(1-\theta)^2)$ for $\theta\in[0,1]$.
Let $q$ denote a theoretical multinomial distribution and let $p$ be the true underlying distribution. The distributions $p$ and $q$ are considered equivalent if $d(p,q)<\varepsilon$ for a distance $d$ and a tolerance parameter $\varepsilon>0$. The equivalence test problem is $H_0=\{d(p,q)\geq\varepsilon\}$ versus $H_1=\{d(p,q)<\varepsilon\}$. The true underlying distribution $p$ is unknown; instead, the counting frequencies $p_n$ are observed, where $n$ is the sample size. An equivalence test uses $p_n$ to reject $H_0$. If $H_0$ can be rejected, then the equivalence between $p$ and $q$ is shown at a given significance level.
The distance between the true underlying distribution $p$ and a family of multinomial distributions $\mathcal{M}$ is defined by $d(p,\mathcal{M})=\min_{h\in\mathcal{M}}d(p,h)$. Then the equivalence test problem becomes $H_0=\{d(p,\mathcal{M})\geq\varepsilon\}$ versus $H_1=\{d(p,\mathcal{M})<\varepsilon\}$, where the distance $d(p,\mathcal{M})$ typically has to be computed by numerical optimization.
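A minimal sketch of such a numerical optimization follows, under the assumptions that the distance $d$ is total variation and that $\mathcal{M}$ is the one-parameter trinomial family mentioned above; both choices, as well as the observed frequencies, are ours and serve only as an illustration.

```python
# Hedged sketch: computing d(p_n, M) by numerical optimization.
import numpy as np
from scipy.optimize import minimize_scalar

def tv_distance(a, b):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(np.asarray(a) - np.asarray(b)).sum()

def family(theta):
    """One-parameter trinomial family (theta^2, 2 theta (1-theta), (1-theta)^2)."""
    return np.array([theta**2, 2*theta*(1-theta), (1-theta)**2])

p_n = np.array([0.22, 0.38, 0.40])   # observed counting frequencies (made-up numbers)
res = minimize_scalar(lambda t: tv_distance(p_n, family(t)),
                      bounds=(0.0, 1.0), method="bounded")
print(res.x, res.fun)                # minimizing theta and the distance d(p_n, M)
```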
In the setting of a multinomial distribution, constructing confidence intervals for the difference between the proportions of observations from two events, $p_i-p_j$, requires the incorporation of the negative covariance between the sample estimators $\hat p_i=\frac{X_i}{n}$ and $\hat p_j=\frac{X_j}{n}$.
Some of the literature on the subject focuses on the use-case of matched-pairs binary data, which requires careful attention when translating the formulas to the general case of $p_i-p_j$ for any multinomial distribution.
Wald's standard error (SE) of the difference of proportion can be estimated using:[9] [10]
$$\widehat{\operatorname{SE}(\hat p_i-\hat p_j)}=\sqrt{\frac{(\hat p_i+\hat p_j)-(\hat p_i-\hat p_j)^2}{n}}$$
For a $100(1-\alpha)\%$ confidence interval, the lower and upper bounds are
$$(\hat p_i-\hat p_j)\pm z_{\alpha/2}\cdot\widehat{\operatorname{SE}(\hat p_i-\hat p_j)},$$
where $z_{\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution.
As the sample size $n$ increases, the sample proportions approximately follow a multivariate normal distribution (by the central limit theorem), and the estimated standard error is consistent, so by Slutsky's theorem the pivotal quantity
$$\frac{(\hat p_i-\hat p_j)-(p_i-p_j)}{\widehat{\operatorname{SE}(\hat p_i-\hat p_j)}}$$
approximately follows a standard normal distribution, from which the above confidence interval follows.
The SE can be constructed using the calculus of the variance of the difference of two random variables:
$$\begin{align} \widehat{\operatorname{SE}(\hat p_i-\hat p_j)}&=\sqrt{\frac{\hat p_i(1-\hat p_i)}{n}+\frac{\hat p_j(1-\hat p_j)}{n}-2\left(-\frac{\hat p_i\hat p_j}{n}\right)}\\ &=\sqrt{\frac{1}{n}\left(\hat p_i+\hat p_j-\hat p_i^2-\hat p_j^2+2\hat p_i\hat p_j\right)}\\ &=\sqrt{\frac{(\hat p_i+\hat p_j)-(\hat p_i-\hat p_j)^2}{n}} \end{align}$$
A modification which includes a continuity correction adds $\frac{1}{n}$ to the margin of error as follows:
$$(\hat p_i-\hat p_j)\pm\left(z_{\alpha/2}\cdot\widehat{\operatorname{SE}(\hat p_i-\hat p_j)}+\frac{1}{n}\right)$$
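A minimal plain-Python sketch of the Wald interval, with the continuity correction as an option; the helper name wald_ci_diff and the example counts are ours, for illustration only.

```python
# Sketch: Wald confidence interval for p_i - p_j from multinomial counts,
# with an optional 1/n continuity correction.
import math

def wald_ci_diff(x_i, x_j, n, z=1.959964, continuity=False):
    """Approximate 100(1-alpha)% CI for p_i - p_j (default z corresponds to alpha = 0.05)."""
    p_i, p_j = x_i / n, x_j / n
    se = math.sqrt(((p_i + p_j) - (p_i - p_j) ** 2) / n)
    margin = z * se + (1 / n if continuity else 0.0)
    return (p_i - p_j) - margin, (p_i - p_j) + margin

print(wald_ci_diff(x_i=25, x_j=40, n=100))                   # plain Wald
print(wald_ci_diff(x_i=25, x_j=40, n=100, continuity=True))  # with correction
```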
Another alternative is to rely on a Bayesian estimator using the Jeffreys prior, which leads to using a Dirichlet distribution with all parameters equal to 0.5 as the prior. The resulting estimates follow the calculations above, but after adding 1/2 to each of the k observed counts, which increases the sample size by $\frac{k}{2}$ overall.
This leads to the following SE:
$$\widehat{\operatorname{SE}(\hat p_i-\hat p_j)}=\sqrt{\frac{\left(\hat p_i+\hat p_j+\frac{1}{n}\right)\frac{n}{n+\frac{k}{2}}-\left(\hat p_i-\hat p_j\right)^2\left(\frac{n}{n+\frac{k}{2}}\right)^2}{n+\frac{k}{2}}}$$
This can then be plugged into the original Wald formula, now centered at the posterior-adjusted difference:
$$(\hat p_i-\hat p_j)\frac{n}{n+\frac{k}{2}}\pm z_{\alpha/2}\cdot\widehat{\operatorname{SE}(\hat p_i-\hat p_j)}$$
For the case of matched-pairs binary data, a common task is to build the confidence interval of the difference of the proportion of the matched events. For example, we might have a test for some disease, and we may want to check the results of it for some population at two points in time (1 and 2), to check if there was a change in the proportion of the positives for the disease during that time.
Such scenarios can be represented using a two-by-two contingency table with the number of elements that had each combination of events. We can use lower-case f for the sampling frequencies, $f_{11}, f_{10}, f_{01}, f_{00}$, and upper-case F for the population frequencies, $F_{11}, F_{10}, F_{01}, F_{00}$:
| | Test 2 positive | Test 2 negative | Row total |
| Test 1 positive | f11 | f10 | f1* = f11 + f10 |
| Test 1 negative | f01 | f00 | f0* = f01 + f00 |
| Column total | f*1 = f11 + f01 | f*0 = f10 + f00 | n |
In this case, checking the difference in marginal proportions means we are interested in using the following definitions:
$$p_{1*}=\frac{F_{1*}}{N}=\frac{F_{11}+F_{10}}{N}$$
$$p_{*1}=\frac{F_{*1}}{N}=\frac{F_{11}+F_{01}}{N}$$
$$p_{*1}-p_{1*}=\frac{F_{11}+F_{01}}{N}-\frac{F_{11}+F_{10}}{N}=\frac{F_{01}}{N}-\frac{F_{10}}{N}=p_{01}-p_{10}$$
Hence, a confidence interval for the difference of the marginal positive proportions ($p_{*1}-p_{1*}$) is the same as a confidence interval for the difference of the proportions on the secondary diagonal of the two-by-two contingency table ($p_{01}-p_{10}$).
Calculating a p-value for such a difference is known as McNemar's test. A confidence interval around it can be constructed using the methods described above for the confidence interval of the difference of two proportions.
The Wald confidence intervals from the previous section can be applied to this setting, and appear in the literature using alternative notations. Specifically, the SE often presented is based on the contingency table frequencies instead of the sample proportions. For example, the Wald confidence intervals provided above can be written as:
$$\widehat{\operatorname{SE}(p_{*1}-p_{1*})}=\widehat{\operatorname{SE}(p_{01}-p_{10})}=\frac{\sqrt{n(f_{10}+f_{01})-(f_{10}-f_{01})^2}}{n^{3/2}}$$
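A minimal plain-Python sketch of this matched-pairs Wald interval; the helper name and the example counts are ours, for illustration only.

```python
# Sketch: Wald CI for the difference of marginal proportions
# p_{*1} - p_{1*} = p_{01} - p_{10} from a 2x2 table of matched pairs.
import math

def matched_pairs_wald_ci(f11, f10, f01, f00, z=1.959964):
    """Approximate 95% CI (by default) for p_{01} - p_{10}."""
    n = f11 + f10 + f01 + f00
    diff = (f01 - f10) / n
    se = math.sqrt(n * (f10 + f01) - (f10 - f01) ** 2) / n ** 1.5
    return diff - z * se, diff + z * se

# Example: 520 people tested at two time points (made-up counts).
print(matched_pairs_wald_ci(f11=101, f10=59, f01=121, f00=239))
```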
Further research in the literature has identified several shortcomings in both the Wald and the Wald with continuity correction methods, and other methods have been proposed for practical application.
One such modification is Agresti and Min’s Wald+2 (similar to some of their other works[13]), in which an extra $\frac{1}{2}$ is added to each cell frequency.
This leads to the following modified SE for the case of matched pairs data:
$$\widehat{\operatorname{SE}(p_{*1}-p_{1*})}=\frac{\sqrt{(n+2)(f_{10}+f_{01}+1)-(f_{10}-f_{01})^2}}{(n+2)^{3/2}}$$
This can then be plugged into the original Wald formula as follows:
$$(p_{*1}-p_{1*})\frac{n}{n+2}\pm z_{\alpha/2}\cdot\widehat{\operatorname{SE}(p_{*1}-p_{1*})}_{\text{Wald+2}}$$
Other modifications include Bonett and Price’s Adjusted Wald, and Newcombe’s Score.
First, reorder the parameters $p_1,\ldots,p_k$ such that they are sorted in descending order (this is only to speed up computation and is not strictly necessary). Now, for each trial, draw an auxiliary variable $X$ from a uniform $(0,1)$ distribution. The resulting outcome is the component
$$j=\min\left\{j'\in\{1,\ldots,k\}\colon\left(\sum_{i=1}^{j'}p_i\right)-X\geq 0\right\}.$$
Then $\{X_j=1,\ X_{k'}=0 \text{ for } k'\neq j\}$ is one observation from the multinomial distribution with $p_1,\ldots,p_k$ and $n=1$. A sum of independent repetitions of this experiment is an observation from a multinomial distribution with $n$ equal to the number of such repetitions.
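A plain-Python sketch of this procedure; the function name sample_multinomial is our own choice for illustration.

```python
# Sketch: draw one categorical outcome per trial via a uniform variate and the
# cumulative probabilities, then sum n such one-hot observations.
import random

def sample_multinomial(n, p, rng=random):
    """Return counts (x_1, ..., x_k) of a Multinomial(n, p) sample."""
    counts = [0] * len(p)
    for _ in range(n):
        x = rng.random()              # auxiliary uniform(0, 1) variable
        cumulative = 0.0
        for j, pj in enumerate(p):    # first index where the running sum reaches X
            cumulative += pj
            if cumulative - x >= 0:
                counts[j] += 1
                break
        else:
            counts[-1] += 1           # guard against floating-point round-off
    return counts

print(sample_multinomial(6, [0.2, 0.3, 0.5]))
```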
Given the parameters $p_1,p_2,\ldots,p_k$ and a total for the sample $n$ such that $\sum_{i=1}^k X_i=n$, it is possible to sample sequentially for the number in an arbitrary state $X_i$, by partitioning the state space into state $i$ and not state $i$, conditioned on any prior samples already taken, repeatedly.
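A hedged NumPy sketch of this sequential conditional-binomial scheme: sample $X_1\sim\operatorname{Binomial}(n,p_1)$, then $X_2$ conditional on $X_1$ as $\operatorname{Binomial}(n-X_1,\ p_2/(p_2+\cdots+p_k))$, and so on; the function name is our own choice for illustration.

```python
# Sketch: sequential sampling of a multinomial via conditional binomials.
import numpy as np

def sample_multinomial_sequential(n, p, rng=None):
    """Return a length-k count vector drawn from Multinomial(n, p)."""
    rng = rng or np.random.default_rng()
    p = np.asarray(p, dtype=float)
    counts = np.zeros(len(p), dtype=int)
    remaining_n, remaining_mass = n, 1.0
    for i in range(len(p) - 1):
        if remaining_n == 0:
            break
        # Conditional on previous draws, X_i is binomial on the remaining trials.
        counts[i] = rng.binomial(remaining_n, min(1.0, p[i] / remaining_mass))
        remaining_n -= counts[i]
        remaining_mass -= p[i]
    counts[-1] += remaining_n          # the last state gets whatever is left
    return counts

print(sample_multinomial_sequential(6, [0.2, 0.3, 0.5]))
```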