Normal-gamma distribution explained

In probability theory and statistics, the normal-gamma distribution (or Gaussian-gamma distribution) is a bivariate four-parameter family of continuous probability distributions. It is the conjugate prior of a normal distribution with unknown mean and precision.[1]

Definition

For a pair of random variables, (X,T), suppose that the conditional distribution of X given T is given by

X \mid T \sim \mathcal{N}(\mu, 1/(\lambda T)),

meaning that the conditional distribution of X is a normal distribution with mean \mu and precision \lambda T, or equivalently, with variance 1/(\lambda T).

Suppose also that the marginal distribution of T is given by

T \mid \alpha, \beta \sim \operatorname{Gamma}(\alpha, \beta),

where this means that T has a gamma distribution. Here \lambda, \alpha and \beta are parameters of the joint distribution.

Then (X,T) has a normal-gamma distribution, and this is denoted by

(X,T) \sim \operatorname{NormalGamma}(\mu, \lambda, \alpha, \beta).

Properties

Probability density function

The joint probability density function of (X,T) is

f(x,\tau \mid \mu,\lambda,\alpha,\beta) = \frac{\beta^\alpha \sqrt{\lambda}}{\Gamma(\alpha)\sqrt{2\pi}} \, \tau^{\alpha-\frac{1}{2}} \, e^{-\beta\tau} \exp\left(-\frac{\lambda\tau(x-\mu)^2}{2}\right)
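For concreteness, the density can be evaluated numerically. The following is a minimal Python sketch (the function name and parameter values are illustrative, not from the original article); it computes the log-density first for numerical stability.

    import numpy as np
    from scipy.special import gammaln

    def normal_gamma_pdf(x, tau, mu, lam, alpha, beta):
        # Log of the normalizing constant: beta^alpha * sqrt(lam) / (Gamma(alpha) * sqrt(2*pi)).
        log_norm = alpha * np.log(beta) + 0.5 * np.log(lam) - gammaln(alpha) - 0.5 * np.log(2 * np.pi)
        # Log-kernel: (alpha - 1/2) ln(tau) - beta*tau - lam*tau*(x - mu)^2 / 2.
        log_pdf = log_norm + (alpha - 0.5) * np.log(tau) - beta * tau - 0.5 * lam * tau * (x - mu) ** 2
        return np.exp(log_pdf)

    # Evaluate at one point with hypothetical parameter values.
    print(normal_gamma_pdf(x=0.3, tau=1.2, mu=0.0, lam=2.0, alpha=3.0, beta=1.5))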

Marginal distributions

By construction, the marginal distribution of \tau is a gamma distribution, and the conditional distribution of x given \tau is a Gaussian distribution. The marginal distribution of x is a three-parameter non-standardized Student's t-distribution with parameters (\nu, \mu, \sigma^2) = (2\alpha, \mu, \beta/(\lambda\alpha)).
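This marginal can be checked by simulation. The sketch below (parameter values are hypothetical) draws (x, \tau) pairs by composition and compares the x samples against the stated Student's t-distribution with a Kolmogorov-Smirnov test.

    import numpy as np
    from scipy import stats

    mu, lam, alpha, beta = 0.0, 2.0, 3.0, 1.5  # hypothetical parameters
    rng = np.random.default_rng(0)

    # Draw tau ~ Gamma(alpha, rate=beta), then x | tau ~ Normal(mu, 1/(lam*tau)).
    tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=100_000)
    x = rng.normal(loc=mu, scale=1.0 / np.sqrt(lam * tau))

    # Marginal of x: Student's t with nu = 2*alpha, location mu, scale sqrt(beta/(lam*alpha)).
    t_marginal = stats.t(df=2 * alpha, loc=mu, scale=np.sqrt(beta / (lam * alpha)))
    print(stats.kstest(x, t_marginal.cdf))  # the p-value should not be small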

Exponential family

The normal-gamma distribution is a four-parameter exponential family with natural parameters \alpha - \tfrac{1}{2}, \; -\beta - \lambda\mu^2/2, \; \lambda\mu, \; -\lambda/2 and natural statistics \ln\tau, \; \tau, \; \tau x, \; \tau x^2.

Moments of the natural statistics

The following moments can be easily computed using the moment generating function of the sufficient statistic:

\operatorname{E}(\ln T) = \psi(\alpha) - \ln\beta,

where \psi(\alpha) is the digamma function,

\begin{align}
\operatorname{E}(T) &= \frac{\alpha}{\beta}, \\[5pt]
\operatorname{E}(TX) &= \mu\,\frac{\alpha}{\beta}, \\[5pt]
\operatorname{E}(TX^2) &= \frac{1}{\lambda} + \mu^2\,\frac{\alpha}{\beta}.
\end{align}
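These four identities are easy to confirm by Monte Carlo; the following sketch uses hypothetical parameter values.

    import numpy as np
    from scipy.special import digamma

    mu, lam, alpha, beta = 1.0, 2.0, 3.0, 1.5  # hypothetical parameters
    rng = np.random.default_rng(1)
    tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)
    x = rng.normal(loc=mu, scale=1.0 / np.sqrt(lam * tau))

    print(np.mean(np.log(tau)), digamma(alpha) - np.log(beta))      # E(ln T)
    print(np.mean(tau), alpha / beta)                               # E(T)
    print(np.mean(tau * x), mu * alpha / beta)                      # E(TX)
    print(np.mean(tau * x ** 2), 1 / lam + mu ** 2 * alpha / beta)  # E(TX^2)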

Scaling

If (X,T) \sim \operatorname{NormalGamma}(\mu, \lambda, \alpha, \beta), then for any b > 0, (bX, bT) is distributed as \operatorname{NormalGamma}(b\mu, \lambda/b^3, \alpha, \beta/b).
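The scaling property can likewise be checked by comparing samples of (bX, bT) against direct samples from the rescaled distribution; parameter values below are hypothetical.

    import numpy as np

    mu, lam, alpha, beta, b = 1.0, 2.0, 3.0, 1.5, 2.5  # hypothetical values
    rng = np.random.default_rng(2)

    # Construction 1: sample (X, T) and scale both components by b.
    tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=1_000_000)
    x = rng.normal(loc=mu, scale=1.0 / np.sqrt(lam * tau))

    # Construction 2: sample directly from NormalGamma(b*mu, lam/b^3, alpha, beta/b).
    tau2 = rng.gamma(shape=alpha, scale=b / beta, size=1_000_000)
    x2 = rng.normal(loc=b * mu, scale=1.0 / np.sqrt((lam / b ** 3) * tau2))

    print(np.mean(b * tau), np.mean(tau2))  # moments of bT agree
    print(np.std(b * x), np.std(x2))        # moments of bX agree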

Posterior distribution of the parameters

Assume that x is distributed according to a normal distribution with unknown mean \mu and precision \tau,

x \sim \mathcal{N}(\mu, \tau^{-1}),

and that the prior distribution on \mu and \tau, (\mu, \tau), has a normal-gamma distribution

(\mu, \tau) \sim \operatorname{NormalGamma}(\mu_0, \lambda_0, \alpha_0, \beta_0),

for which the density satisfies

\pi(\mu,\tau) \propto \tau^{\alpha_0 - \frac{1}{2}} \exp[-\beta_0\tau] \exp\left[-\frac{\lambda_0\tau(\mu - \mu_0)^2}{2}\right]

Suppose

x_1, \ldots, x_n \mid \mu, \tau \;\overset{\text{i.i.d.}}{\sim}\; \mathcal{N}\left(\mu, \tau^{-1}\right),

i.e. the components of X = (x_1, \ldots, x_n) are conditionally independent given \mu, \tau, and the conditional distribution of each of them given \mu, \tau is normal with expected value \mu and variance 1/\tau.

The posterior distribution of \mu and \tau given this dataset X can be determined analytically by Bayes' theorem.[2] Explicitly,

P(\tau, \mu \mid X) \propto L(X \mid \tau, \mu)\,\pi(\tau, \mu),

where L is the likelihood of the parameters given the data.

Since the data are i.i.d., the likelihood of the entire dataset is equal to the product of the likelihoods of the individual data samples:

L(X \mid \tau, \mu) = \prod_{i=1}^n L(x_i \mid \tau, \mu).

This expression can be simplified as follows:

\begin{align}
L(X \mid \tau, \mu) &\propto \prod_{i=1}^n \tau^{1/2} \exp\left[\frac{-\tau}{2}(x_i - \mu)^2\right] \\[5pt]
&\propto \tau^{n/2} \exp\left[\frac{-\tau}{2}\sum_{i=1}^n (x_i - \mu)^2\right] \\[5pt]
&\propto \tau^{n/2} \exp\left[\frac{-\tau}{2}\sum_{i=1}^n (x_i - \bar{x} + \bar{x} - \mu)^2\right] \\[5pt]
&\propto \tau^{n/2} \exp\left[\frac{-\tau}{2}\sum_{i=1}^n \left((x_i - \bar{x})^2 + (\bar{x} - \mu)^2\right)\right] \\[5pt]
&\propto \tau^{n/2} \exp\left[\frac{-\tau}{2}\left(n s + n(\bar{x} - \mu)^2\right)\right],
\end{align}

where \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i, the mean of the data samples, and s = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2, the sample variance. (The cross term 2\sum_{i=1}^n (x_i - \bar{x})(\bar{x} - \mu) vanishes because \sum_{i=1}^n (x_i - \bar{x}) = 0.)
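The decomposition \sum_i (x_i - \mu)^2 = n s + n(\bar{x} - \mu)^2 used in the last step can be verified numerically; the data below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(size=10)
    mu = 0.7
    n, xbar, s = x.size, x.mean(), x.var()  # np.var defaults to the 1/n convention
    print(np.isclose(np.sum((x - mu) ** 2), n * s + n * (xbar - mu) ** 2))  # True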

The posterior distribution of the parameters is proportional to the prior times the likelihood.

\begin{align}
P(\tau, \mu \mid X) &\propto L(X \mid \tau, \mu)\,\pi(\tau, \mu) \\
&\propto \tau^{n/2} \exp\left[\frac{-\tau}{2}\left(n s + n(\bar{x} - \mu)^2\right)\right] \tau^{\alpha_0 - \frac{1}{2}} \exp[-\beta_0\tau] \exp\left[-\frac{\lambda_0\tau(\mu - \mu_0)^2}{2}\right] \\
&\propto \tau^{\frac{n}{2} + \alpha_0 - \frac{1}{2}} \exp\left[-\tau\left(\frac{1}{2} n s + \beta_0\right)\right] \exp\left[-\frac{\tau}{2}\left(\lambda_0(\mu - \mu_0)^2 + n(\bar{x} - \mu)^2\right)\right]
\end{align}

The final exponential term is simplified by completing the square.

\begin{align}
\lambda_0(\mu - \mu_0)^2 + n(\bar{x} - \mu)^2 &= \lambda_0\mu^2 - 2\lambda_0\mu\mu_0 + \lambda_0\mu_0^2 + n\mu^2 - 2n\bar{x}\mu + n\bar{x}^2 \\
&= (\lambda_0 + n)\mu^2 - 2(\lambda_0\mu_0 + n\bar{x})\mu + \lambda_0\mu_0^2 + n\bar{x}^2 \\
&= (\lambda_0 + n)\left(\mu^2 - 2\,\frac{\lambda_0\mu_0 + n\bar{x}}{\lambda_0 + n}\,\mu\right) + \lambda_0\mu_0^2 + n\bar{x}^2 \\
&= (\lambda_0 + n)\left(\mu - \frac{\lambda_0\mu_0 + n\bar{x}}{\lambda_0 + n}\right)^2 + \lambda_0\mu_0^2 + n\bar{x}^2 - \frac{(\lambda_0\mu_0 + n\bar{x})^2}{\lambda_0 + n} \\
&= (\lambda_0 + n)\left(\mu - \frac{\lambda_0\mu_0 + n\bar{x}}{\lambda_0 + n}\right)^2 + \frac{\lambda_0 n (\bar{x} - \mu_0)^2}{\lambda_0 + n}
\end{align}
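As a sanity check on the algebra, the identity can be verified symbolically; a sketch using sympy:

    import sympy as sp

    mu, mu0, xbar = sp.symbols('mu mu_0 xbar')
    lam0, n = sp.symbols('lambda_0 n', positive=True)
    lhs = lam0 * (mu - mu0) ** 2 + n * (xbar - mu) ** 2
    rhs = ((lam0 + n) * (mu - (lam0 * mu0 + n * xbar) / (lam0 + n)) ** 2
           + lam0 * n * (xbar - mu0) ** 2 / (lam0 + n))
    print(sp.simplify(lhs - rhs))  # prints 0: both sides are equal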

On inserting this back into the expression above,

\begin{align}
P(\tau, \mu \mid X) &\propto \tau^{\frac{n}{2} + \alpha_0 - \frac{1}{2}} \exp\left[-\tau\left(\frac{1}{2} n s + \beta_0\right)\right] \exp\left[-\frac{\tau}{2}\left(\left(\lambda_0 + n\right)\left(\mu - \frac{\lambda_0\mu_0 + n\bar{x}}{\lambda_0 + n}\right)^2 + \frac{\lambda_0 n (\bar{x} - \mu_0)^2}{\lambda_0 + n}\right)\right] \\[5pt]
&\propto \tau^{\frac{n}{2} + \alpha_0 - \frac{1}{2}} \exp\left[-\tau\left(\frac{1}{2} n s + \beta_0 + \frac{\lambda_0 n (\bar{x} - \mu_0)^2}{2(\lambda_0 + n)}\right)\right] \exp\left[-\frac{\tau}{2}\left(\lambda_0 + n\right)\left(\mu - \frac{\lambda_0\mu_0 + n\bar{x}}{\lambda_0 + n}\right)^2\right]
\end{align}

This final expression is in exactly the same form as a Normal-Gamma distribution, i.e.,

P(\tau, \mu \mid X) = \operatorname{NormalGamma}\left(\frac{\lambda_0\mu_0 + n\bar{x}}{\lambda_0 + n}, \; \lambda_0 + n, \; \alpha_0 + \frac{n}{2}, \; \beta_0 + \frac{1}{2}\left(n s + \frac{\lambda_0 n (\bar{x} - \mu_0)^2}{\lambda_0 + n}\right)\right)
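In code, the conjugate update reduces to four hyperparameter assignments. The following Python sketch (the function name is illustrative) implements the formula above.

    import numpy as np

    def normal_gamma_posterior(data, mu0, lam0, alpha0, beta0):
        # Conjugate update for i.i.d. normal data with unknown mean and precision.
        x = np.asarray(data, dtype=float)
        n = x.size
        xbar = x.mean()
        s = x.var()  # (1/n) * sum((x_i - xbar)^2), as in the derivation
        mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam_n = lam0 + n
        alpha_n = alpha0 + n / 2
        beta_n = beta0 + 0.5 * (n * s + lam0 * n * (xbar - mu0) ** 2 / (lam0 + n))
        return mu_n, lam_n, alpha_n, beta_n

    # Example with hypothetical prior hyperparameters.
    rng = np.random.default_rng(4)
    data = rng.normal(loc=2.0, scale=0.5, size=50)
    print(normal_gamma_posterior(data, mu0=0.0, lam0=1.0, alpha0=2.0, beta0=2.0))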

Interpretation of parameters

The interpretation of parameters in terms of pseudo-observations is as follows:

  1. The mean was estimated from \lambda pseudo-observations with sample mean \mu.
  2. The precision was estimated from 2\alpha pseudo-observations (i.e. possibly a different number of pseudo-observations, to allow the variance of the mean and precision to be controlled separately) with sample mean \mu and sample variance \beta/\alpha (i.e. with sum of squared deviations 2\beta).
  3. The posterior updates the number of pseudo-observations (\lambda_0) simply by adding the corresponding number of new observations (n).

As a consequence, if one has a prior mean of \mu_0 from n_\mu samples and a prior precision of \tau_0 from n_\tau samples, the prior distribution over \mu and \tau is

P(\tau, \mu) = \operatorname{NormalGamma}\left(\mu_0, \; n_\mu, \; \frac{n_\tau}{2}, \; \frac{n_\tau}{2\tau_0}\right)

and after observing n samples with mean \mu and variance s, the posterior probability is

P(\tau, \mu \mid X) = \operatorname{NormalGamma}\left(\frac{n_\mu \mu_0 + n\mu}{n_\mu + n}, \; n_\mu + n, \; \frac{1}{2}(n_\tau + n), \; \frac{1}{2}\left(\frac{n_\tau}{\tau_0} + n s + \frac{n_\mu n (\mu - \mu_0)^2}{n_\mu + n}\right)\right)

Note that in some programming languages, such as Matlab, the gamma distribution is implemented with the inverse definition of \beta, so the fourth argument of the NormalGamma distribution is 2\tau_0/n_\tau.

Generating normal-gamma random variates

Generation of random variates is straightforward:

  1. Sample \tau from a gamma distribution with parameters \alpha and \beta.
  2. Sample x from a normal distribution with mean \mu and variance 1/(\lambda\tau).
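A minimal Python implementation of these two steps (the function name is illustrative; NumPy's gamma sampler is parameterized by shape and scale, so the rate \beta enters as scale 1/\beta):

    import numpy as np

    def sample_normal_gamma(mu, lam, alpha, beta, size, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        # Step 1: tau ~ Gamma(alpha, rate=beta).
        tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=size)
        # Step 2: x | tau ~ Normal(mu, variance 1/(lam * tau)).
        x = rng.normal(loc=mu, scale=1.0 / np.sqrt(lam * tau))
        return x, tau

    x, tau = sample_normal_gamma(mu=0.0, lam=2.0, alpha=3.0, beta=1.5, size=5)
    print(x, tau)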

Related distributions

The normal-inverse-gamma distribution is the same family reparameterized in terms of the variance \sigma^2 = 1/\tau rather than the precision \tau.

References

  1. Bernardo & Smith (1993), pages 136, 268 and 434.
  2. "Bayes' Theorem: Introduction". Retrieved 2014-08-05. Archived 2014-08-07 at https://web.archive.org/web/20140807091855/http://www.trinity.edu/cbrown/bayesweb/.