Evidence lower bound explained

In variational Bayesian methods, the evidence lower bound (often abbreviated ELBO, also sometimes called the variational lower bound[1] or negative variational free energy) is a useful lower bound on the log-likelihood of some observed data.

The ELBO is useful because it provides a guarantee on the worst case for the log-likelihood of some distribution (e.g. p(X)) which models a set of data. The actual log-likelihood may be higher (indicating an even better fit to the distribution), because the ELBO includes a Kullback-Leibler divergence (KL divergence) term which decreases the ELBO whenever an internal part of the model is inaccurate, even if the model fits the data well overall. Thus improving the ELBO score indicates either improving the likelihood of the model p(X) or the fit of a component internal to the model, or both, and the ELBO score makes a good loss function, e.g., for training a deep neural network to improve both the model overall and the internal component. (The internal component is q_\phi(\cdot|x), defined in detail later in this article.)

Definition

Let X and Z be random variables, jointly distributed with distribution p_\theta. For example, p_\theta(X) is the marginal distribution of X, and p_\theta(Z \mid X) is the conditional distribution of Z given X. Then, for a sample x \sim p_{\text{data}}, and any distribution q_\phi, the ELBO is defined as

L(\phi, \theta; x) := \mathbb E_{z\sim q_\phi(\cdot|x)} \left[\ln\frac{p_\theta(x, z)}{q_\phi(z|x)} \right].

The ELBO can equivalently be written as[2]

\begin{aligned}
L(\phi, \theta; x) = {} & \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln p_\theta(x, z) \right] + H[q_\phi(z|x)] \\
= {} & \ln p_\theta(x) - D_{KL}(q_\phi(z|x) \,\|\, p_\theta(z|x)).
\end{aligned}

In the first line, H[q_\phi(z|x)] is the entropy of q_\phi, which relates the ELBO to the Helmholtz free energy.[3] In the second line, \ln p_\theta(x) is called the evidence for x, and D_{KL}(q_\phi(z|x) \,\|\, p_\theta(z|x)) is the Kullback-Leibler divergence between q_\phi and p_\theta. Since the Kullback-Leibler divergence is non-negative, L(\phi,\theta;x) forms a lower bound on the evidence (the ELBO inequality):

\ln p_\theta(x) \ge \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln\frac{p_\theta(x, z)}{q_\phi(z|x)} \right].
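
As a concrete numerical check (not part of the definition above), consider a conjugate Gaussian model in which the evidence \ln p_\theta(x) is available in closed form. The following sketch, whose model, observation, and variational distribution are arbitrary choices made for illustration, estimates the ELBO by Monte Carlo and confirms that it stays below the exact evidence:

```python
# A minimal numeric sketch of the ELBO inequality. The toy model and the
# (deliberately imperfect) variational distribution are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

sigma = 0.5          # observation noise, an arbitrary choice
x = 1.3              # a single observed data point

# Generative model: Z ~ N(0, 1), X | Z = z ~ N(z, sigma^2),
# so the exact evidence is X ~ N(0, 1 + sigma^2).
log_evidence = norm.logpdf(x, loc=0.0, scale=np.sqrt(1 + sigma**2))

# An imperfect variational posterior q_phi(z | x) = N(mu_q, s_q^2)
mu_q, s_q = 0.8, 0.6

# Monte Carlo estimate of the ELBO: E_{z ~ q}[ln p(x, z) - ln q(z | x)]
z = rng.normal(mu_q, s_q, size=100_000)
log_joint = norm.logpdf(z, 0.0, 1.0) + norm.logpdf(x, z, sigma)   # ln p(z) + ln p(x | z)
log_q = norm.logpdf(z, mu_q, s_q)
elbo = np.mean(log_joint - log_q)

print(f"ln p(x) = {log_evidence:.4f}")
print(f"ELBO    = {elbo:.4f}   (always <= ln p(x))")
```

If q_\phi(z|x) were set to the exact posterior N(x/(1+\sigma^2), \sigma^2/(1+\sigma^2)), the gap between the two printed numbers would vanish.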

Motivation

Variational Bayesian inference

Suppose we have an observable random variable X, and we want to find its true distribution p^*. This would allow us to generate data by sampling, and estimate probabilities of future events. In general, it is impossible to find p^* exactly, forcing us to search for a good approximation.

That is, we define a sufficiently large parametric family \{p_\theta\}_{\theta\in\Theta} of distributions, then solve for \min_\theta L(p_\theta, p^*) for some loss function L. One possible way to solve this is by considering a small variation from p_\theta to p_{\theta+\delta}, and solving for L(p_\theta, p^*) - L(p_{\theta+\delta}, p^*) = 0. This is a problem in the calculus of variations, thus it is called the variational method.

Since there are not many explicitly parametrized distribution families (all the classical distribution families, such as the normal distribution, the Gumbel distribution, etc., are far too simplistic to model the true distribution), we consider implicitly parametrized probability distributions:

  * First, define a simple distribution p(z) over a latent random variable Z. Usually a normal distribution or a uniform distribution suffices.
  * Second, define a family of complicated functions f_\theta (such as a deep neural network) parametrized by \theta.
  * Finally, define a way to convert any f_\theta(z) into a simple distribution over the observable random variable X. For example, let f_\theta(z) = (f_1(z), f_2(z)) have two outputs, then we can define the corresponding distribution over X to be the normal distribution N(f_1(z), e^{f_2(z)}).

This defines a family of joint distributions p_\theta over (X, Z). It is very easy to sample (x, z) \sim p_\theta: simply sample z \sim p, then compute f_\theta(z), and finally sample x \sim p_\theta(\cdot|z) using f_\theta(z).
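
As an illustration of this sampling recipe, here is a minimal sketch; the tiny affine f_\theta and its parameter values are placeholders standing in for a trained deep network and are not taken from the article:

```python
# Ancestral sampling from the joint p_theta(x, z): sample z from the prior,
# push it through f_theta, then sample x ~ N(f1(z), e^{f2(z)}).
import numpy as np

rng = np.random.default_rng(0)

def f_theta(z, theta):
    """A stand-in for a deep network: an affine map with two outputs (f1, f2)."""
    W, b = theta
    f1, f2 = W @ np.array([z, 1.0]) + b
    return f1, f2

theta = (np.array([[2.0, 0.5], [0.3, -1.0]]), np.array([0.0, 0.0]))  # arbitrary parameters

def sample_joint(n):
    """Draw n samples (x, z) ~ p_theta by ancestral sampling."""
    samples = []
    for _ in range(n):
        z = rng.standard_normal()                 # z ~ p(z) = N(0, 1)
        f1, f2 = f_theta(z, theta)                # deterministic transform
        x = rng.normal(f1, np.sqrt(np.exp(f2)))   # x ~ N(f1(z), e^{f2(z)})
        samples.append((x, z))
    return samples

print(sample_joint(3))
```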

In other words, we have a generative model for both the observable and the latent. Now, we consider a distribution p_\theta good, if it is a close approximation of p^*:

p_\theta(X) \approx p^*(X)

Since the distribution on the right side is over X only, the distribution on the left side must marginalize the latent variable Z away. In general, it's impossible to perform the integral p_\theta(x) = \int p_\theta(x|z) p(z)\,dz, forcing us to perform another approximation.

Since p_\theta(x) = \frac{p_\theta(x|z) p(z)}{p_\theta(z|x)} (Bayes' rule), it suffices to find a good approximation of p_\theta(z|x). So define another distribution family q_\phi(z|x) and use it to approximate p_\theta(z|x). This is a discriminative model for the latent.

The entire situation is summarized in the following table:

| X: observable | X, Z | Z: latent |
|---|---|---|
| p^*(x) \approx p_\theta(x) \approx \frac{p_\theta(x|z)p(z)}{q_\phi(z|x)}, approximable |  | p(z), easy |
|  | p_\theta(x|z)p(z), easy | p_\theta(z|x) \approx q_\phi(z|x), approximable |
| p_\theta(x|z), easy |  |  |
In Bayesian language, X is the observed evidence, and Z is the latent/unobserved. The distribution p over Z is the prior distribution over Z, p_\theta(x|z) is the likelihood function, and p_\theta(z|x) is the posterior distribution over Z.

Given an observation x, we can infer what z likely gave rise to x by computing p_\theta(z|x). The usual Bayesian method is to estimate the integral p_\theta(x) = \int p_\theta(x|z) p(z)\,dz, then compute by Bayes' rule p_\theta(z|x) = \frac{p_\theta(x|z) p(z)}{p_\theta(x)}. This is expensive to perform in general, but if we can simply find a good approximation q_\phi(z|x) \approx p_\theta(z|x) for most x, z, then we can infer z from x cheaply. Thus, the search for a good q_\phi is also called amortized inference.

All in all, we have found a problem of variational Bayesian inference.

Deriving the ELBO

A basic result in variational inference is that minimizing the Kullback-Leibler divergence (KL-divergence) is equivalent to maximizing the log-likelihood:

\mathbb E_{x\sim p^*(x)}[\ln p_\theta (x)] = -H(p^*) - D_{KL}(p^*(x) \,\|\, p_\theta(x))

where H(p^*) = -\mathbb E_{x\sim p^*}[\ln p^*(x)] is the entropy of the true distribution. So if we can maximize \mathbb E_{x\sim p^*(x)}[\ln p_\theta(x)], we can minimize D_{KL}(p^*(x) \,\|\, p_\theta(x)), and consequently find an accurate approximation p_\theta \approx p^*.
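
To see why, write \ln p_\theta(x) = \ln p^*(x) - \ln\frac{p^*(x)}{p_\theta(x)} and take the expectation over x \sim p^*:

\mathbb E_{x\sim p^*(x)}[\ln p_\theta(x)] = \mathbb E_{x\sim p^*(x)}[\ln p^*(x)] - \mathbb E_{x\sim p^*(x)}\left[\ln \frac{p^*(x)}{p_\theta(x)}\right] = -H(p^*) - D_{KL}(p^*(x) \,\|\, p_\theta(x)).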

To maximize \mathbb E_{x\sim p^*(x)}[\ln p_\theta(x)], we simply sample many x_i \sim p^*(x), i.e. use importance sampling:

N \max_\theta \mathbb E_{x\sim p^*(x)}[\ln p_\theta (x)] \approx \max_\theta \sum_i \ln p_\theta (x_i)

where N is the number of samples drawn from the true distribution. This approximation can be seen as overfitting.
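
As a small illustration, maximizing the sampled sum amounts to ordinary maximum-likelihood fitting on the drawn samples; the Gaussian family below is an arbitrary choice, convenient because its maximizer has a closed form:

```python
# A minimal sketch: approximate max_theta E_{x~p*}[ln p_theta(x)] by maximizing the
# log-likelihood of samples drawn from p*. For the family {N(mu, s^2)} the maximizer
# is the sample mean and sample standard deviation.
import numpy as np

rng = np.random.default_rng(0)
xs = rng.normal(3.0, 2.0, size=10_000)           # stand-in for x_i ~ p*(x)

mu_hat, s_hat = xs.mean(), xs.std()              # argmax_theta sum_i ln N(x_i; mu, s^2)
print(f"fitted N({mu_hat:.3f}, {s_hat:.3f}^2)")  # close to the true N(3, 2^2)
```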

In order to maximize \sum_i \ln p_\theta(x_i), it's necessary to find \ln p_\theta(x):

\ln p_\theta(x) = \ln \int p_\theta(x|z) p(z)\,dz

This usually has no closed form and must be estimated. The usual way to estimate integrals is Monte Carlo integration with importance sampling:

\int p_\theta(x|z) p(z)\,dz = \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\frac{p_\theta (x, z)}{q_\phi(z|x)}\right]

where q_\phi(z|x) is a sampling distribution over z that we use to perform the Monte Carlo integration.

So we see that if we sample z \sim q_\phi(\cdot|x), then \frac{p_\theta(x,z)}{q_\phi(z|x)} is an unbiased estimator of p_\theta(x). Unfortunately, this does not give us an unbiased estimator of \ln p_\theta(x), because \ln is nonlinear. Indeed, we have by Jensen's inequality,

\ln p_\theta(x) = \ln \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\frac{p_\theta (x, z)}{q_\phi(z|x)}\right] \geq \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln\frac{p_\theta (x, z)}{q_\phi(z|x)}\right]

In fact, all the obvious estimators of \ln p_\theta(x) are biased downwards, because no matter how many samples of z_i \sim q_\phi(\cdot|x) we take, we have by Jensen's inequality:

\mathbb E_{z_i\sim q_\phi(\cdot|x)}\left[\ln \left(\frac 1N \sum_i \frac{p_\theta (x, z_i)}{q_\phi(z_i|x)}\right) \right] \leq \ln \mathbb E_{z_i\sim q_\phi(\cdot|x)}\left[\frac 1N \sum_i \frac{p_\theta (x, z_i)}{q_\phi(z_i|x)} \right] = \ln p_\theta(x)

Subtracting the right side, we see that the problem comes down to a biased estimator of zero:

\mathbb E_{z_i\sim q_\phi(\cdot|x)}\left[\ln \left(\frac 1N \sum_i \frac{p_\theta (z_i|x)}{q_\phi(z_i|x)}\right) \right] \leq 0

At this point, we could branch off towards the development of an importance-weighted autoencoder, but we will instead continue with the simplest case, N = 1:

\ln p_\theta(x) = \ln \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\frac{p_\theta (x, z)}{q_\phi(z|x)}\right] \geq \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln\frac{p_\theta (x, z)}{q_\phi(z|x)}\right]

The tightness of the inequality has a closed form:

\ln p_\theta(x) - \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln\frac{p_\theta (x, z)}{q_\phi(z|x)}\right] = D_{KL}(q_\phi(\cdot | x) \,\|\, p_\theta(\cdot | x)) \geq 0

We have thus obtained the ELBO function:

L(\phi, \theta; x) := \ln p_\theta(x) - D_{KL}(q_\phi(\cdot | x) \,\|\, p_\theta(\cdot | x))
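
The downward bias, and the way it shrinks as N grows (which is what the importance-weighted autoencoder exploits), can be seen numerically. The sketch below uses a conjugate Gaussian toy model, chosen as an assumption for illustration because \ln p_\theta(x) is then known exactly: Z \sim N(0,1), X | Z = z \sim N(z, \sigma^2), with a deliberately imperfect Gaussian proposal q_\phi(z|x).

```python
# A numeric sketch of E[ln((1/N) sum_i w_i)] <= ln p_theta(x), where
# w_i = p_theta(x, z_i) / q_phi(z_i|x) are importance weights. The toy model
# and proposal are illustrative assumptions, not taken from the article.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma, x = 0.5, 1.3                  # model: Z ~ N(0,1), X|Z=z ~ N(z, sigma^2)
mu_q, s_q = 0.8, 0.6                 # imperfect proposal q_phi(z|x) = N(mu_q, s_q^2)
log_px = norm.logpdf(x, 0.0, np.sqrt(1 + sigma**2))   # exact evidence ln p_theta(x)

def expected_log_avg_weight(N, reps=20_000):
    """Estimate E[ln((1/N) sum_i w_i)] by averaging over many repetitions."""
    z = rng.normal(mu_q, s_q, size=(reps, N))
    log_w = norm.logpdf(z, 0.0, 1.0) + norm.logpdf(x, z, sigma) - norm.logpdf(z, mu_q, s_q)
    m = log_w.max(axis=1, keepdims=True)              # numerically stable log-mean-exp
    return np.mean(m[:, 0] + np.log(np.exp(log_w - m).mean(axis=1)))

for N in (1, 4, 16, 64):
    print(f"N = {N:2d}: {expected_log_avg_weight(N):+.4f}   (ln p_theta(x) = {log_px:+.4f})")
# The N = 1 value is the ELBO; larger N tightens the lower bound towards ln p_theta(x).
```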

Maximizing the ELBO

For fixed x, the optimization \max_{\phi,\theta} L(\phi,\theta;x) simultaneously attempts to maximize \ln p_\theta(x) and minimize D_{KL}(q_\phi(\cdot|x) \,\|\, p_\theta(\cdot|x)). If the parametrizations for p_\theta and q_\phi are flexible enough, we would obtain some \hat\phi, \hat\theta, such that we have simultaneously

\ln p_{\hat\theta}(x) \approx \max_\theta \ln p_\theta(x); \quad q_{\hat\phi}(\cdot | x) \approx p_{\hat\theta}(\cdot | x)

Since

\mathbb E_{x\sim p^*(x)}[\ln p_\theta (x)] = -H(p^*) - D_{KL}(p^*(x) \,\|\, p_\theta(x))

we have

\ln p_{\hat\theta}(x) \approx \max_\theta \left(-H(p^*) - D_{KL}(p^*(x) \,\|\, p_\theta(x))\right)

and so

\hat\theta \approx \arg\min_\theta D_{KL}(p^*(x) \,\|\, p_\theta(x))

In other words, maximizing the ELBO would simultaneously allow us to obtain an accurate generative model p_{\hat\theta} \approx p^* and an accurate discriminative model q_{\hat\phi}(\cdot|x) \approx p_{\hat\theta}(\cdot|x).
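
In practice this joint maximization is usually carried out by stochastic gradient ascent on \theta and \phi together. The sketch below is a minimal illustration under several assumptions not made in this article: PyTorch as the framework, a one-dimensional linear-Gaussian generative model, an amortized Gaussian q_\phi, and the reparameterization trick to backpropagate through the sampling of z.

```python
# A minimal sketch of maximizing the ELBO jointly over theta and phi.
import torch

torch.manual_seed(0)

# Data from a stand-in "true" distribution p^*: x ~ N(2.0, 1.1^2).
data = 2.0 + 1.1 * torch.randn(512)

# Generative model p_theta: z ~ N(0, 1), x | z ~ N(w*z + b, sigma_x^2).
theta = torch.zeros(2, requires_grad=True)        # (w, b)
log_sigma_x = torch.zeros(1, requires_grad=True)

# Amortized posterior q_phi(z | x) = N(a*x + c, exp(d*x + e)^2).
phi = torch.zeros(4, requires_grad=True)          # (a, c, d, e)

opt = torch.optim.Adam([theta, log_sigma_x, phi], lr=1e-2)

def elbo(x):
    mu_q = phi[0] * x + phi[1]
    std_q = torch.exp(phi[2] * x + phi[3])
    z = mu_q + std_q * torch.randn_like(x)                      # reparameterized z ~ q_phi(.|x)
    log_q = torch.distributions.Normal(mu_q, std_q).log_prob(z)
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(z)
    log_lik = torch.distributions.Normal(theta[0] * z + theta[1],
                                         torch.exp(log_sigma_x)).log_prob(x)
    return (log_prior + log_lik - log_q).mean()                 # E_q[ln p(x,z) - ln q(z|x)]

for _ in range(2000):
    opt.zero_grad()
    (-elbo(data)).backward()      # maximize the ELBO = minimize its negative
    opt.step()

print("ELBO per data point after training:", elbo(data).item())
```

Each step draws a single reparameterized z per data point, so the loss is the single-sample Monte Carlo estimate of the negative ELBO averaged over the dataset, which here plays the role of the expectation over x \sim p^*(x).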

Main forms

The ELBO has many possible expressions, each with some different emphasis.

\mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln\frac{p_\theta(x, z)}{q_\phi(z|x)}\right] = \int q_\phi(z|x)\ln\frac{p_\theta(x, z)}{q_\phi(z|x)}\,dz

This form shows that if we sample z \sim q_\phi(\cdot|x), then \ln\frac{p_\theta(x,z)}{q_\phi(z|x)} is an unbiased estimator of the ELBO.

\ln p_\theta(x) - D_{KL}(q_\phi(\cdot | x) \;\|\; p_\theta(\cdot | x))

This form shows that the ELBO is a lower bound on the evidence \ln p_\theta(x), and that maximizing the ELBO with respect to \phi is equivalent to minimizing the KL-divergence from p_\theta(\cdot|x) to q_\phi(\cdot|x).

\mathbb E_{z\sim q_\phi(\cdot|x)}[\ln p_\theta(x|z)] - D_{KL}(q_\phi(\cdot | x) \;\|\; p)

This form shows that maximizing the ELBO simultaneously attempts to keep q_\phi(\cdot|x) close to p and to concentrate q_\phi(\cdot|x) on those z that maximize \ln p_\theta(x|z). That is, the approximate posterior q_\phi(\cdot|x) balances between staying close to the prior p and moving towards the maximum likelihood \arg\max_z \ln p_\theta(x|z).
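
This form follows from the defining expression by factoring the joint distribution as p_\theta(x, z) = p_\theta(x|z)\,p(z):

\mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln\frac{p_\theta(x, z)}{q_\phi(z|x)}\right] = \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln p_\theta(x|z)\right] + \mathbb E_{z\sim q_\phi(\cdot|x)}\left[\ln\frac{p(z)}{q_\phi(z|x)}\right] = \mathbb E_{z\sim q_\phi(\cdot|x)}[\ln p_\theta(x|z)] - D_{KL}(q_\phi(\cdot | x) \,\|\, p).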

H(q_\phi(\cdot | x)) + \mathbb E_{z\sim q_\phi(\cdot|x)}[\ln p_\theta(z|x)] + \ln p_\theta(x)

This form shows that maximizing the ELBO simultaneously attempts to keep the entropy of q_\phi(\cdot|x) high and to concentrate q_\phi(\cdot|x) on those z that maximize \ln p_\theta(z|x). That is, the approximate posterior q_\phi(\cdot|x) balances between being a uniform distribution and moving towards the maximum a posteriori \arg\max_z \ln p_\theta(z|x).

Data-processing inequality

Suppose we take N independent samples from p^*, and collect them in the dataset D = \{x_1, ..., x_N\}; then we have the empirical distribution q_D(x) = \frac 1N \sum_i \delta_{x_i}.

Fitting p_\theta(x) to q_D(x) can be done, as usual, by maximizing the log-likelihood \ln p_\theta(D):

D_{KL}(q_D(x) \,\|\, p_\theta(x)) = -\frac 1N \sum_i \ln p_\theta(x_i) - H(q_D) = -\frac 1N \ln p_\theta(D) - H(q_D)

Now, by the ELBO inequality, we can bound \ln p_\theta(D), and thus

D_{KL}(q_D(x) \,\|\, p_\theta(x)) \leq -\frac 1N L(\phi, \theta; D) - H(q_D)

The right-hand side simplifies to a KL-divergence, and so we get

D_{KL}(q_D(x) \,\|\, p_\theta(x)) \leq -\frac 1N \sum_i L(\phi, \theta; x_i) - H(q_D) = D_{KL}(q_{D,\phi}(x, z) \,\|\, p_\theta(x, z))

This result can be interpreted as a special case of the data processing inequality.

In this interpretation, maximizing L(\phi,\theta;D) = \sum_i L(\phi,\theta;x_i) is minimizing D_{KL}(q_{D,\phi}(x,z) \,\|\, p_\theta(x,z)), which upper-bounds the real quantity of interest D_{KL}(q_D(x) \,\|\, p_\theta(x)) via the data-processing inequality. That is, we append a latent space to the observable space, paying the price of a weaker inequality for the sake of more computationally efficient minimization of the KL-divergence.[4]
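
The upper bound can also be verified directly from the chain rule for the KL-divergence, writing q_{D,\phi}(x,z) = q_D(x)\,q_\phi(z|x) and p_\theta(x,z) = p_\theta(x)\,p_\theta(z|x):

D_{KL}(q_{D,\phi}(x,z) \,\|\, p_\theta(x,z)) = D_{KL}(q_D(x) \,\|\, p_\theta(x)) + \mathbb E_{x\sim q_D}\left[D_{KL}(q_\phi(\cdot|x) \,\|\, p_\theta(\cdot|x))\right] \ge D_{KL}(q_D(x) \,\|\, p_\theta(x)),

since the inner KL-divergence term is non-negative.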

References

  1. Kingma, Diederik P.; Welling, Max (2014-05-01). "Auto-Encoding Variational Bayes". arXiv:1312.6114 [stat.ML].
  2. Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning. Adaptive Computation and Machine Learning. Cambridge, Mass.: The MIT Press. Chapter 19. ISBN 978-0-262-03561-3.
  3. Hinton, Geoffrey E.; Zemel, Richard (1993). "Autoencoders, Minimum Description Length and Helmholtz Free Energy". Advances in Neural Information Processing Systems. 6. Morgan-Kaufmann.
  4. Kingma, Diederik P.; Welling, Max (2019-11-27). "An Introduction to Variational Autoencoders". Foundations and Trends in Machine Learning. 12 (4), Section 2.7. doi:10.1561/2200000056. arXiv:1906.02691.