Divergence (statistics) explained

In information geometry, a divergence is a kind of statistical distance: a binary function which establishes the separation from one probability distribution to another on a statistical manifold.

The simplest divergence is squared Euclidean distance (SED), and divergences can be viewed as generalizations of SED. The other most important divergence is relative entropy (also called Kullback–Leibler divergence), which is central to information theory. There are numerous other specific divergences and classes of divergences, notably f-divergences and Bregman divergences (see the Examples section below).

Definition

Given a differentiable manifold M of dimension n, a divergence on M is a C²-function D : M × M → [0, ∞) satisfying:

  1. D(p, q) ≥ 0 for all p, q ∈ M (non-negativity),
  2. D(p, q) = 0 if and only if p = q (positivity),
  3. At every point p ∈ M, D(p, p + dp) is a positive-definite quadratic form for infinitesimal displacements dp from p.

In applications to statistics, the manifold M is typically the space of parameters of a parametric family of probability distributions.
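As a concrete illustration (not part of the formal definition), the following Python sketch checks these conditions numerically for the squared Euclidean distance mentioned above; the function name squared_euclidean and the test values are illustrative.

```python
import numpy as np

def squared_euclidean(p, q):
    """Squared Euclidean distance, the simplest example of a divergence."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum((p - q) ** 2))

rng = np.random.default_rng(0)
for _ in range(1000):
    p, q = rng.normal(size=3), rng.normal(size=3)
    assert squared_euclidean(p, q) >= 0.0      # condition 1: non-negativity
    assert squared_euclidean(p, p) == 0.0      # condition 2: D(p, q) = 0 when p = q

# Condition 3: for a small displacement dp, D(p, p + dp) behaves as a quadratic form,
# so halving dp should divide the divergence by roughly four.
p, dp = rng.normal(size=3), 1e-3 * rng.normal(size=3)
print(squared_euclidean(p, p + dp) / squared_euclidean(p, p + dp / 2))  # ~ 4
```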

Condition 3 means that D defines an inner product on the tangent space T_pM for every p ∈ M. Since D is C² on M, this defines a Riemannian metric g on M.

Locally at p ∈ M, we may construct a local coordinate chart with coordinates x; the divergence is then

D(x(p), x(p) + dx) = \tfrac{1}{2} dx^T g_p(x)\, dx + O(|dx|^3),

where g_p(x) is a matrix of size n × n. It is the Riemannian metric at the point p, expressed in coordinates x.
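As an illustrative sketch (not from the source), the matrix g_p can be recovered numerically from this expansion by taking a finite-difference Hessian of D(p, ·) at p; for the squared Euclidean distance the result is twice the identity matrix.

```python
import numpy as np

def sed(x, y):
    """Squared Euclidean distance, used here as the divergence D."""
    return float(np.sum((np.asarray(x, float) - np.asarray(y, float)) ** 2))

# Recover g_p from D(x(p), x(p) + dx) = (1/2) dx^T g_p dx + O(|dx|^3)
# via a central-difference Hessian of the map dx -> D(p, p + dx) at dx = 0.
p, n, h = np.array([1.0, -2.0, 0.5]), 3, 1e-3
g = np.empty((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = h * np.eye(n)[i], h * np.eye(n)[j]
        g[i, j] = (sed(p, p + ei + ej) - sed(p, p + ei - ej)
                   - sed(p, p - ei + ej) + sed(p, p - ei - ej)) / (4 * h ** 2)

print(np.round(g, 6))  # approximately 2 * I: the metric induced by SED is twice the Euclidean metric
```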

Dimensional analysis of condition 3 shows that divergence has the dimension of squared distance.

The dual divergence D* is defined as

D^*(p, q) = D(q, p).

When we wish to contrast D against D*, we refer to D as the primal divergence.

Given any divergence D, its symmetrized version is obtained by averaging it with its dual divergence:

D_S(p, q) = \tfrac{1}{2}\bigl(D(p, q) + D(q, p)\bigr).
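For instance (a minimal sketch, assuming discrete distributions and the Kullback–Leibler divergence defined later in this article), the dual and symmetrized divergences can be formed mechanically from any primal divergence; the symmetrized KL divergence is the Jeffreys divergence.

```python
import numpy as np

def kl(p, q):
    """Kullback–Leibler divergence for discrete probability vectors (natural log)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

def dual(D):
    """Dual divergence: D*(p, q) = D(q, p)."""
    return lambda p, q: D(q, p)

def symmetrized(D):
    """Symmetrized divergence: D_S(p, q) = (D(p, q) + D(q, p)) / 2."""
    return lambda p, q: 0.5 * (D(p, q) + D(q, p))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(kl(p, q), dual(kl)(p, q), symmetrized(kl)(p, q))  # primal, dual, and symmetrized (Jeffreys) values
```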

Difference from other similar concepts

Unlike metrics, divergences are not required to be symmetric, and the asymmetry is important in applications. Accordingly, one often refers asymmetrically to the divergence "of q from p" or "from p to q", rather than "between p and q". Secondly, divergences generalize squared distance, not linear distance, and thus do not satisfy the triangle inequality, but some divergences (such as the Bregman divergence) do satisfy generalizations of the Pythagorean theorem.
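A small numerical aside (not from the source) makes the second point concrete: even the squared Euclidean distance, the simplest divergence, violates the triangle inequality on three collinear points.

```python
# Squared Euclidean distance on the real line, evaluated at the collinear points 0, 1, 2.
d2 = lambda x, y: (x - y) ** 2
print(d2(0, 2), d2(0, 1) + d2(1, 2))  # 4 > 2, so the triangle inequality fails
```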

In statistics and probability more broadly, "divergence" generally refers to any kind of function D(p, q), where p, q are probability distributions or other objects under consideration, such that conditions 1 and 2 are satisfied. Condition 3 is required for "divergence" as used in information geometry.

As an example, the total variation distance, a commonly used statistical divergence, does not satisfy condition 3.
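A hedged illustration of why condition 3 fails for total variation: shrinking a perturbation scales the total variation distance linearly but the KL divergence quadratically, so total variation has no positive-definite quadratic leading term at p = q. The helper functions below are illustrative.

```python
import numpy as np

def tv(p, q):
    """Total variation distance: half the L1 distance between the distributions."""
    return 0.5 * float(np.sum(np.abs(np.asarray(p, float) - np.asarray(q, float))))

def kl(p, q):
    """Kullback–Leibler divergence for discrete probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.5])
for eps in (1e-1, 1e-2, 1e-3):
    q = p + np.array([eps, -eps])
    # TV shrinks like eps (first order); KL shrinks like eps**2 (second order).
    print(f"eps={eps:g}  TV={tv(p, q):.2e}  KL={kl(q, p):.2e}")
```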

Notation

Notation for divergences varies significantly between fields, though there are some conventions.

Divergences are generally notated with an uppercase 'D', as in D(x, y), to distinguish them from metric distances, which are notated with a lowercase 'd'. When multiple divergences are in use, they are commonly distinguished with subscripts, as in D_KL for the Kullback–Leibler divergence (KL divergence).

Often a different separator between parameters is used, particularly to emphasize the asymmetry. In information theory, a double bar is commonly used: D(p ∥ q); this is similar to, but distinct from, the notation for conditional probability, P(A | B), and emphasizes interpreting the divergence as a relative measurement, as in relative entropy; this notation is common for the KL divergence. A colon may be used instead, as D(p : q); this emphasizes the relative information supporting the two distributions.

The notation for the parameters varies as well. Uppercase P, Q interprets the parameters as probability distributions, while lowercase p, q or x, y interprets them geometrically as points in a space, and μ_1, μ_2 or m_1, m_2 interprets them as measures.

Geometrical properties

Many properties of divergences can be derived if we restrict S to be a statistical manifold, meaning that it can be parametrized with a finite-dimensional coordinate system θ, so that for a distribution p ∈ S we can write p = p(θ).

For a pair of points p, q ∈ S with coordinates θ_p and θ_q, denote the partial derivatives of D(p, q) as

\begin{align}
  D((\partial_i)_p, q) \ &\stackrel{\mathrm{def}}{=}\ \tfrac{\partial}{\partial\theta_p^i} D(p, q), \\
  D((\partial_i\partial_j)_p, (\partial_k)_q) \ &\stackrel{\mathrm{def}}{=}\ \tfrac{\partial}{\partial\theta_p^i} \tfrac{\partial}{\partial\theta_p^j} \tfrac{\partial}{\partial\theta_q^k} D(p, q), \quad\text{etc.}
\end{align}

Now we restrict these functions to the diagonal p = q, and denote

\begin{align}
  D[\partial_i, \cdot] &: p \mapsto D((\partial_i)_p, p), \\
  D[\partial_i, \partial_j] &: p \mapsto D((\partial_i)_p, (\partial_j)_p), \quad\text{etc.}
\end{align}

By definition, the function D(p, q) is minimized at p = q, and therefore

\begin{align}
  &D[\partial_i, \cdot] = D[\cdot, \partial_i] = 0, \\
  &D[\partial_i\partial_j, \cdot] = D[\cdot, \partial_i\partial_j] = -D[\partial_i, \partial_j] \equiv g^{(D)}_{ij},
\end{align}

where the matrix g^(D) is positive semi-definite and defines a unique Riemannian metric on the manifold S.
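To make these identities concrete, here is a hypothetical finite-difference check (not from the source) for the KL divergence on the one-parameter Bernoulli family, where the induced metric g^(D) is the Fisher information 1/(θ(1−θ)).

```python
import numpy as np

def kl_bernoulli(t, s):
    """KL divergence D(Bernoulli(t), Bernoulli(s))."""
    return t * np.log(t / s) + (1 - t) * np.log((1 - t) / (1 - s))

theta, h = 0.3, 1e-3
fisher = 1.0 / (theta * (1.0 - theta))   # Fisher information of the Bernoulli family

# D[d_i d_j, .]: second derivative in the first argument, restricted to the diagonal.
d_first = (kl_bernoulli(theta + h, theta) - 2.0 * kl_bernoulli(theta, theta)
           + kl_bernoulli(theta - h, theta)) / h ** 2

# -D[d_i, d_j]: minus the mixed derivative across the two arguments, on the diagonal.
d_mixed = -(kl_bernoulli(theta + h, theta + h) - kl_bernoulli(theta + h, theta - h)
            - kl_bernoulli(theta - h, theta + h) + kl_bernoulli(theta - h, theta - h)) / (4 * h ** 2)

print(fisher, d_first, d_mixed)  # all three agree: g^(D) here is the Fisher information metric
```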

Divergence D(·, ·) also defines a unique torsion-free affine connection ∇^(D) with coefficients

\Gamma^{(D)}_{ij,k} = -D[\partial_i\partial_j, \partial_k],

and the dual to this connection, ∇*, is generated by the dual divergence D*.

Thus, a divergence D(·, ·) generates on a statistical manifold a unique dualistic structure (g(D), ∇(D), ∇(D*)). The converse is also true: every torsion-free dualistic structure on a statistical manifold is induced from some globally defined divergence function (which however need not be unique).

For example, when D is an f-divergence[1] for some function f(·), then it generates the metric g^(D_f) = c·g and the connection ∇^(D_f) = ∇^(α), where g is the canonical Fisher information metric, ∇^(α) is the α-connection, c = f''(1), and α = 3 + 2f'''(1)/f''(1).

Examples

The two most important divergences are the relative entropy (Kullback–Leibler divergence, KL divergence), which is central to information theory and statistics, and the squared Euclidean distance (SED). Minimizing these two divergences is the main way that linear inverse problems are solved, via the principle of maximum entropy and least squares, notably in logistic regression and linear regression.

The two most important classes of divergences are the f-divergences and Bregman divergences; however, other types of divergence functions are also encountered in the literature. The only divergence for probabilities over a finite alphabet that is both an f-divergence and a Bregman divergence is the Kullback–Leibler divergence.[2] The squared Euclidean divergence is a Bregman divergence (corresponding to the function x²) but not an f-divergence.

f-divergences

See main article: f-divergence. Given a convex function f : [0, +∞) → (−∞, +∞] such that f(0) = \lim_{t \to 0^+} f(t) and f(1) = 0, the f-divergence generated by f is defined as

D_f(p, q) = \int p(x)\, f\!\left(\frac{q(x)}{p(x)}\right) dx.

Kullback–Leibler divergence:

D_{KL}(p, q) = \int p(x) \ln\!\left(\frac{p(x)}{q(x)}\right) dx

squared Hellinger distance:

H^2(p, q) = 2 \int \left(\sqrt{p(x)} - \sqrt{q(x)}\right)^2 dx

Jensen–Shannon divergence:

D_{JS}(p, q) = \frac{1}{2} \int \left[ p(x) \ln p(x) + q(x) \ln q(x) - (p(x) + q(x)) \ln\!\left(\frac{p(x) + q(x)}{2}\right) \right] dx

α-divergence:

D^{(\alpha)}(p, q) = \frac{4}{1 - \alpha^2} \left( 1 - \int p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}}\, dx \right)

chi-squared divergence:

D_{\chi^2}(p, q) = \int \frac{(p(x) - q(x))^2}{p(x)}\, dx

(α,β)-product divergence:

D_{\alpha,\beta}(p, q) = \frac{2}{(1-\alpha)(1-\beta)} \int \left( 1 - \left(\tfrac{q(x)}{p(x)}\right)^{\frac{1-\alpha}{2}} \right) \left( 1 - \left(\tfrac{q(x)}{p(x)}\right)^{\frac{1-\beta}{2}} \right) p(x)\, dx
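For discrete distributions, the integral in the definition above becomes a sum, and the specific divergences listed here can all be obtained from one generic routine. The following Python sketch is illustrative (the generator names are ad hoc); each generator f is convex with f(1) = 0.

```python
import numpy as np

def f_divergence(f, p, q):
    """Generic f-divergence D_f(p, q) = sum_x p(x) * f(q(x) / p(x)) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * f(q / p)))

# Generators matching the formulas above.
f_kl        = lambda t: -np.log(t)                 # Kullback-Leibler divergence
f_hellinger = lambda t: 2 * (1 - np.sqrt(t)) ** 2  # squared Hellinger distance
f_chi2      = lambda t: (1 - t) ** 2               # chi-squared divergence

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(f_divergence(f_kl, p, q),        np.sum(p * np.log(p / q)))                   # both give D_KL
print(f_divergence(f_hellinger, p, q), 2 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))  # both give H^2
print(f_divergence(f_chi2, p, q),      np.sum((p - q) ** 2 / p))                    # both give D_chi^2
```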

Bregman divergences

See main article: Bregman divergence. Bregman divergences correspond to convex functions on convex sets. Given a strictly convex, continuously differentiable function F on a convex set, known as the Bregman generator, the Bregman divergence measures the convexity of F: the error of the linear approximation of F from q as an approximation of the value at p:

D_F(p, q) = F(p) - F(q) - \langle \nabla F(q),\, p - q \rangle.

The dual divergence to a Bregman divergence is the divergence generated by the convex conjugate of the Bregman generator of the original divergence. For example, for the squared Euclidean distance the generator is ‖x‖², while for the relative entropy the generator is the negative entropy \sum_i x_i \ln x_i.
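A minimal sketch (assuming a user-supplied generator and its gradient, both hypothetical helpers) shows how the same Bregman formula yields the squared Euclidean distance and the KL divergence, matching the generators named above.

```python
import numpy as np

def bregman(F, grad_F, p, q):
    """Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(F(p) - F(q) - grad_F(q) @ (p - q))

# Squared Euclidean distance: generator F(x) = ||x||^2 with gradient 2x.
F_sq, grad_sq = (lambda x: float(x @ x)), (lambda x: 2 * x)
# Relative entropy (KL): generator is the negative entropy F(x) = sum_i x_i * log(x_i).
F_ne, grad_ne = (lambda x: float(np.sum(x * np.log(x)))), (lambda x: np.log(x) + 1)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
print(bregman(F_sq, grad_sq, p, q), float(np.sum((p - q) ** 2)))       # both equal the squared Euclidean distance
print(bregman(F_ne, grad_ne, p, q), float(np.sum(p * np.log(p / q))))  # both equal KL (p and q each sum to 1)
```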

History

The use of the term "divergence" – both what functions it refers to, and what various statistical distances are called – has varied significantly over time, but by c. 2000 had settled on the current usage within information geometry, notably in the textbook .

The term "divergence" for a statistical distance was used informally in various contexts from c. 1910 to c. 1940. Its formal use dates at least to, entitled "On a measure of divergence between two statistical populations defined by their probability distributions", which defined the Bhattacharyya distance, and, entitled "On a Measure of Divergence between Two Multinomial Populations", which defined the Bhattacharyya angle. The term was popularized by its use for the Kullback–Leibler divergence in and its use in the textbook . The term "divergence" was used generally by for statistically distances. Numerous references to earlier uses of statistical distances are given in and .

actually used "divergence" to refer to the symmetrized divergence (this function had already been defined and used by Harold Jeffreys in 1948), referring to the asymmetric function as "the mean information for discrimination ... per observation", while referred to the asymmetric function as the "directed divergence". referred generally to such a function as a "coefficient of divergence", and showed that many existing functions could be expressed as f-divergences, referring to Jeffreys' function as "Jeffreys' measure of divergence" (today "Jeffreys divergence"), and Kullback–Leibler's asymmetric function (in each direction) as "Kullback's and Leibler's measures of discriminatory information" (today "Kullback–Leibler divergence").

The information-geometric definition of divergence (the subject of this article) was initially referred to by alternative terms, including "quasi-distance" and "contrast function", though "divergence" was used by Amari for the α-divergence and has become standard for the general class.

The term "divergence" is in contrast to a distance (metric), since the symmetrized divergence does not satisfy the triangle inequality. For example, the term "Bregman distance" is still found, but "Bregman divergence" is now preferred.

Notationally, Kullback & Leibler (1951) denoted their asymmetric function as I(1:2), while other authors denote such functions with a lowercase 'd', as in d(P_1, P_2).


Notes and References

  1. Nielsen, F.; Nock, R. (2013). "On the Chi square and higher-order Chi distances for approximating f-divergences". IEEE Signal Processing Letters. 21: 10–13. arXiv:1309.3029. doi:10.1109/LSP.2013.2288355.
  2. Jiao, Jiantao; Courtade, Thomas; No, Albert; Venkat, Kartik; Weissman, Tsachy (December 2014). "Information Measures: the Curious Case of the Binary Alphabet". IEEE Transactions on Information Theory. 60 (12): 7616–7626. arXiv:1404.6810. doi:10.1109/TIT.2014.2360184. ISSN 0018-9448.