Glivenko–Cantelli theorem

In the theory of probability, the Glivenko–Cantelli theorem (sometimes referred to as the Fundamental Theorem of Statistics), named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli, describes the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows.[1] Specifically, the empirical distribution function converges uniformly to the true distribution function almost surely.

The uniform convergence of more general empirical measures is the defining property of Glivenko–Cantelli classes of functions or sets.[2] Glivenko–Cantelli classes arise in Vapnik–Chervonenkis theory, with applications to machine learning; applications can also be found in econometrics, in connection with M-estimators.

Statement

Assume that X_1, X_2, \ldots are independent and identically distributed random variables in \mathbb{R} with common cumulative distribution function F(x). The empirical distribution function for X_1, \ldots, X_n is defined by

F_n(x) = \frac{1}{n} \sum_{i=1}^{n} I_{[X_i, \infty)}(x) = \frac{1}{n} \bigl| \{\, i \mid X_i \le x,\ 1 \le i \le n \,\} \bigr|,

where I_C is the indicator function of the set C.
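For concreteness, the following short Python sketch (the helper name ecdf is illustrative, not taken from any particular library) evaluates F_n at a point by counting the fraction of observations that do not exceed it.

    import numpy as np

    def ecdf(sample, x):
        """Empirical distribution function F_n(x): the fraction of the n
        observations that are less than or equal to x."""
        sample = np.asarray(sample)
        return np.mean(sample <= x)

    # Example with n = 5 observations: three of them are <= 2.0, so F_5(2.0) = 0.6.
    data = [0.3, 1.7, 2.0, 3.5, 4.1]
    print(ecdf(data, 2.0))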

For every (fixed) x, F_n(x) is a sequence of random variables which converges to F(x) almost surely by the strong law of large numbers. Glivenko and Cantelli strengthened this result by proving uniform convergence of F_n to F.

Theorem

\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} \bigl| F_n(x) - F(x) \bigr| \longrightarrow 0

almost surely.[3]

This theorem originates with Valery Glivenko[4] and Francesco Cantelli,[5] in 1933.

Remarks:

If X_n is a stationary ergodic process, then F_n(x) converges almost surely to F(x) = \operatorname{E}\bigl[ \mathbf{1}_{X_1 \le x} \bigr]. The Glivenko–Cantelli theorem gives a stronger mode of convergence than this in the iid case.
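A simple simulation makes the uniform convergence visible. The sketch below is only an illustration under an assumed standard normal distribution (any other F would do); it uses the standard closed form for the sup-distance between an empirical and a continuous distribution function, evaluated at the order statistics.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def sup_distance(sample, cdf):
        """Exact ||F_n - F||_inf for a continuous cdf, computed at the jump points of F_n."""
        u = cdf(np.sort(sample))
        n = len(u)
        i = np.arange(1, n + 1)
        return max(np.max(i / n - u), np.max(u - (i - 1) / n))

    for n in [100, 1_000, 10_000, 100_000]:
        sample = rng.standard_normal(n)
        print(n, sup_distance(sample, norm.cdf))  # shrinks towards 0 as n grows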

Proof

For simplicity, consider the case of a continuous random variable X. Fix -\infty = x_0 < x_1 < \cdots < x_{m-1} < x_m = \infty such that F(x_j) - F(x_{j-1}) = \frac{1}{m} for j = 1, \ldots, m. Now for all x \in \mathbb{R} there exists j \in \{1, \ldots, m\} such that x \in [x_{j-1}, x_j].

\begin{align}
F_n(x) - F(x) &\le F_n(x_j) - F(x_{j-1}) = F_n(x_j) - F(x_j) + \frac{1}{m}, \\
F_n(x) - F(x) &\ge F_n(x_{j-1}) - F(x_j) = F_n(x_{j-1}) - F(x_{j-1}) - \frac{1}{m}.
\end{align}

Therefore,

\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} \bigl| F_n(x) - F(x) \bigr| \le \max_{j \in \{1, \ldots, m\}} \bigl| F_n(x_j) - F(x_j) \bigr| + \frac{1}{m}.

Since \max_{j \in \{1, \ldots, m\}} \bigl| F_n(x_j) - F(x_j) \bigr| \to 0 almost surely by the strong law of large numbers, we can guarantee that for any positive \varepsilon and any integer m such that 1/m < \varepsilon, we can find N such that for all n \ge N we have \max_{j \in \{1, \ldots, m\}} \bigl| F_n(x_j) - F(x_j) \bigr| \le \varepsilon - 1/m almost surely. Combined with the above result, this further implies that \|F_n - F\|_\infty \le \varepsilon almost surely, which is the definition of almost sure convergence.
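The partition bound used above can be checked numerically. The sketch below is a hedged illustration assuming a standard normal F (so its quantile function gives points x_j with F(x_j) - F(x_{j-1}) = 1/m); it verifies that the exact sup-distance never exceeds the maximum discrepancy over the partition points plus 1/m.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n, m = 5_000, 50
    sample = np.sort(rng.standard_normal(n))

    # Exact ||F_n - F||_inf, evaluated at the jump points of F_n.
    u = norm.cdf(sample)
    i = np.arange(1, n + 1)
    sup_dist = max(np.max(i / n - u), np.max(u - (i - 1) / n))

    # Interior partition points x_1 < ... < x_{m-1} with F(x_j) - F(x_{j-1}) = 1/m.
    x = norm.ppf(np.arange(1, m) / m)
    F_n_at_x = np.searchsorted(sample, x, side="right") / n
    bound = np.max(np.abs(F_n_at_x - norm.cdf(x))) + 1 / m

    print(sup_dist, bound, sup_dist <= bound)  # the bound from the proof holds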

Empirical measures

One can generalize the empirical distribution function by replacing the set (-\infty, x] by an arbitrary set C from a class of sets \mathcal{C} to obtain an empirical measure indexed by sets C \in \mathcal{C},

P_n(C) = \frac{1}{n} \sum_{i=1}^{n} I_C(X_i), \quad C \in \mathcal{C},

where I_C(x) is the indicator function of each set C.
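As a small illustration (the interval class below is purely hypothetical, chosen for readability), the empirical measure of a set is simply the fraction of sample points falling in it.

    import numpy as np

    def empirical_measure(sample, indicator):
        """P_n(C) = (1/n) * sum_i I_C(X_i), with C described by its indicator function."""
        return np.mean(indicator(np.asarray(sample)))

    rng = np.random.default_rng(2)
    sample = rng.uniform(0.0, 1.0, size=1_000)   # X_i ~ Uniform[0, 1]

    # An illustrative class of sets C: half-open intervals (a, b] in [0, 1].
    for a, b in [(0.0, 0.25), (0.25, 0.5), (0.0, 1.0)]:
        p_n = empirical_measure(sample, lambda x, a=a, b=b: (x > a) & (x <= b))
        print((a, b), p_n)   # close to the true measure b - a of each interval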

A further generalization is the map induced by P_n on measurable real-valued functions f, which is given by

f \mapsto P_n f = \int_{\mathcal{S}} f \, dP_n = \frac{1}{n} \sum_{i=1}^{n} f(X_i), \quad f \in \mathcal{F}.

Then it becomes an important property of these classes whether the strong law of large numbers holds uniformly on \mathcal{F} or \mathcal{C}.
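For function classes the empirical measure acts by averaging. The sketch below (with a small, arbitrary class of functions chosen only for illustration, and P the uniform distribution on [0, 1]) compares P_n f with the exact integral P f.

    import numpy as np

    rng = np.random.default_rng(3)
    sample = rng.uniform(0.0, 1.0, size=10_000)   # X_i ~ P = Uniform[0, 1]

    # A small illustrative class F, stored with the exact value of P f under Uniform[0, 1].
    F = [
        ("x^2", lambda x: x ** 2, 1 / 3),
        ("sin(pi*x)", lambda x: np.sin(np.pi * x), 2 / np.pi),
    ]

    for name, f, Pf in F:
        Pn_f = np.mean(f(sample))      # P_n f = (1/n) * sum_i f(X_i)
        print(name, Pn_f, Pf)          # the sample average approximates P f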

Glivenko–Cantelli class

Consider a set \mathcal{S} with a sigma algebra A of Borel subsets and a probability measure P. For a class of subsets

\mathcal{C} \subset \bigl\{ C : C \text{ is a measurable subset of } \mathcal{S} \bigr\}

and a class of functions

\mathcal{F} \subset \bigl\{ f : \mathcal{S} \to \mathbb{R},\ f \text{ is measurable} \bigr\},

define the random variables

\bigl\| P_n - P \bigr\|_{\mathcal{C}} = \sup_{C \in \mathcal{C}} \bigl| P_n(C) - P(C) \bigr|,

\bigl\| P_n - P \bigr\|_{\mathcal{F}} = \sup_{f \in \mathcal{F}} \bigl| P_n f - P f \bigr|,

where P_n(C) is the empirical measure, P_n f is the corresponding map, and

P f = \int_{\mathcal{S}} f \, dP,

assuming that it exists.

Definitions

\mathcal{C} is called a Glivenko–Cantelli class (or GC class, or sometimes strong GC class) with respect to a probability measure P if

\bigl\| P_n - P \bigr\|_{\mathcal{C}} \to 0

almost surely as n \to \infty.

\mathcal{C} is a weak Glivenko–Cantelli class with respect to P if it instead satisfies the weaker condition

\bigl\| P_n - P \bigr\|_{\mathcal{C}} \to 0

in probability as n \to \infty.

\mathcal{C} is called a universal Glivenko–Cantelli class if it is a GC class with respect to any probability measure P on (\mathcal{S}, A).

\mathcal{C} is a weak uniform Glivenko–Cantelli class if the convergence is uniform over all probability measures P on (\mathcal{S}, A): for every \varepsilon > 0,

\sup_{P \in \mathcal{P}(\mathcal{S}, A)} \Pr\Bigl( \bigl\| P_n - P \bigr\|_{\mathcal{C}} > \varepsilon \Bigr) \to 0

as n \to \infty.

\mathcal{C} is a (strong) uniform Glivenko–Cantelli class if it satisfies the stronger condition that for every \varepsilon > 0,

\sup_{P \in \mathcal{P}(\mathcal{S}, A)} \Pr\Bigl( \sup_{m \ge n} \bigl\| P_m - P \bigr\|_{\mathcal{C}} > \varepsilon \Bigr) \to 0

as n \to \infty.

Here \mathcal{P}(\mathcal{S}, A) denotes the set of all probability measures on (\mathcal{S}, A).

Glivenko–Cantelli classes of functions (as well as their uniform and universal forms) are defined similarly, replacing all instances of \mathcal{C} with \mathcal{F}.

The weak and strong versions of the various Glivenko-Cantelli properties often coincide under certain regularity conditions. The following definition commonly appears in such regularity conditions:

\mathcal{F} is image-admissible Suslin if there exists a Suslin space \Omega and a surjection T : \Omega \to \mathcal{F} such that the map (x, y) \mapsto [T(y)](x) is measurable on \mathcal{X} \times \Omega.

\mathcal{C} is image-admissible Suslin if the class of functions \{ \mathbf{1}_C \mid C \in \mathcal{C} \} is image-admissible Suslin, where \mathbf{1}_C denotes the indicator function of the set C.

Theorems

The following two theorems give sufficient conditions for the weak and strong versions of the Glivenko-Cantelli property to be equivalent.

Theorem (Talagrand, 1987)[6]

Let \mathcal{F} be a class of functions that is integrable with respect to P, and define \mathcal{F}_0 = \{ f - P f \mid f \in \mathcal{F} \}. Then the following are equivalent:

\mathcal{F} is a weak Glivenko–Cantelli class and \mathcal{F}_0 is dominated by an integrable function;

\mathcal{F} is a Glivenko–Cantelli class.

Theorem (Dudley, Giné, and Zinn, 1991)[7]

Suppose that a function class \mathcal{F} is bounded. Also suppose that the set \mathcal{F}_0 = \{ f - \inf f \mid f \in \mathcal{F} \} is image-admissible Suslin. Then \mathcal{F} is a weak uniform Glivenko–Cantelli class if and only if it is a strong uniform Glivenko–Cantelli class.

The following theorem is central to statistical learning of binary classification tasks.

Theorem (Vapnik and Chervonenkis, 1968)[8]

Under certain consistency conditions, a universally measurable class of sets \mathcal{C} is a uniform Glivenko–Cantelli class if and only if it is a Vapnik–Chervonenkis class.

There exist a variety of consistency conditions for the equivalence of uniform Glivenko–Cantelli and Vapnik–Chervonenkis classes. In particular, either of the following conditions for a class \mathcal{C} suffices:[9]

\mathcal{C} is image-admissible Suslin.

\mathcal{C} is universally separable: there exists a countable subset \mathcal{C}_0 of \mathcal{C} such that each set C \in \mathcal{C} can be written as the pointwise limit of sets in \mathcal{C}_0.

Examples

Let S = \mathbb{R} and \mathcal{C} = \{ (-\infty, t] : t \in \mathbb{R} \}. The classical Glivenko–Cantelli theorem implies that this class is a universal GC class. Furthermore, by Kolmogorov's theorem,

\sup_{P \in \mathcal{P}(S, A)} \bigl\| P_n - P \bigr\|_{\mathcal{C}} \sim n^{-1/2},

that is, \mathcal{C} is a uniform Glivenko–Cantelli class.
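The n^{-1/2} rate can be illustrated by simulation. The sketch below (using the uniform distribution on [0, 1], so that the sup-distance over half-lines is exactly the Kolmogorov–Smirnov statistic) shows that \sqrt{n} times the average sup-distance stays roughly constant as n grows; the value of the constant is not asserted here.

    import numpy as np

    rng = np.random.default_rng(4)

    def ks_statistic(sample):
        """||P_n - P||_C for C = {(-inf, t]} and P = Uniform[0, 1]."""
        u = np.sort(sample)
        n = len(u)
        i = np.arange(1, n + 1)
        return max(np.max(i / n - u), np.max(u - (i - 1) / n))

    for n in [100, 400, 1_600, 6_400]:
        stats = [ks_statistic(rng.uniform(size=n)) for _ in range(200)]
        print(n, np.sqrt(n) * np.mean(stats))   # roughly constant across n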

Let S = \mathbb{R}, let P be a probability measure on S without atoms (for example, any continuous distribution), and let \mathcal{C} be the class of all finite subsets of S. Because A_n = \{ X_1, \ldots, X_n \} \in \mathcal{C}, P(A_n) = 0 and P_n(A_n) = 1, we have that \| P_n - P \|_{\mathcal{C}} = 1, and so \mathcal{C} is not a GC class with respect to P.
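This failure is easy to see numerically. The brief sketch below uses uniform data purely as an example of an atomless P: the finite set of observed points has empirical measure 1 but true measure 0, for every n.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 1_000
    sample = rng.uniform(size=n)

    A_n = set(sample.tolist())                                  # the finite set {X_1, ..., X_n}, a member of C
    P_n_of_A_n = np.mean([x in A_n for x in sample.tolist()])   # empirical measure of A_n
    P_of_A_n = 0.0                                              # any finite set has Lebesgue measure 0

    print(P_n_of_A_n, P_of_A_n)   # 1.0 versus 0.0, so ||P_n - P||_C = 1 for every n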

Notes and References

  1. Tucker, Howard G. (1959). "A Generalization of the Glivenko–Cantelli Theorem". The Annals of Mathematical Statistics. 30 (3): 828–830. doi:10.1214/aoms/1177706212. JSTOR 2237422.
  2. van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press. p. 279. ISBN 978-0-521-78450-4.
  3. van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press. ISBN 978-0-521-78450-4.
  4. Glivenko, V. (1933). "Sulla determinazione empirica delle leggi di probabilità" (in Italian). Giornale dell'Istituto Italiano degli Attuari. 4: 92–99.
  5. Cantelli, F. P. (1933). "Sulla determinazione empirica delle leggi di probabilità" (in Italian). Giornale dell'Istituto Italiano degli Attuari. 4: 421–424.
  6. Talagrand, M. (1987). "The Glivenko-Cantelli Problem". Annals of Probability. 15: 837–870. doi:10.1214/AOP/1176992069.
  7. Dudley, Richard M.; Giné, Eva; Zinn, Joel C. (1991). "Uniform and universal Glivenko-Cantelli classes". Journal of Theoretical Probability. 4: 485–510. doi:10.1007/BF01210321.
  8. Vapnik, V. N.; Chervonenkis, A. Ya. (1971). "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Theory of Probability & Its Applications. 16 (2): 264–280. doi:10.1137/1116025.
  9. Pestov, Vladimir (2011). "PAC learnability versus VC dimension: A footnote to a basic result of statistical learning". The 2011 International Joint Conference on Neural Networks. pp. 1141–1145. doi:10.1109/IJCNN.2011.6033352. arXiv:1104.2097.