Kolmogorov's two-series theorem

In probability theory, Kolmogorov's two-series theorem is a result about the convergence of random series. It follows from Kolmogorov's inequality and is used in one proof of the strong law of large numbers.

Statement of the theorem

Let $(X_n)_{n=1}^{\infty}$ be independent random variables with expected values $\mathrm{E}[X_n] = \mu_n$ and variances $\mathrm{Var}(X_n) = \sigma_n^2$, such that $\sum_{n=1}^{\infty} \mu_n$ converges in $\mathbb{R}$ and $\sum_{n=1}^{\infty} \sigma_n^2$ converges in $\mathbb{R}$. Then $\sum_{n=1}^{\infty} X_n$ converges in $\mathbb{R}$ almost surely.
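As a numerical illustration (not part of the original article), take $X_n = \varepsilon_n / n$ where the $\varepsilon_n$ are independent signs equal to $\pm 1$ with probability $1/2$ each. Then $\mu_n = 0$ and $\sigma_n^2 = 1/n^2$, so both series converge and the theorem guarantees that $\sum_n X_n$ converges almost surely. A minimal simulation sketch (the function name and parameters are illustrative choices):

```python
import random

def partial_sums(num_terms, seed):
    # X_n = eps_n / n with eps_n = +-1 equiprobable, so
    # mu_n = 0 and sigma_n^2 = 1/n^2; both series converge,
    # and the two-series theorem gives a.s. convergence of sum X_n.
    rng = random.Random(seed)
    s, sums = 0.0, []
    for n in range(1, num_terms + 1):
        s += rng.choice((-1.0, 1.0)) / n
        sums.append(s)
    return sums

# The oscillation of the partial sums over a far tail should be small,
# consistent with the partial sums forming a convergent sequence.
sums = partial_sums(100_000, seed=0)
tail = sums[50_000:]
print(max(tail) - min(tail))
```

Note that this only illustrates one sample path; almost-sure convergence means that, with probability 1, the path of partial sums settles down in this way.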

Proof

Assume, without loss of generality, that $\mu_n = 0$. Set $S_N = \sum_{n=1}^{N} X_n$; we will show that

$$\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = 0$$

with probability 1.

For every $m \in \mathbb{N}$,

$$\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = \limsup_{N\to\infty} \left( S_N - S_m \right) - \liminf_{N\to\infty} \left( S_N - S_m \right) \leq 2 \max_{k \in \mathbb{N}} \left| \sum_{i=m+1}^{m+k} X_i \right| .$$
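Two elementary observations justify this bound (they are standard, but spelled out here for completeness): shifting a sequence by the fixed random variable $S_m$ changes neither its $\limsup$ nor its $\liminf$ difference, and the oscillation of any real sequence is controlled by its supremum norm:

```latex
% For any real sequence (a_N):
%   \limsup_N a_N - \liminf_N a_N \le \sup_N a_N - \inf_N a_N \le 2 \sup_N |a_N|.
% Applied with a_N = S_N - S_m = \sum_{i=m+1}^{N} X_i (N > m),
% this gives the factor 2 and the maximum over block sums above.
\limsup_{N\to\infty} a_N - \liminf_{N\to\infty} a_N \leq 2 \sup_{N} |a_N|
</latex>
```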

Thus, for every $m \in \mathbb{N}$ and $\epsilon > 0$,

$$\begin{aligned}
\mathbb{P}\left( \limsup_{N\to\infty} \left( S_N - S_m \right) - \liminf_{N\to\infty} \left( S_N - S_m \right) \geq \epsilon \right)
&\leq \mathbb{P}\left( 2 \max_{k \in \mathbb{N}} \left| \sum_{i=m+1}^{m+k} X_i \right| \geq \epsilon \right) \\
&= \mathbb{P}\left( \max_{k \in \mathbb{N}} \left| \sum_{i=m+1}^{m+k} X_i \right| \geq \frac{\epsilon}{2} \right) \\
&\leq \limsup_{N\to\infty} 4 \epsilon^{-2} \sum_{i=m+1}^{m+N} \sigma_i^2 \\
&= 4 \epsilon^{-2} \lim_{N\to\infty} \sum_{i=m+1}^{m+N} \sigma_i^2 ,
\end{aligned}$$

where the second inequality is due to Kolmogorov's inequality.
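For reference, the step invoking Kolmogorov's inequality uses it in its standard form (stated here for the reader's convenience; it is not restated in the original text):

```latex
% Kolmogorov's inequality: for independent X_1, ..., X_n with
% E[X_i] = 0 and Var(X_i) = \sigma_i^2 < \infty, and S_k = X_1 + ... + X_k,
\mathbb{P}\left( \max_{1 \leq k \leq n} \left| S_k \right| \geq \lambda \right)
  \leq \frac{1}{\lambda^2} \operatorname{Var}(S_n)
  = \frac{1}{\lambda^2} \sum_{i=1}^{n} \sigma_i^2 .
```

Applying it to the independent block $X_{m+1}, \dots, X_{m+N}$ with $\lambda = \epsilon/2$ produces the factor $4\epsilon^{-2}$ appearing in the chain of inequalities.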

By the assumption that $\sum_{n=1}^{\infty} \sigma_n^2$ converges, the last term tends to 0 as $m \to \infty$, for every $\epsilon > 0$. Hence $\limsup_{N\to\infty} S_N - \liminf_{N\to\infty} S_N = 0$ with probability 1, so the partial sums $S_N$ converge almost surely; that is, $\sum_{n=1}^{\infty} X_n$ converges in $\mathbb{R}$ almost surely.
