Kolmogorov's inequality explained

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.

Statement of the inequality

Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

\Pr\left(\max_{1 \le k \le n} |S_k| \ge \lambda\right) \le \frac{1}{\lambda^2} \operatorname{Var}[S_n] \equiv \frac{1}{\lambda^2} \sum_{k=1}^n \operatorname{Var}[X_k] = \frac{1}{\lambda^2} \sum_{k=1}^n E[X_k^2],

where Sk = X1 + ... + Xk.

The convenience of this result is that it bounds the worst-case deviation of a random walk at any intermediate time in terms of its variance at the end of the time interval.
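For instance, if the Xk are i.i.d. with mean zero and variance σ², the bound reads Pr(max1≤k≤n |Sk| ≥ λ) ≤ nσ²/λ². The following minimal Python sketch (an illustration added here, not part of the original article) estimates the left-hand side by simulation for standard normal steps and compares it with the bound; n, trials, and lam are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials, lam = 100, 100_000, 25.0   # arbitrary illustrative parameters

    # i.i.d. mean-zero steps with unit variance, so Var[S_n] = n
    X = rng.standard_normal((trials, n))
    S = np.cumsum(X, axis=1)              # partial sums S_1, ..., S_n per trial
    max_dev = np.abs(S).max(axis=1)       # max_{1<=k<=n} |S_k|

    empirical = (max_dev >= lam).mean()   # empirical Pr(max |S_k| >= lam)
    bound = n / lam**2                    # Kolmogorov bound: Var[S_n] / lam^2
    print(f"empirical: {empirical:.4f}   bound: {bound:.4f}")

The empirical frequency typically comes out well below the bound, which trades tightness for generality.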

Proof

The following argument employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence S1, S2, ..., Sn is a martingale. Define Z0, Z1, ..., Zn as follows. Let Z0 = 0, and

Z_{i+1} = \left\{ \begin{array}{ll} S_{i+1} & \text{if } \displaystyle\max_{1 \le j \le i} |S_j| < \lambda \\ Z_i & \text{otherwise} \end{array} \right.

for all i. Then Z0, Z1, ..., Zn is also a martingale: it is the martingale Si stopped at the first time |Si| ≥ λ, and a stopped martingale is again a martingale.
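In code, the construction amounts to freezing the running sums at the first exceedance. A short Python sketch (a hypothetical helper added for illustration, not from the article):

    def stopped_sums(S, lam):
        """Return Z_1, ..., Z_n: the partial sums S frozen at the first |S_i| >= lam.

        Mirrors the proof's definition: Z_{i+1} = S_{i+1} while max_{1<=j<=i} |S_j| < lam,
        and Z_{i+1} = Z_i afterwards. (Hypothetical helper, not from the article.)
        """
        Z, frozen = [], None
        for s in S:
            if frozen is None:
                Z.append(s)           # not yet stopped: Z_i = S_i
                if abs(s) >= lam:
                    frozen = s        # first time |S_i| >= lam: freeze here
            else:
                Z.append(frozen)      # after stopping, Z stays constant
        return Z

For example, stopped_sums([1, -2, 4, 3], 3) returns [1, -2, 4, 4]: the sequence tracks the partial sums until |S3| = 4 ≥ 3, then stays there.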

For any martingale Mi with M0 = 0, we have
\begin{align} \sum_{i=1}^n E[(M_i - M_{i-1})^2] &= \sum_{i=1}^n E\left[M_i^2 - 2 M_i M_{i-1} + M_{i-1}^2\right] \\ &= \sum_{i=1}^n E\left[M_i^2 - 2 (M_{i-1} + M_i - M_{i-1}) M_{i-1} + M_{i-1}^2\right] \\ &= \sum_{i=1}^n E\left[M_i^2 - M_{i-1}^2\right] - 2 E\left[M_{i-1}(M_i - M_{i-1})\right] \\ &= E[M_n^2] - E[M_0^2] = E[M_n^2]. \end{align}

The final equality uses the martingale property, E[Mi−1(Mi − Mi−1)] = E[Mi−1 E[Mi − Mi−1 | M1, ..., Mi−1]] = 0, so the cross terms vanish and the remaining sum telescopes.
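As a quick numerical sanity check (added here, not from the article), this identity holds even when the increments are dependent. A Python sketch using a hypothetical martingale that doubles its stake whenever the running total is negative, built from fair ±1 coin flips:

    import numpy as np

    rng = np.random.default_rng(1)
    n, trials = 50, 200_000               # arbitrary illustrative parameters

    # Martingale with history-dependent increments: stake 2 when the running
    # total is negative, stake 1 otherwise; eps is a fair +/-1 coin flip, so
    # E[M_i | M_1, ..., M_{i-1}] = M_{i-1}.
    M = np.zeros((trials, n + 1))
    for i in range(1, n + 1):
        stake = np.where(M[:, i - 1] < 0, 2.0, 1.0)
        eps = rng.choice([-1.0, 1.0], size=trials)
        M[:, i] = M[:, i - 1] + stake * eps

    lhs = (np.diff(M, axis=1) ** 2).sum(axis=1).mean()  # sum_i E[(M_i - M_{i-1})^2]
    rhs = (M[:, -1] ** 2).mean()                        # E[M_n^2]
    print(f"sum of squared increments: {lhs:.2f}   E[M_n^2]: {rhs:.2f}")

Both estimates agree up to Monte Carlo error, even though the increments here are not independent.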

Applying this result to the martingale S0, S1, ..., Sn (with S0 = 0), we have

\begin{align} \Pr\left(\max_{1 \le i \le n} |S_i| \ge \lambda\right) &= \Pr[|Z_n| \ge \lambda] \\ &\le \frac{1}{\lambda^2} E[Z_n^2] = \frac{1}{\lambda^2} \sum_{i=1}^n E[(Z_i - Z_{i-1})^2] \\ &\le \frac{1}{\lambda^2} \sum_{i=1}^n E[(S_i - S_{i-1})^2] = \frac{1}{\lambda^2} E[S_n^2] = \frac{1}{\lambda^2} \operatorname{Var}[S_n], \end{align}

where the first inequality follows by Chebyshev's inequality, and the second because each increment Zi − Zi−1 is either the corresponding increment Si − Si−1 or zero. The initial equality holds because, by construction, |Zn| ≥ λ exactly when some partial sum Si with i ≤ n satisfies |Si| ≥ λ.

This inequality was generalized by Hájek and Rényi in 1955.
