In probability theory, Cantelli's inequality (also called the Chebyshev–Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds.[1][2][3] The inequality states that, for $\lambda > 0$,

$$\Pr(X - E[X] \ge \lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2},$$

where $X$ is a real-valued random variable, $\Pr$ is the probability measure, $E[X]$ is the expected value of $X$, and $\sigma^2$ is the variance of $X$.
Applying the Cantelli inequality to $-X$ gives a bound on the lower tail:

$$\Pr(X - E[X] \le -\lambda) \le \frac{\sigma^2}{\sigma^2 + \lambda^2}.$$
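As a quick numerical sanity check (not part of the source), the following Python sketch estimates both one-sided tail probabilities by Monte Carlo and compares them with the Cantelli bound; the exponential distribution and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential(1) as an arbitrary test distribution: mean 1, variance 1.
n = 1_000_000
x = rng.exponential(scale=1.0, size=n)
mean, var = x.mean(), x.var()

for lam in [0.5, 1.0, 2.0]:
    bound = var / (var + lam**2)        # Cantelli bound for either tail
    upper = np.mean(x - mean >= lam)    # empirical upper-tail probability
    lower = np.mean(x - mean <= -lam)   # empirical lower-tail probability
    print(f"lambda={lam}: bound={bound:.4f}, "
          f"P(X-E[X]>=lam)~{upper:.4f}, P(X-E[X]<=-lam)~{lower:.4f}")
```

Both empirical frequencies stay below the common bound, as the inequality and its reflection to $-X$ require.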
While the inequality is often attributed to Francesco Paolo Cantelli, who published it in 1928,[4] it originates in Chebyshev's work of 1874.[5] When bounding the event that a random variable deviates from its mean in only one direction (positive or negative), Cantelli's inequality gives an improvement over Chebyshev's inequality. The Chebyshev inequality has "higher moments versions" and "vector versions", and so does the Cantelli inequality.
For one-sided tail bounds, Cantelli's inequality is better, since Chebyshev's inequality can only get

$$\Pr(X - E[X] \ge \lambda) \le \Pr(|X - E[X]| \ge \lambda) \le \frac{\sigma^2}{\lambda^2}.$$
On the other hand, for two-sided tail bounds, Cantelli's inequality gives

$$\Pr(|X - E[X]| \ge \lambda) = \Pr(X - E[X] \ge \lambda) + \Pr(X - E[X] \le -\lambda) \le \frac{2\sigma^2}{\sigma^2 + \lambda^2},$$

which is always worse than Chebyshev's inequality when $\lambda \ge \sigma$ (for $\lambda < \sigma$, both bounds exceed $1$ and are therefore uninformative).
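These comparisons are easy to tabulate. The following Python sketch (the unit-variance setting is an illustrative assumption, not from the source) prints the one-sided Cantelli bound, the Chebyshev bound, and the doubled two-sided Cantelli bound for a few values of $\lambda$.

```python
# Compare Cantelli and Chebyshev bounds, taking sigma^2 = 1 for illustration.
sigma2 = 1.0
for lam in [0.5, 1.0, 1.5, 2.0, 3.0]:
    cantelli_one = sigma2 / (sigma2 + lam**2)      # one-sided Cantelli bound
    chebyshev = sigma2 / lam**2                    # Chebyshev bound
    cantelli_two = 2 * sigma2 / (sigma2 + lam**2)  # doubled (two-sided) Cantelli
    print(f"lambda={lam}: Cantelli one-sided={cantelli_one:.3f}, "
          f"Chebyshev={chebyshev:.3f}, Cantelli two-sided={cantelli_two:.3f}")
```

For $\lambda \ge \sigma$ the two-sided Cantelli value is never below the Chebyshev value, matching the remark above.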
To prove the inequality, let $X$ be a real-valued random variable with mean $\mu$ and finite variance $\sigma^2$, and define $Y = X - E[X]$, so that $E[Y] = 0$ and $\operatorname{Var}(Y) = \sigma^2$. Then, for any $u \ge 0$,

$$\Pr(X - E[X] \ge \lambda) = \Pr(Y \ge \lambda) = \Pr(Y + u \ge \lambda + u) \le \Pr\left((Y+u)^2 \ge (\lambda+u)^2\right) \le \frac{E[(Y+u)^2]}{(\lambda+u)^2} = \frac{\sigma^2 + u^2}{(\lambda+u)^2},$$

where the last inequality is Markov's inequality applied to the non-negative random variable $(Y+u)^2$, and the final equality uses $E[(Y+u)^2] = E[Y^2] + 2uE[Y] + u^2 = \sigma^2 + u^2$.
Since this bound holds for every $u \ge 0$, we may minimize the right-hand side over $u$. The function

$$u \ge 0 \mapsto \frac{\sigma^2 + u^2}{(\lambda+u)^2}$$

attains its minimum at $u^\ast = \frac{\sigma^2}{\lambda}$. Substituting this value gives

$$\Pr(X - E[X] \ge \lambda) \le \frac{\sigma^2 + (\sigma^2/\lambda)^2}{\left(\lambda + \sigma^2/\lambda\right)^2} = \frac{\sigma^2}{\lambda^2 + \sigma^2}$$

for any $\lambda > 0$, which completes the proof.
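The optimization step can be verified symbolically. This SymPy sketch (the use of SymPy is my choice, not the source's) differentiates the bound with respect to $u$, recovers the critical point $u^\ast = \sigma^2/\lambda$, and confirms the minimized value $\sigma^2/(\sigma^2 + \lambda^2)$.

```python
import sympy as sp

u, lam, sigma = sp.symbols("u lambda sigma", positive=True)

# Right-hand side of the proof's bound, as a function of the shift u.
bound = (sigma**2 + u**2) / (lam + u) ** 2

# Solve d(bound)/du = 0 for the minimizing shift.
critical_points = sp.solve(sp.diff(bound, u), u)
print(critical_points)  # [sigma**2/lambda]

# Substitute u* = sigma^2/lambda and simplify to the Cantelli bound.
u_star = sigma**2 / lam
print(sp.simplify(bound.subs(u, u_star)))  # sigma**2/(lambda**2 + sigma**2)
```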
Various stronger inequalities can be shown. He, Zhang, and Zhang showed[6] (Corollary 2.3) that when $E[X] = 0$, $E[X^2] = 1$ and $\lambda \ge 0$,

$$\Pr(X \ge \lambda) \le 1 - (2\sqrt{3} - 3)\,\frac{(1+\lambda^2)^2}{E[X^4] + 6\lambda^2 + \lambda^4}.$$
In the case $\lambda = 0$, applying this bound to $-X$ (which has the same first, second, and fourth moments) yields the lower bound

$$\Pr(X \ge 0) \ge \frac{2\sqrt{3} - 3}{E[X^4]}.$$

This improves over Cantelli's inequality in that it gives a non-zero lower bound, even when $E[X] = 0$.
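As an illustration (the Rademacher distribution here is my choice, not from the source), a random variable taking the values $\pm 1$ with equal probability has $E[X] = 0$, $E[X^2] = 1$, and $E[X^4] = 1$, so the lower bound evaluates to $2\sqrt{3} - 3 \approx 0.464$, consistent with the true value $\Pr(X \ge 0) = 1/2$:

```python
import math

import numpy as np

rng = np.random.default_rng(1)

# Rademacher variable: +1 or -1 with equal probability.
# Moments: E[X] = 0, E[X^2] = 1, E[X^4] = 1.
x = rng.choice([-1.0, 1.0], size=1_000_000)

fourth_moment = np.mean(x**4)                 # ~1.0
lower_bound = (2 * math.sqrt(3) - 3) / fourth_moment
empirical = np.mean(x >= 0)                   # ~0.5

print(f"lower bound = {lower_bound:.4f}")     # ~0.4641
print(f"P(X >= 0)  ~= {empirical:.4f}")       # above the bound
assert empirical >= lower_bound
```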