In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) provides an upper bound on the probability of deviation of a random variable (with finite variance) from its mean. More specifically, the probability that a random variable deviates from its mean by more than k\sigma is at most 1/k^2, where k is any positive constant and \sigma is the standard deviation (the square root of the variance).
In statistics, the rule is often called Chebyshev's theorem, concerning the range of standard deviations around the mean. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers.
Its practical usage is similar to the 68–95–99.7 rule, which applies only to normal distributions. Chebyshev's inequality is more general, stating that a minimum of just 75% of values must lie within two standard deviations of the mean and 88.89% within three standard deviations for a broad range of different probability distributions.[1] [2]
The term Chebyshev's inequality may also refer to Markov's inequality, especially in the context of analysis. They are closely related, and some authors refer to Markov's inequality as "Chebyshev's First Inequality," and the similar one referred to on this page as "Chebyshev's Second Inequality."
Chebyshev's inequality is tight in the sense that for each chosen positive constant, there exists a random variable such that the inequality is in fact an equality.[3]
The theorem is named after Russian mathematician Pafnuty Chebyshev, although it was first formulated by his friend and colleague Irénée-Jules Bienaymé.[4] The theorem was first proved by Bienaymé in 1853[5] and more generally proved by Chebyshev in 1867.[6] [7] His student Andrey Markov provided another proof in his 1884 Ph.D. thesis.[8]
Chebyshev's inequality is usually stated for random variables, but can be generalized to a statement about measure spaces.
Let X (integrable) be a random variable with finite non-zero variance σ² (and thus finite expected value μ).[9] Then for any real number k > 0,
\Pr(|X-\mu|\geq k\sigma)\leq \frac{1}{k^2}.
Only the case k>1 is useful: when k\leq 1 the right-hand side \frac{1}{k^2}\geq 1, so the inequality merely states that a probability is at most 1.
As an example, using k=\sqrt{2} shows that the probability that values lie outside the interval (\mu-\sqrt{2}\sigma,\mu+\sqrt{2}\sigma) does not exceed \frac{1}{2}; equivalently, at least \frac{1}{2} of the values lie within \sqrt{2} standard deviations of the mean.
Because it can be applied to completely arbitrary distributions provided they have a known finite mean and variance, the inequality generally gives a poor bound compared to what might be deduced if more aspects are known about the distribution involved.
k | Min. % within k standard deviations of mean | Max. % beyond k standard deviations from mean
---|---|---
1 | 0% | 100%
√2 | 50% | 50%
1.5 | 55.56% | 44.44%
2 | 75% | 25%
2√2 | 87.5% | 12.5%
3 | 88.8889% | 11.1111%
4 | 93.75% | 6.25%
5 | 96% | 4%
6 | 97.2222% | 2.7778%
7 | 97.9592% | 2.0408%
8 | 98.4375% | 1.5625%
9 | 98.7654% | 1.2346%
10 | 99% | 1%
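As a rough illustration (not part of the source; the exponential distribution, sample size and seed are arbitrary choices), the following sketch compares the Chebyshev bound 1/k^2 with the empirical tail probability of a skewed distribution:

```python
# A rough Monte Carlo check (not from the source): compare the Chebyshev bound
# 1/k^2 with the actual tail probability P(|X - mu| >= k*sigma) for an
# exponential distribution, using only the standard library.
import math
import random

random.seed(0)
n_samples = 200_000
samples = [random.expovariate(1.0) for _ in range(n_samples)]  # mean 1, sigma 1

mu = sum(samples) / n_samples
sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n_samples)

for k in (1.5, 2.0, 3.0):
    tail = sum(abs(x - mu) >= k * sigma for x in samples) / n_samples
    print(f"k={k}: empirical tail {tail:.4f} <= Chebyshev bound {1 / k**2:.4f}")
```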
Let (X, Σ, μ) be a measure space, and let f be an extended real-valued measurable function defined on X. Then for any real number t > 0 and 0 < p < ∞,
\mu(\{x\in X:|f(x)|\geq t\})\leq\frac{1}{t^p}\int_X|f|^p\,d\mu.
More generally, if g is an extended real-valued measurable function, nonnegative and nondecreasing, with g(t) ≠ 0, then
\mu(\{x\in X:f(x)\geq t\})\leq\frac{1}{g(t)}\int_X g\circ f\,d\mu.
This statement follows from the Markov inequality,
\mu(\{x\in X:|F(x)|\geq\varepsilon\})\leq\frac{1}{\varepsilon}\int_X|F|\,d\mu,
applied with F=g\circ f and \varepsilon=g(t), together with the observation that
\mu(\{x\in X:g\circ f(x)\geq g(t)\})=\mu(\{x\in X:f(x)\geq t\}).
The previous statement then follows by defining g(x) as |x|^p if x\ge t and 0 otherwise.
Suppose we randomly select a journal article from a source with an average of 1000 words per article, with a standard deviation of 200 words. We can then infer that the probability that it has between 600 and 1400 words (i.e. within k=2 standard deviations of the mean) must be at least 75%, because by Chebyshev's inequality there is no more than a 1/k^2=1/4 chance of falling outside that range.
As shown in the example above, the theorem typically provides rather loose bounds. However, these bounds cannot in general (remaining true for arbitrary distributions) be improved upon. The bounds are sharp for the following example: for any k ≥ 1,
X=\begin{cases} -1, & \text{with probability } \frac{1}{2k^2} \\ 0, & \text{with probability } 1-\frac{1}{k^2} \\ 1, & \text{with probability } \frac{1}{2k^2} \end{cases}
For this distribution, the mean μ = 0 and the standard deviation σ = 1/k, so
\Pr(|X-\mu|\ge k\sigma)=\Pr(|X|\ge 1)=\frac{1}{k^2}.
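A quick check (illustrative only; k = 2 is an arbitrary choice) that this three-point distribution attains the bound with equality:

```python
# Direct check (illustrative, with k = 2) that the three-point distribution
# above attains the Chebyshev bound with equality.
k = 2.0
p = 1.0 / (2 * k**2)
values = [-1.0, 0.0, 1.0]
probs = [p, 1 - 2 * p, p]

mean = sum(v * q for v, q in zip(values, probs))               # 0
var = sum((v - mean) ** 2 * q for v, q in zip(values, probs))  # 1/k^2
sigma = var ** 0.5                                             # 1/k

tail = sum(q for v, q in zip(values, probs) if abs(v - mean) >= k * sigma)
print(tail, 1 / k**2)  # both equal 0.25
```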
Markov's inequality states that for any real-valued random variable Y and any positive number a, we have \Pr(|Y|\geq a)\leq E[|Y|]/a. One way to prove Chebyshev's inequality is to apply Markov's inequality to the random variable Y=(X-\mu)^2 with a=(k\sigma)^2:
\Pr(|X-\mu|\geq k\sigma)=\Pr((X-\mu)^2\geq k^2\sigma^2)\leq\frac{E[(X-\mu)^2]}{k^2\sigma^2}=\frac{\sigma^2}{k^2\sigma^2}=\frac{1}{k^2}.
It can also be proved directly using conditional expectation:
\begin{align}
\sigma^2&=E[(X-\mu)^2]\\
&=E[(X-\mu)^2\mid k\sigma\leq|X-\mu|]\Pr[k\sigma\leq|X-\mu|]+E[(X-\mu)^2\mid k\sigma>|X-\mu|]\Pr[k\sigma>|X-\mu|]\\
&\geq(k\sigma)^2\Pr[k\sigma\leq|X-\mu|]+0\cdot\Pr[k\sigma>|X-\mu|]\\
&=k^2\sigma^2\Pr[k\sigma\leq|X-\mu|],
\end{align}
and dividing both sides by k^2\sigma^2 yields Chebyshev's inequality.
Chebyshev's inequality can also be obtained directly from a simple comparison of areas, starting from the representation of an expected value as the difference of two improper Riemann integrals (last formula in the definition of expected value for arbitrary real-valued random variables).[10]
Several extensions of Chebyshev's inequality have been developed.
Selberg derived a generalization to arbitrary intervals.[11] Suppose X is a random variable with mean μ and variance σ². Selberg's inequality states[12] that if \beta\geq\alpha\geq 0, then
\Pr(X\in[\mu-\alpha,\mu+\beta])\ge\begin{cases} \frac{\alpha^2}{\alpha^2+\sigma^2} & \text{if } \alpha(\beta-\alpha)\geq 2\sigma^2 \\ \frac{4\alpha\beta-4\sigma^2}{(\alpha+\beta)^2} & \text{if } 2\alpha\beta\geq 2\sigma^2\geq\alpha(\beta-\alpha) \\ 0 & \text{if } \sigma^2\geq\alpha\beta \end{cases}
When \alpha=\beta, Selberg's inequality reduces to Chebyshev's inequality.
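A minimal sketch (the function name and the numerical example are assumptions, not from the source) that evaluates Selberg's lower bound case by case and confirms the reduction to Chebyshev's bound in the symmetric case:

```python
# A sketch (function name and example values assumed) evaluating Selberg's
# lower bound for P(X in [mu - alpha, mu + beta]) by the three cases above.
def selberg_lower_bound(alpha: float, beta: float, sigma2: float) -> float:
    assert beta >= alpha >= 0
    if alpha * (beta - alpha) >= 2 * sigma2:
        return alpha**2 / (alpha**2 + sigma2)
    if 2 * alpha * beta >= 2 * sigma2 >= alpha * (beta - alpha):
        return (4 * alpha * beta - 4 * sigma2) / (alpha + beta) ** 2
    return 0.0  # remaining case, sigma2 >= alpha * beta

# Symmetric interval (alpha == beta == k*sigma) recovers Chebyshev's bound:
print(selberg_lower_bound(2.0, 2.0, 1.0))  # 0.75, i.e. 1 - 1/k^2 with k = 2
```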
See main article: Multidimensional Chebyshev's inequality.
Chebyshev's inequality naturally extends to the multivariate setting, where one has n random variables X_i with mean \mu_i and variance \sigma_i^2. Then the following inequality holds:
\Pr\left(\sum_{i=1}^n(X_i-\mu_i)^2\ge k^2\sum_{i=1}^n\sigma_i^2\right)\le\frac{1}{k^2}
This is known as the Birnbaum–Raymond–Zuckerman inequality after the authors who proved it for two dimensions.[14] This result can be rewritten in terms of vectors X = (X_1, X_2, ...) with mean μ = (μ_1, μ_2, ...) and standard deviation σ = (σ_1, σ_2, ...), in the Euclidean norm \|\cdot\|.[15]
\Pr(\|X-\mu\|\ge k\|\sigma\|)\le\frac{1}{k^2}.
One can also get a similar infinite-dimensional Chebyshev's inequality. A second related inequality has also been derived by Chen.[16] Let n be the dimension of the stochastic vector X and let E(X) be the mean of X. Let S be the covariance matrix and k > 0. Then
\Pr\left((X-\operatorname{E}(X))^T S^{-1}(X-\operatorname{E}(X))<k\right)\ge 1-\frac{n}{k}
where Y^T is the transpose of Y. The inequality can be written in terms of the Mahalanobis distance as
\Pr\left(d_S^2(X,\operatorname{E}(X))<k\right)\ge 1-\frac{n}{k}
where the Mahalanobis distance based on S is defined by
d_S(x,y)=\sqrt{(x-y)^T S^{-1}(x-y)}
Navarro[17] proved that these bounds are sharp, that is, they are the best possible bounds for that regions when we just know the mean and the covariance matrix of X.
Stellato et al.[18] showed that this multivariate version of the Chebyshev inequality can be easily derived analytically as a special case of Vandenberghe et al.[19] where the bound is computed by solving a semidefinite program (SDP).
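The following is an illustrative Monte Carlo check (an assumed example, not from the cited works) of Chen's bound \Pr(d_S^2(X,\operatorname{E}(X))<k)\ge 1-n/k using a correlated two-dimensional Gaussian:

```python
# Illustrative Monte Carlo check of P(d_S^2(X, E(X)) < k) >= 1 - n/k for a
# correlated 2-D Gaussian (dimensions, covariance and seed chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(0)
n = 2
mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.6], [0.6, 1.0]])
X = rng.multivariate_normal(mean, cov, size=100_000)

S_inv = np.linalg.inv(cov)
d2 = np.einsum("ij,jk,ik->i", X - mean, S_inv, X - mean)  # squared Mahalanobis distances

for k in (4.0, 8.0, 16.0):
    print(f"k={k}: P(d^2 < k) = {np.mean(d2 < k):.4f} >= {1 - n / k:.4f}")
```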
If the variables are independent this inequality can be sharpened.[20]
\Pr\left(\bigcap_{i=1}^n\frac{|X_i-\mu_i|}{\sigma_i}\le k_i\right)\ge\prod_{i=1}^n\left(1-\frac{1}{k_i^2}\right)
Berge derived an inequality for two correlated variables X_1, X_2.[21] Let ρ be the correlation coefficient between X_1 and X_2 and let σ_i^2 be the variance of X_i. Then
\Pr\left(\bigcap_{i=1}^2\left[\frac{|X_i-\mu_i|}{\sigma_i}<k\right]\right)\ge 1-\frac{1+\sqrt{1-\rho^2}}{k^2}.
This result can be sharpened to having different bounds for the two random variables[22] and having asymmetric bounds, as in Selberg's inequality.[23]
Olkin and Pratt derived an inequality for correlated variables.[24]
\Pr\left(\bigcap_{i=1}^n\frac{|X_i-\mu_i|}{\sigma_i}<k_i\right)\ge 1-\frac{1}{n^2}\left(\sqrt{u}+\sqrt{n-1}\sqrt{n\sum_i\frac{1}{k_i^2}-u}\right)^2
where the sum is taken over the n variables and
u=\sum_{i=1}^n\frac{1}{k_i^2}+2\sum_{i=1}^n\sum_{j<i}\frac{\rho_{ij}}{k_i k_j}
where \rho_{ij} is the correlation between X_i and X_j.
Olkin and Pratt's inequality was subsequently generalised by Godwin.[25]
Mitzenmacher and Upfal[26] note that by applying Markov's inequality to the nonnegative variable |X-\operatorname{E}(X)|^n, one can obtain a family of tail bounds:
\Pr\left(|X-\operatorname{E}(X)|\ge k\operatorname{E}\left(|X-\operatorname{E}(X)|^n\right)^{1/n}\right)\le\frac{1}{k^n},\qquad k>0,\ n\geq 2.
For n = 2 we obtain Chebyshev's inequality. For k ≥ 1, n > 4 and assuming that the nth moment exists, this bound is tighter than Chebyshev's inequality. This strategy, called the method of moments, is often used to prove tail bounds.
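As an illustration (not from the source), the displayed bound is Markov's inequality applied to |X-\operatorname{E}(X)|^n, i.e. \Pr(|X-\mu|\ge t)\le\operatorname{E}|X-\mu|^n/t^n at a fixed threshold t. The sketch below (the distribution, threshold t = 3 and sample size are arbitrary choices) estimates these moment bounds for a standard normal and shows how higher moments tighten the bound:

```python
# Estimate the moment bounds E|X - mu|^n / t^n for a standard normal at t = 3;
# the n = 2 case reproduces Chebyshev's 1/9, and larger n tightens the bound.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1_000_000)
t = 3.0

for n in (2, 4, 6, 8):
    bound = np.mean(np.abs(x) ** n) / t**n
    print(f"n={n}: moment bound {bound:.5f}")
print(f"true tail {np.mean(np.abs(x) >= t):.5f}")
```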
A related inequality sometimes known as the exponential Chebyshev's inequality[27] is the inequality
\Pr(X\ge\varepsilon)\le e^{-t\varepsilon}\operatorname{E}\left(e^{tX}\right),\qquad t>0.
Let K(t) be the cumulant generating function,
K(t)=\log\left(\operatorname{E}\left(e^{tX}\right)\right).
Taking the Legendre–Fenchel transformation of K(t) and using the exponential Chebyshev's inequality we have
-\log(\Pr(X\ge\varepsilon))\ge\sup_t(t\varepsilon-K(t)).
This inequality may be used to obtain exponential inequalities for unbounded variables.[28]
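A minimal sketch, assuming X is standard normal so that K(t)=t^2/2 and the supremum of t\varepsilon-K(t) equals \varepsilon^2/2 (attained at t=\varepsilon); the resulting exponential bound e^{-\varepsilon^2/2} is compared with the ordinary Chebyshev bound 1/\varepsilon^2 (valid here since μ = 0 and σ = 1):

```python
# Exponential (Chernoff-type) Chebyshev bound for a standard normal, compared
# with the plain Chebyshev bound 1/eps^2.
import math

def exponential_chebyshev_bound(eps: float) -> float:
    # Legendre-Fenchel transform of K(t) = t^2/2, maximised at t = eps.
    return math.exp(-(eps**2) / 2)

for eps in (1.0, 2.0, 3.0):
    print(eps, exponential_chebyshev_bound(eps), 1 / eps**2)
```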
If P(x) has finite support based on the interval [-M, M], let M = max(|x|), where |x| is the absolute value of x. If the mean of P(x) is zero then for all k > 0[29]
\frac{\operatorname{E}(|X|^r)-k^r}{M^r}\le\Pr(|X|\ge k)\le\frac{\operatorname{E}(|X|^r)}{k^r}.
The second of these inequalities with r = 2 is the Chebyshev bound. The first provides a lower bound for the value of P(x).
Saw et al extended Chebyshev's inequality to cases where the population mean and variance are not known and may not exist, but the sample mean and sample standard deviation from N samples are to be employed to bound the expected value of a new drawing from the same distribution.[30] The following simpler version of this inequality is given by Kabán.[31]
\Pr(|X-m|\ge ks)\le\frac{1}{N+1}\left\lfloor\frac{N+1}{N}\left(\frac{N-1}{k^2}+1\right)\right\rfloor
where X is a random variable which we have sampled N times, m is the sample mean, k is a constant and s is the sample standard deviation.
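A minimal sketch (the function name is an assumption) evaluating Kabán's simplified bound as displayed above; the exact Saw–Yang–Mo bound quoted elsewhere in this article can differ slightly from these values:

```python
# Evaluate Kaban's simplified finite-sample bound from N and k.
import math

def kaban_bound(N: int, k: float) -> float:
    inner = (N + 1) / N * ((N - 1) / k**2 + 1)
    return min(1.0, math.floor(inner) / (N + 1))

# Finite-sample bound versus the distributional Chebyshev bound 1/k^2:
print(kaban_bound(100, 3.0), 1 / 3.0**2)
```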
This inequality holds even when the population moments do not exist, and when the sample is only weakly exchangeably distributed; this criterion is met for randomised sampling. A table of values for the Saw–Yang–Mo inequality for finite sample sizes (N < 100) has been determined by Konijn.[32] The table allows the calculation of various confidence intervals for the mean, based on multiples, C, of the standard error of the mean as calculated from the sample. For example, Konijn shows that for N = 59, the 95 percent confidence interval for the mean m is m plus or minus C standard errors, for a tabulated value of C; this value is 2.28 times larger than the one found on the assumption of normality, showing the loss of precision resulting from ignorance of the precise nature of the distribution.
An equivalent inequality can be derived in terms of the sample mean instead,
\Pr(|X-m|\ge km)\le\frac{N-1}{N}\frac{1}{k^2}\frac{s^2}{m^2}+\frac{1}{N}.
For fixed N and large m the Saw–Yang–Mo inequality is approximately[33]
\Pr(|X-m|\ge ks)\le\frac{1}{N+1}.
Beasley et al have suggested a modification of this inequality
\Pr(|X-m|\ge ks)\le\frac{1}{k^2(N+1)}.
In empirical testing this modification is conservative but appears to have low statistical power. Its theoretical basis currently remains unexplored.
The bounds these inequalities give on a finite sample are less tight than those the Chebyshev inequality gives for a distribution. To illustrate this let the sample size N = 100 and let k = 3. Chebyshev's inequality states that at most approximately 11.11% of the distribution will lie at least three standard deviations away from the mean. Kabán's version of the inequality for a finite sample states that at most approximately 12.05% of the sample lies outside these limits. The dependence of the confidence intervals on sample size is further illustrated below.
For N = 10, the 95% confidence interval is approximately ±13.5789 standard deviations.
For N = 100 the 95% confidence interval is approximately ±4.9595 standard deviations; the 99% confidence interval is approximately ±140.0 standard deviations.
For N = 500 the 95% confidence interval is approximately ±4.5574 standard deviations; the 99% confidence interval is approximately ±11.1620 standard deviations.
For N = 1000 the 95% and 99% confidence intervals are approximately ±4.5141 and approximately ±10.5330 standard deviations respectively.
The Chebyshev inequality for the distribution gives 95% and 99% confidence intervals of approximately ±4.472 standard deviations and ±10 standard deviations respectively.
See main article: Samuelson's inequality. Although Chebyshev's inequality is the best possible bound for an arbitrary distribution, this is not necessarily true for finite samples. Samuelson's inequality states that all values of a sample must lie within \sqrt{N-1} sample standard deviations of the mean.
By comparison, Chebyshev's inequality states that all but a 1/N fraction of the sample will lie within \sqrt{N} standard deviations of the mean. Since there are N samples, this means that no samples will lie outside \sqrt{N} standard deviations of the mean, which is worse than Samuelson's inequality. However, the benefit of Chebyshev's inequality is that it can be applied more generally to get confidence bounds for ranges of standard deviations that do not depend on the number of samples.
An alternative method of obtaining sharper bounds is through the use of semivariances (partial variances). The upper (\sigma_+^2) and lower (\sigma_-^2) semivariances are defined as
\sigma_+^2=\frac{\sum_{x>m}(x-m)^2}{n-1},
\sigma_-^2=\frac{\sum_{x<m}(m-x)^2}{n-1},
where m is the arithmetic mean of the sample and n is the number of elements in the sample.
The variance of the sample is the sum of the two semivariances:
\sigma^2=\sigma_+^2+\sigma_-^2.
In terms of the lower semivariance Chebyshev's inequality can be written[34]
\Pr(x\le m-a\sigma_-)\le\frac{1}{a^2}.
Putting
a=\frac{k\sigma}{\sigma_-},
Chebyshev's inequality can now be written
\Pr(x\le m-k\sigma)\le\frac{1}{k^2}\frac{\sigma_-^2}{\sigma^2}.
A similar result can also be derived for the upper semivariance.
If we put
\sigma_u^2=\max(\sigma_-^2,\sigma_+^2),
Chebyshev's inequality can be written
\Pr(x\le m-k\sigma)\le\frac{1}{k^2}\frac{\sigma_u^2}{\sigma^2}.
Because \sigma_u^2\le\sigma^2, use of the semivariance sharpens the original inequality.
If the distribution is known to be symmetric, then
\sigma_+^2=\sigma_-^2=\frac{1}{2}\sigma^2
and
\Pr(x\le m-k\sigma)\le\frac{1}{2k^2}.
This result agrees with that derived using standardised variables.
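An illustrative sketch (not from the source; the standard normal sample and the choice k = 2 are arbitrary) computing the semivariances of a sample and the sharpened one-sided bound; for a symmetric sample each semivariance is roughly half the variance, recovering the 1/(2k^2) bound above:

```python
# Compute sample semivariances and the sharpened one-sided bound
# (1/k^2) * (sigma_minus^2 / sigma^2) for a symmetric (standard normal) sample.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)

m = x.mean()
var_minus = np.sum((m - x[x < m]) ** 2) / (len(x) - 1)  # lower semivariance
var_plus = np.sum((x[x > m] - m) ** 2) / (len(x) - 1)   # upper semivariance
var = var_minus + var_plus
sigma = np.sqrt(var)

k = 2.0
sharpened = (1 / k**2) * (var_minus / var)   # ~ 1/(2k^2) here
empirical = np.mean(x <= m - k * sigma)
print(f"1/k^2 = {1/k**2:.3f}, sharpened = {sharpened:.3f}, empirical = {empirical:.4f}")
```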
Stellato et al. simplified the notation and extended the empirical Chebyshev inequality from Saw et al. to the multivariate case. Let \xi be a random variable of dimension n_\xi and let N\geq n_\xi. We draw N+1 iid samples of \xi denoted \xi^{(1)},\ldots,\xi^{(N)},\xi^{(N+1)}. Based on the first N samples, we define the empirical mean as \mu_N=\frac{1}{N}\sum_{i=1}^{N}\xi^{(i)} and the unbiased empirical covariance as \Sigma_N=\frac{1}{N-1}\sum_{i=1}^{N}(\xi^{(i)}-\mu_N)(\xi^{(i)}-\mu_N)^\top. If \Sigma_N is invertible and \lambda>0, then
\begin{align}
&P^{N+1}\left((\xi^{(N+1)}-\mu_N)^\top\Sigma_N^{-1}(\xi^{(N+1)}-\mu_N)\geq\lambda^2\right)\\
&\qquad\leq\min\left\{1,\frac{1}{N+1}\left\lfloor\frac{n_\xi(N+1)(N^2-1+N\lambda^2)}{N^2\lambda^2}\right\rfloor\right\}.
\end{align}
In the univariate case, i.e. n_\xi=1, this inequality corresponds to the one from Saw et al. Moreover, the right-hand side can be simplified by upper bounding the floor function by its argument:
P^{N+1}\left((\xi^{(N+1)}-\mu_N)^\top\Sigma_N^{-1}(\xi^{(N+1)}-\mu_N)\geq\lambda^2\right)\leq\min\left\{1,\frac{n_\xi(N^2-1+N\lambda^2)}{N^2\lambda^2}\right\}.
As N\to\infty, the right-hand side tends to \min\{1,n_\xi/\lambda^2\}, which corresponds to the multivariate Chebyshev inequality over ellipsoids shaped according to \Sigma and centered in \mu.
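A small sketch (not from the cited work; it simply evaluates the simplified bound displayed above for assumed values of n_\xi, N and \lambda) showing the convergence to the distributional value n_\xi/\lambda^2:

```python
# Evaluate the simplified empirical multivariate bound for increasing N.
def empirical_chebyshev_bound(n_xi: int, N: int, lam: float) -> float:
    return min(1.0, n_xi * (N**2 - 1 + N * lam**2) / (N**2 * lam**2))

for N in (10, 100, 10_000):
    print(N, empirical_chebyshev_bound(2, N, lam=3.0), 2 / 3.0**2)
```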
Chebyshev's inequality is important because of its applicability to any distribution. As a result of its generality it may not (and usually does not) provide as sharp a bound as alternative methods that can be used if the distribution of the random variable is known. To improve the sharpness of the bounds provided by Chebyshev's inequality a number of methods have been developed; for a review see e.g.[12] [37]
Cantelli's inequality[38] due to Francesco Paolo Cantelli states that for a real random variable (X) with mean (μ) and variance (σ²)
\Pr(X-\mu\ge a)\le\frac{\sigma^2}{\sigma^2+a^2}
where a ≥ 0.
This inequality can be used to prove a one tailed variant of Chebyshev's inequality with k > 0[39]
\Pr(X-\mu\geq k\sigma)\leq\frac{1}{1+k^2}.
The bound on the one tailed variant is known to be sharp. To see this consider the random variable X that takes the values
X=1 with probability \frac{\sigma^2}{1+\sigma^2} and X=-\sigma^2 with probability \frac{1}{1+\sigma^2}.
Then E(X) = 0, E(X^2) = \sigma^2 and P(X < 1) = 1/(1 + \sigma^2).
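A quick check (illustrative only; \sigma^2 = 0.5 and a = 1 are arbitrary choices) that this two-point distribution attains Cantelli's bound with equality:

```python
# Direct check that the two-point distribution above attains Cantelli's bound.
sigma2 = 0.5
values = [1.0, -sigma2]
probs = [sigma2 / (1 + sigma2), 1 / (1 + sigma2)]

mean = sum(v * p for v, p in zip(values, probs))               # 0
var = sum((v - mean) ** 2 * p for v, p in zip(values, probs))  # sigma2

a = 1.0
tail = sum(p for v, p in zip(values, probs) if v - mean >= a)
print(tail, sigma2 / (sigma2 + a**2))  # both equal 1/3
```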
The one-sided variant can be used to prove the proposition that for probability distributions having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation. To express this in symbols let μ, ν, and σ be respectively the mean, the median, and the standard deviation. Then
\left|\mu-\nu\right|\leq\sigma.
There is no need to assume that the variance is finite because this inequality is trivially true if the variance is infinite.
The proof is as follows. Setting k = 1 in the statement for the one-sided inequality gives:
\Pr(X-\mu\geq\sigma)\leq\frac{1}{2}\implies\Pr(X\geq\mu+\sigma)\leq\frac{1}{2}.
Changing the sign of X and of μ, we get
\Pr(X\leq\mu-\sigma)\leq\frac{1}{2}.
As the median is by definition any real number m that satisfies the inequalities
\Pr(X\leq m)\geq\frac{1}{2}\quad\text{and}\quad\Pr(X\geq m)\geq\frac{1}{2},
this implies that the median lies within one standard deviation of the mean. A proof using Jensen's inequality also exists.
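As a small illustration (not from the source), the claim can be checked for a markedly skewed distribution such as the exponential with rate 1, whose mean, median and standard deviation are known in closed form:

```python
# Check |mu - nu| <= sigma for Exp(1): mu = 1, nu = ln 2, sigma = 1.
import math

mu, sigma = 1.0, 1.0
nu = math.log(2)                  # median of Exp(1)
print(abs(mu - nu), "<=", sigma)  # 0.3069 <= 1.0
```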
Bhattacharyya[40] extended Cantelli's inequality using the third and fourth moments of the distribution.
Let \mu=0, let \sigma^2 be the variance, and define the skewness \gamma=E[X^3]/\sigma^3 and the kurtosis \kappa=E[X^4]/\sigma^4.
If k^2-k\gamma-1>0, then
\Pr(X>k\sigma)\le\frac{\kappa-\gamma^2-1}{(\kappa-\gamma^2-1)(1+k^2)+(k^2-k\gamma-1)}.
The condition k^2-k\gamma-1>0 requires that k be reasonably large.
In the case E[X^3]=0 (zero skewness), this simplifies to
\Pr(X>k\sigma)\le\frac{\kappa-1}{\kappa\left(k^2+1\right)-2}\qquad\text{for }k>1.
For k close to 1 this bound expands as
\frac{\kappa-1}{\kappa\left(k^2+1\right)-2}=\frac{1}{2}-\frac{\kappa(k-1)}{2(\kappa-1)}+O\left((k-1)^2\right),
which may be compared with the expansion
\frac{1}{2}-\frac{k-1}{2}+O\left((k-1)^2\right)
of Cantelli's one-sided bound \frac{1}{1+k^2}. Since \kappa>1, the Bhattacharyya bound is the tighter of the two near k=1, and either one-sided bound wins a factor 2 over Chebyshev's inequality in that regime.
See main article: Gauss's inequality.
In 1823 Gauss showed that for a distribution with a unique mode at zero,[41]
\Pr(|X|\ge k)\le\frac{4\operatorname{E}(X^2)}{9k^2}\quad\text{if}\quad k^2\ge\frac{4}{3}\operatorname{E}(X^2),
\Pr(|X|\ge k)\le 1-\frac{k}{\sqrt{3\operatorname{E}(X^2)}}\quad\text{if}\quad k^2\le\frac{4}{3}\operatorname{E}(X^2).
See main article: Vysochanskij–Petunin inequality.
The Vysochanskij–Petunin inequality generalizes Gauss's inequality, which only holds for deviation from the mode of a unimodal distribution, to deviation from the mean, or more generally, any center.[42] If X is a unimodal distribution with mean μ and variance σ², then the inequality states that
\Pr(|X-\mu|\ge k\sigma)\le\frac{4}{9k^2}\quad\text{if }k\ge\sqrt{8/3}\approx 1.633,
\Pr(|X-\mu|\ge k\sigma)\le\frac{4}{3k^2}-\frac{1}{3}\quad\text{if }k\le\sqrt{8/3}.
For symmetrical unimodal distributions, the median and the mode are equal, so both the Vysochanskij–Petunin inequality and Gauss's inequality apply to the same center. Further, for symmetrical distributions, one-sided bounds can be obtained by noticing that
\Pr(X-\mu\ge k\sigma)=\Pr(X-\mu\le-k\sigma)=\frac{1}{2}\Pr(|X-\mu|\ge k\sigma).
The additional factor of 4/9 in these tail bounds leads to better confidence intervals than those obtained from Chebyshev's inequality.
DasGupta has shown that if the distribution is known to be normal[43]
\Pr(|X-\mu|\ge k\sigma)\le\frac{1}{3k^2}.
From DasGupta's inequality it follows that for a normal distribution at least 95% lies within approximately 2.582 standard deviations of the mean. This is less sharp than the true figure (approximately 1.96 standard deviations of the mean).
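An illustrative comparison (not from the source; the values of k are arbitrary, and the exact normal tail is computed from the error function) of the Chebyshev, Vysochanskij–Petunin and DasGupta bounds against the exact two-sided normal tail:

```python
# Compare Chebyshev, Vysochanskij-Petunin and DasGupta bounds with the exact
# two-sided normal tail P(|X - mu| >= k*sigma) = 2*(1 - Phi(k)).
import math

def normal_two_sided_tail(k: float) -> float:
    return 2 * (1 - 0.5 * (1 + math.erf(k / math.sqrt(2))))

for k in (2.0, 2.582, 3.0):
    cheb = 1 / k**2
    vp = 4 / (9 * k**2)        # valid here since k >= sqrt(8/3)
    dasgupta = 1 / (3 * k**2)
    print(f"k={k}: Chebyshev {cheb:.4f}, V-P {vp:.4f}, "
          f"DasGupta {dasgupta:.4f}, exact {normal_two_sided_tail(k):.4f}")
```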
Several other related inequalities are also known.
See main article: Paley–Zygmund inequality.
The Paley–Zygmund inequality gives a lower bound on tail probabilities, as opposed to Chebyshev's inequality which gives an upper bound.[46] Applying it to the square of a random variable, we get
\Pr(|Z|>\theta\sqrt{E[Z^2]})\ge\frac{(1-\theta^2)^2 E[Z^2]^2}{E[Z^4]}.
One use of Chebyshev's inequality in applications is to create confidence intervals for variates with an unknown distribution. Haldane noted,[47] using an equation derived by Kendall,[48] that if a variate (x) has a zero mean, unit variance and both finite skewness (γ) and kurtosis (κ) then the variate can be converted to a normally distributed standard score (z):
z=x-\frac{\gamma}{6}(x^2-1)+\frac{x}{72}\left[2\gamma^2(4x^2-7)-3\kappa(x^2-3)\right]+\cdots
This transformation may be useful as an alternative to Chebyshev's inequality or as an adjunct to it for deriving confidence intervals for variates with unknown distributions.
While this transformation may be useful for moderately skewed and/or kurtotic distributions, it performs poorly when the distribution is markedly skewed and/or kurtotic.
For any collection of n non-negative independent random variables X_i with expectation 1,[49]
\Pr\left(\frac{\sum_{i=1}^n X_i}{n}-1\ge\frac{1}{n}\right)\le\frac{7}{8}.
There is a second (less well known) inequality also named after Chebyshev[50]
If f, g : [a, b] → R are two monotonic functions of the same monotonicity, then
\frac{1}{b-a}\int_a^b f(x)g(x)\,dx\ge\left[\frac{1}{b-a}\int_a^b f(x)\,dx\right]\left[\frac{1}{b-a}\int_a^b g(x)\,dx\right].
If f and g are of opposite monotonicity, then the above inequality works in the reverse way.
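A small numerical check (illustrative only; f(x)=x and g(x)=x^2 on [0, 1] are arbitrary choices of two nondecreasing functions) of the integral inequality using a midpoint rule:

```python
# Check the integral inequality for f(x) = x and g(x) = x^2 on [0, 1]:
# the left side is ~1/4 and the right side is ~(1/2)*(1/3) = 1/6.
a, b, n = 0.0, 1.0, 100_000
h = (b - a) / n
xs = [a + (i + 0.5) * h for i in range(n)]

avg = lambda vals: sum(vals) * h / (b - a)   # average value of a function on [a, b]
f = [x for x in xs]
g = [x**2 for x in xs]

lhs = avg([fi * gi for fi, gi in zip(f, g)])
rhs = avg(f) * avg(g)
print(lhs, ">=", rhs)
```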
This inequality is related to Jensen's inequality,[51] Kantorovich's inequality,[52] the Hermite–Hadamard inequality[52] and Walter's conjecture.[53]
There are also a number of other inequalities associated with Chebyshev:
The Environmental Protection Agency has suggested best practices for the use of Chebyshev's inequality for estimating confidence intervals.[54]