Multidimensional Chebyshev's inequality

In probability theory, the multidimensional Chebyshev's inequality[1] is a generalization of Chebyshev's inequality, which puts a bound on the probability of the event that a random variable differs from its expected value by more than a specified amount.

Let

X

be an

N

-dimensional random vector with expected value

\mu=\operatorname{E}[X]

and covariance matrix

V=\operatorname{E}[(X-\mu)(X-\mu)^T].

If

V

is a positive-definite matrix, then for any real number

t>0

:

\Pr\left(\sqrt{(X-\mu)^T V^{-1}(X-\mu)}>t\right)\le \frac{N}{t^2}.
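As a quick numerical sanity check, the bound can be compared against a Monte Carlo estimate. This sketch is not from the source; the Gaussian distribution, seed, dimension, and sample size are arbitrary illustrative choices — the inequality itself holds for any distribution with a positive-definite covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary 3-dimensional example with a non-trivial covariance matrix.
N = 3
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((N, N))
V = A @ A.T + N * np.eye(N)          # positive-definite by construction
X = rng.multivariate_normal(mu, V, size=200_000)

Vinv = np.linalg.inv(V)
d = X - mu
# Mahalanobis distance sqrt((X-mu)^T V^{-1} (X-mu)) for each sample.
mahal = np.sqrt(np.einsum("ij,jk,ik->i", d, Vinv, d))

for t in (2.0, 3.0, 5.0):
    empirical = np.mean(mahal > t)
    bound = N / t**2
    print(f"t={t}: empirical={empirical:.4f}  bound N/t^2={bound:.4f}")
    assert empirical <= bound        # Chebyshev bound holds
```

For a Gaussian the squared Mahalanobis distance is chi-squared with N degrees of freedom, so the empirical tail probabilities come out far below the bound; the bound is distribution-free and therefore loose here.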

Proof

Since

V

is positive-definite, so is

V^{-1}

. Define the random variable

y=(X-\mu)^T V^{-1}(X-\mu).

Since

y

is nonnegative, Markov's inequality applies:

\Pr\left(\sqrt{(X-\mu)^T V^{-1}(X-\mu)}>t\right)=\Pr(\sqrt{y}>t)=\Pr(y>t^2)\le \frac{\operatorname{E}[y]}{t^2}.

Finally,

\begin{align} \operatorname{E}[y]&=\operatorname{E}[(X-\mu)^T V^{-1}(X-\mu)]\\[6pt] &=\operatorname{E}[\operatorname{trace}(V^{-1}(X-\mu)(X-\mu)^T)]\\[6pt] &=\operatorname{trace}(V^{-1}V)=N. \end{align}

[1] [2]
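The final step of the proof, E[y] = trace(V⁻¹V) = N, can be checked numerically. This is an illustrative sketch, not from the source; the distribution, seed, and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Check that E[y] = N for y = (X-mu)^T V^{-1} (X-mu), via the trace identity.
N = 4
mu = np.zeros(N)
A = rng.standard_normal((N, N))
V = A @ A.T + np.eye(N)              # positive-definite covariance
X = rng.multivariate_normal(mu, V, size=500_000)

Vinv = np.linalg.inv(V)
d = X - mu
y = np.einsum("ij,jk,ik->i", d, Vinv, d)

print(np.mean(y))                    # Monte Carlo estimate, close to N = 4
print(np.trace(Vinv @ V))            # exactly N up to rounding
```

The trace identity E[trace(V⁻¹(X−μ)(X−μ)ᵀ)] = trace(V⁻¹V) is what makes the expectation equal the dimension N regardless of the particular covariance V.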

Infinite dimensions

There is a straightforward extension of the vector version of Chebyshev's inequality to infinite-dimensional settings. Let

X

be a random variable that takes values in a Fréchet space

\mathcal{X}

(equipped with seminorms \|\cdot\|_\alpha). This includes most common settings of vector-valued random variables, e.g., when

\mathcal{X}

is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting as described above.

Suppose that

X

is of "strong order two", meaning that

\operatorname{E}\left(\|X\|_\alpha^2\right)<\infty

for every seminorm \|\cdot\|_\alpha. This is a generalization of the requirement that X have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.[4]

Let

\mu\in\mathcal{X}

be the Pettis integral of X (i.e., the vector generalization of the mean), and let

\sigma_\alpha:=\sqrt{\operatorname{E}\|X-\mu\|_\alpha^2}

be the standard deviation with respect to the seminorm \|\cdot\|_\alpha. In this setting we can state the following:

General version of Chebyshev's inequality.

\forall k>0:\quad \Pr\left(\|X-\mu\|_\alpha\ge k\sigma_\alpha\right)\le \frac{1}{k^2}.

Proof. The proof is straightforward, and essentially the same as the finitary version. If \sigma_\alpha=0, then X is constant (and equal to \mu) almost surely, so the inequality is trivial.

If

\|X-\mu\|_\alpha\ge k\sigma_\alpha

then \|X-\mu\|_\alpha^2>0, so we may safely divide by \|X-\mu\|_\alpha^2. The crucial trick in Chebyshev's inequality is to recognize that

1=\tfrac{\|X-\mu\|_\alpha^2}{\|X-\mu\|_\alpha^2}

on this event.

The following calculations complete the proof:

\begin{align} \Pr\left(\|X-\mu\|_\alpha\ge k\sigma_\alpha\right)&=\int_\Omega 1_{\|X-\mu\|_\alpha\ge k\sigma_\alpha}\,d\Pr\\ &=\int_\Omega\left(\frac{\|X-\mu\|_\alpha^2}{\|X-\mu\|_\alpha^2}\right)1_{\|X-\mu\|_\alpha\ge k\sigma_\alpha}\,d\Pr\\[6pt] &\le\int_\Omega\left(\frac{\|X-\mu\|_\alpha^2}{(k\sigma_\alpha)^2}\right)1_{\|X-\mu\|_\alpha\ge k\sigma_\alpha}\,d\Pr\\[6pt] &\le\frac{1}{k^2\sigma_\alpha^2}\int_\Omega\|X-\mu\|_\alpha^2\,d\Pr &&1_{\|X-\mu\|_\alpha\ge k\sigma_\alpha}\le 1\\[6pt] &=\frac{1}{k^2\sigma_\alpha^2}\left(\operatorname{E}\|X-\mu\|_\alpha^2\right)\\[6pt] &=\frac{1}{k^2\sigma_\alpha^2}\left(\sigma_\alpha^2\right)\\[6pt] &=\frac{1}{k^2} \end{align}
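The seminorm version can also be probed numerically with a toy Hilbert-space-valued random variable. This sketch is not from the source: the random function X(t), its sine basis, the grid, and the discretized L² seminorm are all arbitrary illustrative choices, and the empirical mean stands in for the Pettis integral.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy function-valued random variable: X(t) = sum_i c_i sin(i*pi*t) with
# random coefficients c_i, discretized on a grid.
grid = np.linspace(0.0, 1.0, 101)
n_samples = 50_000
coeffs = rng.standard_normal((n_samples, 3))
basis = np.sin(np.pi * np.outer(np.arange(1, 4), grid))  # 3 sine modes
X = coeffs @ basis                                       # shape (n_samples, 101)

def l2_seminorm(f):
    # Discretized L^2 seminorm on the grid (any seminorm works for the bound).
    return np.sqrt(np.mean(f**2, axis=-1))

mu = X.mean(axis=0)            # empirical stand-in for the Pettis integral
sigma = np.sqrt(np.mean(l2_seminorm(X - mu)**2))

for k in (2.0, 3.0):
    empirical = np.mean(l2_seminorm(X - mu) >= k * sigma)
    print(f"k={k}: empirical={empirical:.4f}  bound 1/k^2={1/k**2:.4f}")
    assert empirical <= 1 / k**2   # general Chebyshev bound holds
```

Because the squared seminorm here concentrates like a low-degree chi-squared variable, the empirical tail probabilities sit well below the distribution-free 1/k² bound, mirroring the finite-dimensional experiment.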

Notes and References

  1. Marshall, Albert W.; Olkin, Ingram (December 1960). "Multivariate Chebyshev Inequalities". The Annals of Mathematical Statistics. 31 (4): 1001–1014. doi:10.1214/aoms/1177705673. ISSN 0003-4851.
  2. Navarro, Jorge (2013-05-24). "A simple proof for the multivariate Chebyshev inequality". arXiv:1305.5646 [math.ST].
  3. Ait-Haddou, Rachid; Mazure, Marie-Laurence (2018-02-01). "The Fundamental Blossoming Inequality in Chebyshev Spaces—I: Applications to Schur Functions". Foundations of Computational Mathematics. 18 (1): 135–158. doi:10.1007/s10208-016-9334-8. ISSN 1615-3383.
  4. Vakhania, Nikolai Nikolaevich (1981). Probability distributions on linear spaces. New York: North Holland.