Sub-Gaussian distribution

In probability theory, a subgaussian distribution, the distribution of a subgaussian random variable, is a probability distribution with strong tail decay. More specifically, the tails of a subgaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian. This property gives subgaussian distributions their name.

Often in analysis, we divide an object (such as a random variable) into two parts, a central bulk and a distant tail, then analyze each separately. In probability, this division usually goes like "everything interesting happens near the center; the tail event is so rare that we may safely ignore it." Subgaussian distributions are worthy of study because the Gaussian distribution is well understood, and so we can give sharp bounds on the rarity of the tail event. Similarly, the subexponential distributions are also worthy of study.

Formally, the probability distribution of a random variable X is called subgaussian if there is a positive constant C such that for every t \geq 0,

\operatorname{P}(|X| \geq t) \leq 2 \exp\left(-\frac{t^2}{C^2}\right).

There are many equivalent definitions. For example, a random variable X is sub-Gaussian if and only if its tail distribution function is bounded from above (up to a constant) by the tail of a Gaussian:

\operatorname{P}(|X| \geq t) \leq c\, \operatorname{P}(|Z| \geq t) \quad \text{for all } t > 0,

where c > 0 is a constant and Z is a mean-zero Gaussian random variable.

Definitions

Subgaussian norm

The subgaussian norm of X, denoted as \Vert X \Vert_{\psi_2}, is

\Vert X \Vert_{\psi_2} = \inf\left\{ t > 0 : \operatorname{E}\left[\exp\left(\frac{X^2}{t^2}\right)\right] \leq 2 \right\}.

In other words, it is the Orlicz norm of X generated by the Orlicz function \Phi(u) = e^{u^2} - 1. By condition (2) below, subgaussian random variables can be characterized as those random variables with finite subgaussian norm.
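The infimum in this definition is easy to approximate numerically. The following Python sketch is a minimal illustration only: the helper name psi2_norm, the sample sizes, the seed, and the bisection bounds are arbitrary choices, and the Monte Carlo estimate of the expectation is noisy near the true norm. It bisects on t over the empirical criterion \operatorname{E}[\exp(X^2/t^2)] \leq 2.

```python
import numpy as np

def psi2_norm(samples, lo=1e-6, hi=1e6, iters=60):
    """Estimate the subgaussian (psi_2) norm of a sample:
    the smallest t with mean(exp(X^2 / t^2)) <= 2."""
    x2 = np.asarray(samples) ** 2
    def ok(t):                       # empirical criterion E[exp(X^2/t^2)] <= 2
        return np.mean(np.exp(x2 / t**2)) <= 2.0
    for _ in range(iters):           # bisection: ok(hi) holds, ok(lo) fails
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if ok(mid) else (mid, hi)
    return hi

rng = np.random.default_rng(0)
print(psi2_norm(rng.standard_normal(10**6)))      # roughly sqrt(8/3) ~ 1.63
print(psi2_norm(rng.choice([-1.0, 1.0], 10**6)))  # 1/sqrt(ln 2) ~ 1.20
```

For standard normal samples the result should land near \sqrt{8/3} \approx 1.63, and for Rademacher samples near 1/\sqrt{\ln 2} \approx 1.20, matching the table of examples given below.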

Variance proxy

If there exists some s^2 such that

\operatorname{E}\left[e^{(X - \operatorname{E}[X])t}\right] \leq e^{\frac{s^2 t^2}{2}}

for all t, then s^2 is called a variance proxy, and the smallest such s^2 is called the optimal variance proxy and denoted by \Vert X \Vert_{vp}^2.

Since \operatorname{E}\left[e^{(X - \operatorname{E}[X])t}\right] = e^{\frac{\sigma^2 t^2}{2}} when X \sim \mathcal{N}(\mu, \sigma^2) is Gaussian, we then have \Vert X \Vert_{vp}^2 = \sigma^2, as it should.
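Equivalently, for a centered variable the optimal variance proxy is \sup_{t \neq 0} 2\ln \operatorname{E}[e^{tX}]/t^2. The sketch below is only an illustration: the grid range and the two example distributions are arbitrary choices, and exact log-MGFs are used to avoid Monte Carlo error.

```python
import numpy as np

def variance_proxy(log_mgf, t_grid):
    """Optimal variance proxy of a centered variable: sup_t 2*ln E[e^{tX}] / t^2."""
    t = np.asarray(t_grid)
    return np.max(2 * log_mgf(t) / t**2)

# grid of nonzero t on both sides of 0 (the supremum may sit at negative t)
t = np.concatenate([np.linspace(-50, -1e-3, 10**5), np.linspace(1e-3, 50, 10**5)])

# Rademacher (+1/-1 with probability 1/2): E[e^{tX}] = cosh(t), proxy = 1 = Var[X]
print(variance_proxy(lambda u: np.log(np.cosh(u)), t))

# centered Bernoulli with p = 0.9: X = q w.p. p and -p w.p. q, where q = 1 - p
p, q = 0.9, 0.1
print(variance_proxy(lambda u: np.log(p * np.exp(q * u) + q * np.exp(-p * u)), t))
# ~ (p - q) / (2 * ln(p / q)) ~ 0.182, strictly larger than Var[X] = p*q = 0.09
```

The second example shows that the optimal variance proxy of a skewed Bernoulli variable is attained at a nonzero t and strictly exceeds the variance, in line with the formula given in the Bernoulli example below.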

Equivalent definitions

Let X be a random variable. The following conditions are equivalent (Proposition 2.5.2 [1]):
  1. Tail probability bound: \operatorname{P}(|X| \geq t) \leq 2\exp(-t^2/K_1^2) for all t \geq 0, where K_1 is a positive constant;
  2. Finite subgaussian norm: \Vert X \Vert_{\psi_2} = K_2 < \infty;
  3. Moment: \operatorname{E}|X|^p \leq 2K_3^p \Gamma\left(\frac{p}{2}+1\right) for all p \geq 1, where K_3 is a positive constant and \Gamma is the Gamma function;
  4. Moment: \operatorname{E}|X|^p \leq K_4^p p^{p/2} for all p \geq 1, where K_4 is a positive constant;
  5. Moment-generating function (of X), or variance proxy: \operatorname{E}\left[e^{(X - \operatorname{E}[X])t}\right] \leq e^{\frac{K^2 t^2}{2}} for all t, where K is a positive constant;
  6. Moment-generating function (of X^2): for some K > 0, \operatorname{E}\left[e^{X^2 t^2}\right] \leq e^{K^2 t^2} for all t \in [-1/K, +1/K];
  7. Union bound: for some c > 0, \operatorname{E}\left[\max\{|X_1 - \operatorname{E}[X]|, \ldots, |X_n - \operatorname{E}[X]|\}\right] \leq c\sqrt{\log n} for all n > c, where X_1, \ldots, X_n are i.i.d. copies of X;
  8. Subexponential: X^2 has a subexponential distribution.

Furthermore, the constant K is the same in the definitions (1) to (5), up to an absolute constant. So for example, given a random variable satisfying (1) and (2), the minimal constants K_1, K_2 in the two definitions satisfy K_1 \leq c K_2 and K_2 \leq c' K_1, where c, c' are constants independent of the random variable.
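As a quick sanity check of conditions (1) and (5), the following Monte Carlo sketch is illustrative only: it uses the explicit constants K_1 = \sqrt{2} and K = 1, which are the standard ones for the standard normal, and the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(10**6)                   # X ~ N(0,1), already mean zero

# Condition (1): P(|X| >= t) <= 2*exp(-t^2/K_1^2) holds with K_1 = sqrt(2)
for t in [0.5, 1.0, 2.0, 3.0]:
    print(t, np.mean(np.abs(x) >= t), 2 * np.exp(-t**2 / 2))

# Condition (5): E[e^{tX}] <= exp(K^2 t^2 / 2) holds with K = 1
# (for the Gaussian the bound is an equality, so the two columns should agree
#  up to Monte Carlo error)
for t in [0.5, 1.0, 2.0]:
    print(t, np.mean(np.exp(t * x)), np.exp(t**2 / 2))
```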

Proof of equivalence

As an example, the first four definitions are equivalent by the proof below.

Proof.

(1) \implies (3): By the layer cake representation,

\operatorname{E}|X|^p = \int_0^\infty \operatorname{P}(|X|^p \geq t)\, dt = \int_0^\infty p t^{p-1}\operatorname{P}(|X| \geq t)\, dt \leq 2\int_0^\infty p t^{p-1}\exp\left(-\frac{t^2}{K_1^2}\right) dt.

After the change of variables u = t^2/K_1^2, we find that

\operatorname{E}|X|^p \leq 2K_1^p \frac{p}{2}\int_0^\infty u^{p/2-1}e^{-u}\, du = 2K_1^p \frac{p}{2}\Gamma\left(\frac{p}{2}\right) = 2K_1^p \Gamma\left(\frac{p}{2}+1\right).

(3) \implies (2): By the Taylor series e^x = 1 + \sum_{p=1}^\infty \frac{x^p}{p!},

\operatorname{E}[\exp(\lambda X^2)] = 1 + \sum_{p=1}^\infty \frac{\lambda^p \operatorname{E}[X^{2p}]}{p!} \leq 1 + \sum_{p=1}^\infty \frac{\lambda^p \cdot 2K_3^{2p}\Gamma(p+1)}{p!} = 1 + 2\sum_{p=1}^\infty (\lambda K_3^2)^p = 2\sum_{p=0}^\infty (\lambda K_3^2)^p - 1 = \frac{2}{1 - \lambda K_3^2} - 1 \quad \text{for } \lambda K_3^2 < 1,

which is less than or equal to 2 for \lambda \leq \frac{1}{3K_3^2}. Let K_2 \geq \sqrt{3}\,K_3, then \operatorname{E}[\exp(X^2/K_2^2)] \leq 2.

(2) \implies (1): By Markov's inequality,

\operatorname{P}(|X| \geq t) = \operatorname{P}\left(\exp\left(\frac{X^2}{K_2^2}\right) \geq \exp\left(\frac{t^2}{K_2^2}\right)\right) \leq \frac{\operatorname{E}[\exp(X^2/K_2^2)]}{\exp(t^2/K_2^2)} \leq 2\exp\left(-\frac{t^2}{K_2^2}\right).

(3) \iff (4): by the asymptotic formula for the Gamma function, \Gamma(p/2+1) \sim \sqrt{\pi p}\left(\frac{p}{2e}\right)^{p/2}.

From the proof, we can extract a cycle of three inequalities:

If \operatorname{P}(|X| \geq t) \leq 2\exp(-t^2/K^2), then \operatorname{E}|X|^p \leq 2K^p \Gamma\left(\frac{p}{2}+1\right) for all p \geq 1.

If \operatorname{E}|X|^p \leq 2K^p \Gamma\left(\frac{p}{2}+1\right) for all p \geq 1, then \|X\|_{\psi_2} \leq \sqrt{3}\,K.

If \|X\|_{\psi_2} \leq K, then \operatorname{P}(|X| \geq t) \leq 2\exp(-t^2/K^2).

In particular, the constant K provided by each definition is the same up to a constant factor, so we can say that the definitions are equivalent up to a constant independent of X.

Similarly, because \Gamma(p/2+1) = p^{p/2}\left((2e)^{-1/2}p^{1/(2p)}\right)^p up to a positive multiplicative constant for all p \geq 1, the definitions (3) and (4) are also equivalent up to a constant.
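A small numerical check of the (3) \iff (4) step; this is illustrative only, and it verifies the boundedness of the relevant ratio rather than any particular constant.

```python
import numpy as np
from scipy.special import gammaln

# Gamma(p/2 + 1)^(1/p) compared with sqrt(p): the ratio stays bounded for p >= 1
# (it tends to (2e)^(-1/2) ~ 0.429), so the moment bounds (3) and (4) trade the
# constant K only for a constant multiple of K.
for p in [1, 2, 5, 10, 50, 100, 1000]:
    print(p, np.exp(gammaln(p / 2 + 1) / p) / np.sqrt(p))
```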

Basic properties

Proposition. If X is subgaussian and k > 0, then \|kX\|_{\psi_2} = k\|X\|_{\psi_2} and \|kX\|_{vp} = k\|X\|_{vp}. If X, Y are subgaussian, then \|X+Y\|_{vp}^2 \leq \left(\|X\|_{vp} + \|Y\|_{vp}\right)^2.

Proposition. (Chernoff bound) If X is subgaussian with mean zero, then \operatorname{P}(X \geq t) \leq e^{-\frac{t^2}{2\|X\|_{vp}^2}} for all t \geq 0.
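As a numerical illustration of the Chernoff bound (sample size and seed are arbitrary), take the standard normal, whose optimal variance proxy is 1.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(10**6)          # mean zero, ||X||_vp^2 = 1

for t in [1.0, 2.0, 3.0]:
    print(t, np.mean(x >= t), np.exp(-t**2 / 2))   # empirical P(X >= t) vs. bound
```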

Definition. X \lesssim X' means that X \leq C X', where the positive constant C is independent of X and X'.

Proposition. If X is subgaussian, then \|X - \operatorname{E}[X]\|_{\psi_2} \lesssim \|X\|_{\psi_2}.

Proof. By the triangle inequality, \|X - \operatorname{E}[X]\|_{\psi_2} \leq \|X\|_{\psi_2} + \|\operatorname{E}[X]\|_{\psi_2}. Now, the \psi_2 norm of the constant \operatorname{E}[X] is \|\operatorname{E}[X]\|_{\psi_2} = \frac{|\operatorname{E}[X]|}{\sqrt{\ln 2}} \leq \frac{\operatorname{E}[|X|]}{\sqrt{\ln 2}} \simeq \operatorname{E}[|X|]. By the equivalence of definitions (2) and (4) of subgaussianity given above, we have \operatorname{E}[|X|] \lesssim \|X\|_{\psi_2}.

Proposition. If X, Y are subgaussian and independent, then \|X+Y\|_{vp}^2 \leq \|X\|_{vp}^2 + \|Y\|_{vp}^2.

Proof. If independent, then use that the cumulant generating function of independent random variables is additive. That is,

\ln \operatorname{E}[e^{t(X+Y)}] = \ln \operatorname{E}[e^{tX}] + \ln \operatorname{E}[e^{tY}].

If not independent, then by Hölder's inequality, for any 1/p + 1/q = 1 we have

\operatorname{E}[e^{t(X+Y)}] = \|e^{t(X+Y)}\|_1 \leq \|e^{tX}\|_p \|e^{tY}\|_q \leq e^{\frac{t^2}{2}\left(p\|X\|_{vp}^2 + q\|Y\|_{vp}^2\right)}.

Solving the optimization problem

\begin{cases} \min\; p\|X\|_{vp}^2 + q\|Y\|_{vp}^2 \\ 1/p + 1/q = 1 \end{cases}

we obtain the result.

Corollary. Linear sums of subgaussian random variables are subgaussian.
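As a numerical illustration of the additivity in the independent case (the choice of Rademacher summands and of n is arbitrary), the exact log-MGF of a sum of n independent Rademacher variables stays below n t^2/2.

```python
import numpy as np

n = 20
t = np.linspace(-10, 10, 2001)
log_mgf_sum = n * np.log(np.cosh(t))     # S = sum of n Rademachers: E[e^{tS}] = cosh(t)^n
print(np.all(log_mgf_sum <= n * t**2 / 2))   # True: ||S||_vp^2 <= n * ||X||_vp^2 = n
```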

Strictly subgaussian

Expanding the cumulant generating function of a mean-zero subgaussian random variable X with variance proxy s^2,

\frac{1}{2}s^2 t^2 \geq \ln \operatorname{E}[e^{tX}] = \frac{1}{2}\operatorname{Var}[X]\, t^2 + \kappa_3 t^3 + \cdots,

we find that \operatorname{Var}[X] \leq \|X\|_{vp}^2. At the edge of possibility, a random variable X satisfying \operatorname{Var}[X] = \|X\|_{vp}^2 is called strictly subgaussian.

Properties

Theorem.[2] Let X be a subgaussian random variable with mean zero. If all zeros of its characteristic function are real, then X is strictly subgaussian.

Corollary. If X_1, \ldots, X_n are independent and strictly subgaussian, then any linear sum of them is strictly subgaussian.

Examples

By calculating the characteristic functions, we can show that some distributions are strictly subgaussian: symmetric uniform distribution, symmetric Bernoulli distribution.

Since a symmetric uniform distribution is strictly subgaussian, its convolution with itself is strictly subgaussian. That is, the symmetric triangular distribution is strictly subgaussian.

Since the symmetric Bernoulli distribution is strictly subgaussian, any symmetric Binomial distribution is strictly subgaussian.
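A quick numerical confirmation for the symmetric Bernoulli (Rademacher) case, using its exact moment-generating function \cosh t; the grid range is an arbitrary choice.

```python
import numpy as np

t = np.linspace(-30, 30, 6001)
t = t[t != 0]
print(np.all(np.log(np.cosh(t)) <= t**2 / 2))    # the proxy 1 works for all t
print(np.max(2 * np.log(np.cosh(t)) / t**2))     # sup ~ 1 = Var[X], so strictly subgaussian
```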

Examples

| X | \Vert X \Vert_{\psi_2} | \Vert X \Vert_{vp}^2 | strictly subgaussian? |
|---|---|---|---|
| gaussian distribution \mathcal{N}(0,1) | \sqrt{8/3} | 1 | Yes |
| mean-zero Bernoulli distribution p\delta_q + q\delta_{-p} | solution to p e^{(q/t)^2} + q e^{(p/t)^2} = 2 | \frac{p-q}{2(\log p - \log q)} | Iff p = 0, 1/2, 1 |
| symmetric Bernoulli distribution \frac{1}{2}\delta_{1/2} + \frac{1}{2}\delta_{-1/2} | \frac{1}{2\sqrt{\ln 2}} | 1/4 | Yes |
| uniform distribution U(0,1) | solution to \int_0^1 e^{x^2/t^2}\, dx = 2, approximately 0.7727 | 1/12 | Yes |
| arbitrary distribution on interval [a,b] | | \leq \left(\frac{b-a}{2}\right)^2 | |

The optimal variance proxy \Vert X \Vert_{vp}^2 is known for many standard probability distributions, including the beta, Bernoulli, Dirichlet, Kumaraswamy, triangular, truncated Gaussian, and truncated exponential.
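The entry for the uniform distribution can be reproduced numerically; the sketch below is illustrative (the bracketing interval for the root is chosen by hand) and solves the defining equation with SciPy.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# psi_2 norm of U(0,1): the smallest t with  integral_0^1 exp(x^2/t^2) dx = 2
def criterion(t):
    return quad(lambda x: np.exp(x**2 / t**2), 0, 1)[0] - 2.0

print(brentq(criterion, 0.3, 3.0))      # ~ 0.7727, matching the table
```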

Bernoulli distribution

Let p, q > 0 with p + q = 1. Let X be a centered Bernoulli random variable with distribution p\delta_q + q\delta_{-p}, so that it has mean zero. Then

\Vert X \Vert_{vp}^2 = \frac{p-q}{2(\log p - \log q)}.[3]

Its subgaussian norm is t, where t is the unique positive solution to p e^{(q/t)^2} + q e^{(p/t)^2} = 2.

Let X be a random variable with symmetric Bernoulli distribution (or Rademacher distribution). That is, X takes values -1 and 1 with probabilities 1/2 each. Since X^2 = 1, it follows that

\Vert X \Vert_{\psi_2} = \inf\left\{ t>0 : \operatorname{E}\left[\exp\left(\frac{X^2}{t^2}\right)\right] \leq 2 \right\} = \inf\left\{ t>0 : \exp\left(\frac{1}{t^2}\right) \leq 2 \right\} = \frac{1}{\sqrt{\ln 2}},

and hence X is a subgaussian random variable.

Bounded distributions

Bounded distributions have no tail at all, so clearly they are subgaussian.

If X is bounded within the interval [a, b], Hoeffding's lemma states that \Vert X \Vert_{vp}^2 \leq \left(\frac{b-a}{2}\right)^2. Hoeffding's inequality is the Chernoff bound obtained using this fact.
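As an illustration of Hoeffding's lemma (the Beta(2,5) example, sample size, seed, and grid are arbitrary choices), the empirical log-MGF of a centered variable supported on [0, 1] stays below t^2/8.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.beta(2, 5, 10**6)                # an arbitrary distribution bounded in [a, b] = [0, 1]
xc = x - x.mean()                        # center it

t = np.linspace(-10, 10, 401)
log_mgf = np.array([np.log(np.mean(np.exp(u * xc))) for u in t])
print(np.all(log_mgf <= t**2 / 8))       # Hoeffding's lemma: vp^2 <= ((b-a)/2)^2 = 1/4
```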

Convolutions

Since the sum of subgaussian random variables is still subgaussian, the convolution of subgaussian distributions is still subgaussian. In particular, any convolution of the normal distribution with any bounded distribution is subgaussian.

Mixtures

Given subgaussian distributions X_1, X_2, \ldots, X_n, we can construct an additive mixture X as follows: first randomly pick an index i \in \{1, 2, \ldots, n\} with probability p_i, then pick X_i.

Since

\operatorname{E}\left[\exp\left(\frac{X^2}{c^2}\right)\right] = \sum_i p_i \operatorname{E}\left[\exp\left(\frac{X_i^2}{c^2}\right)\right],

we have \|X\|_{\psi_2} \leq \max_i \|X_i\|_{\psi_2}, and so the mixture is subgaussian.

In particular, any gaussian mixture is subgaussian.

More generally, the mixture of infinitely many subgaussian distributions is also subgaussian, if the subgaussian norm has a finite supremum: \|X\|_{\psi_2} \leq \sup_i \|X_i\|_{\psi_2}.

Subgaussian random vectors

So far, we have discussed subgaussianity for real-valued random variables. We can also define subgaussianity for random vectors. The purpose of subgaussianity is to make the tails decay fast, so we generalize accordingly: a subgaussian random vector is a random vector where the tail decays fast.

Let X be a random vector taking values in \R^n.

Define \|X\|_{\psi_2} := \sup_{v \in S^{n-1}} \|v^T X\|_{\psi_2}, where S^{n-1} is the unit sphere in \R^n. Then X is subgaussian iff \|X\|_{\psi_2} < \infty.

Theorem. (Theorem 3.4.6) For any positive integer n, the uniformly distributed random vector X \sim U(\sqrt{n}\, S^{n-1}) is subgaussian, with \|X\|_{\psi_2} \lesssim 1.

This is not so surprising, because as n \to \infty, the projection of U(\sqrt{n}\, S^{n-1}) to the first coordinate converges in distribution to the standard normal distribution.
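A quick simulation consistent with this (the dimension n, the sample size, the seed, and the choice of the first coordinate as the test direction are arbitrary): sampling X \sim U(\sqrt{n}\, S^{n-1}) by normalizing Gaussian vectors, one marginal v^T X has variance about 1 and a tail below a subgaussian bound with constant of order 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 200, 10**5
g = rng.standard_normal((m, n))
x = np.sqrt(n) * g / np.linalg.norm(g, axis=1, keepdims=True)   # X ~ U(sqrt(n) * S^{n-1})

coord = x[:, 0]      # the marginal v^T X with v = e_1
print(coord.var(), np.mean(np.abs(coord) >= 2.0), 2 * np.exp(-2.0**2 / 2))
# variance ~ 1, and the tail at t = 2 sits below the subgaussian bound 2*exp(-t^2/2)
```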

Maximum inequalities

Proposition. If X_1, \ldots, X_n are mean-zero subgaussians, with \|X_i\|_{vp}^2 \leq \sigma^2, then for any \delta > 0, we have

\max(X_1, \ldots, X_n) \leq \sigma\sqrt{2\ln\frac{n}{\delta}}

with probability \geq 1 - \delta.

Proof. By the Chernoff bound, \operatorname{P}\left(X_i \geq \sigma\sqrt{2\ln(n/\delta)}\right) \leq \delta/n. Now apply the union bound.

Proposition. (Exercise 2.5.10) If X_1, X_2, \ldots are subgaussians, with \|X_i\|_{\psi_2} \leq K, then

\operatorname{E}\left[\sup_n \frac{|X_n|}{\sqrt{1+\ln n}}\right] \lesssim K, \quad \operatorname{E}\left[\max_{1 \leq n \leq N} |X_n|\right] \lesssim K\sqrt{\ln N}.

Further, the bound is sharp, since when X_1, X_2, \ldots are IID samples of \mathcal{N}(0,1) we have \operatorname{E}\left[\max_{1 \leq n \leq N}|X_n|\right] \gtrsim \sqrt{\ln N}.[4] [5]
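A Monte Carlo illustration of both statements for IID \mathcal{N}(0,1) samples (here \sigma^2 = 1; the values of n, the number of trials, \delta, and the seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, trials, delta = 1000, 2000, 0.05
m = rng.standard_normal((trials, n)).max(axis=1)     # max of n IID N(0,1), so sigma^2 = 1

print(np.mean(m <= np.sqrt(2 * np.log(n / delta))))  # should be >= 1 - delta = 0.95
print(m.mean(), np.sqrt(2 * np.log(n)))              # E[max] is of order sqrt(2 ln n) ~ 3.72
```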

Theorem. (over a finite set) If X_1, \ldots, X_n are subgaussian, with \|X_i\|_{vp}^2 \leq \sigma^2, then

\operatorname{E}[\max_i (X_i - \operatorname{E}[X_i])] \leq \sigma\sqrt{2\ln n}, \quad \operatorname{P}(\max_i X_i > t) \leq n e^{-\frac{t^2}{2\sigma^2}},

\operatorname{E}[\max_i |X_i - \operatorname{E}[X_i]|] \leq \sigma\sqrt{2\ln(2n)}, \quad \operatorname{P}(\max_i |X_i| > t) \leq 2n e^{-\frac{t^2}{2\sigma^2}}.

Theorem. (over a convex polytope) Fix a finite set of vectors v_1, \ldots, v_n. If X is a random vector, such that each \|v_i^T X\|_{vp}^2 \leq \sigma^2, then the above 4 inequalities hold, with \max_{v \in \operatorname{conv}(v_1, \ldots, v_n)} v^T X replacing \max_i X_i.

Here, \operatorname{conv}(v_1, \ldots, v_n) is the convex polytope spanned by the vectors v_1, \ldots, v_n.

Theorem. (over a ball) If X is a random vector in \R^d, such that \|v^T X\|_{vp}^2 \leq \sigma^2 for all v on the unit sphere S, then

\operatorname{E}\left[\max_{v \in S} v^T X\right] = \operatorname{E}\left[\max_{v \in S} |v^T X|\right] \leq 4\sigma\sqrt{d}.

For any \delta > 0, with probability at least 1 - \delta,

\max_{v \in S} v^T X = \max_{v \in S} |v^T X| \leq 4\sigma\sqrt{d} + 2\sigma\sqrt{2\ln(1/\delta)}.

Inequalities

Theorem. (Theorem 2.6.1) There exists a positive constant C such that given any number of independent mean-zero subgaussian random variables X_1, \ldots, X_n,

\left\|\sum_{i=1}^n X_i\right\|_{\psi_2}^2 \leq C\sum_{i=1}^n \left\|X_i\right\|_{\psi_2}^2.

Theorem. (Hoeffding's inequality) (Theorem 2.6.3) There exists a positive constant c such that given any number of independent mean-zero subgaussian random variables X_1, \ldots, X_N,

\operatorname{P}\left(\left|\sum_{i=1}^N X_i\right| \geq t\right) \leq 2\exp\left(-\frac{c\,t^2}{\sum_{i=1}^N \|X_i\|_{\psi_2}^2}\right) \quad \forall t > 0.

Theorem. (Bernstein's inequality) (Theorem 2.8.1) There exists a positive constant c such that given any number of independent mean-zero subexponential random variables X_1, \ldots, X_N,

\operatorname{P}\left(\left|\sum_{i=1}^N X_i\right| \geq t\right) \leq 2\exp\left(-c\min\left(\frac{t^2}{\sum_{i=1}^N \|X_i\|_{\psi_1}^2}, \frac{t}{\max_i \|X_i\|_{\psi_1}}\right)\right) \quad \forall t > 0.

Theorem. (Khinchine inequality) (Exercise 2.6.5) There exists a positive constant C such that given any number of independent mean-zero, variance-one subgaussian random variables X_1, \ldots, X_N with \|X_i\|_{\psi_2} \leq K, any p \geq 2, and any a_1, \ldots, a_N \in \R,

\left(\sum_{i=1}^N a_i^2\right)^{1/2} \leq \left\|\sum_{i=1}^N a_i X_i\right\|_{L^p} \leq CK\sqrt{p}\left(\sum_{i=1}^N a_i^2\right)^{1/2}.
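As a numerical illustration of Hoeffding's inequality, consider its common variance-proxy form for weighted Rademacher sums, \operatorname{P}(|\sum_i a_i \varepsilon_i| \geq t) \leq 2\exp(-t^2/(2\sum_i a_i^2)); this explicit constant comes from the variance-proxy argument above, not from the unspecified c of the theorem, and the coefficients, sample size, and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
a = rng.uniform(0.5, 1.5, 50)                    # fixed coefficients a_i
eps = rng.choice([-1.0, 1.0], (10**5, a.size))   # Rademacher signs: mean zero, vp^2 = 1
s = eps @ a                                      # S = sum_i a_i * eps_i, so ||S||_vp^2 <= sum a_i^2

for t in [5.0, 10.0, 15.0]:
    print(t, np.mean(np.abs(s) >= t), 2 * np.exp(-t**2 / (2 * np.sum(a**2))))
```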

Hanson-Wright inequality

The Hanson-Wright inequality states that if a random vector X is subgaussian in a certain sense, then any quadratic form A of this vector, X^T A X, is also subgaussian/subexponential. Further, the upper bound on the tail of X^T A X is uniform.

A weak version of the following theorem was proved in (Hanson, Wright, 1971).[6] There are many extensions and variants. Much like the central limit theorem, the Hanson-Wright inequality is more a cluster of theorems with the same purpose than a single theorem. The purpose is to take a subgaussian vector and uniformly bound its quadratic forms.

Theorem.[7] [8] There exists a constant c, such that:

Let n be a positive integer. Let X_1, \ldots, X_n be independent random variables, such that each satisfies \operatorname{E}[X_i] = 0. Combine them into a random vector X = (X_1, \ldots, X_n). For any n \times n matrix A, we have

\operatorname{P}\left(|X^T A X - \operatorname{E}[X^T A X]| > t\right) \leq \max\left(2 e^{-\frac{c t^2}{K^4\|A\|_F^2}},\ 2 e^{-\frac{c t}{K^2\|A\|}}\right) = 2\exp\left[-c\min\left(\frac{t^2}{K^4\|A\|_F^2}, \frac{t}{K^2\|A\|}\right)\right],

where K = \max_i \|X_i\|_{\psi_2}, \|A\|_F = \sqrt{\sum_{ij} A_{ij}^2} is the Frobenius norm of the matrix, and \|A\| = \max_{\|x\|_2 = 1} \|Ax\|_2 is the operator norm of the matrix.

In words, the quadratic form X^T A X has its tail uniformly bounded by an exponential, or a gaussian, whichever is larger.

In the statement of the theorem, the constant c is an "absolute constant", meaning that it has no dependence on n, X_1, \ldots, X_n, A. It is a mathematical constant much like \pi and e.
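The following Monte Carlo sketch illustrates the shape of the bound for Rademacher coordinates and one fixed symmetric matrix. The matrix, the sample size, the seed, and in particular the plugged-in value c = 1/4 are arbitrary illustrative choices, since the theorem only guarantees the existence of some absolute constant c.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 30
g = rng.standard_normal((n, n))
a = (g + g.T) / 2                               # a fixed symmetric matrix
x = rng.choice([-1.0, 1.0], (10**5, n))         # rows: independent Rademacher vectors, E[X_i] = 0

q = np.sum((x @ a) * x, axis=1)                 # quadratic forms X^T A X
dev = np.abs(q - np.trace(a))                   # E[X^T A X] = trace(A) here

fro, op = np.linalg.norm(a, 'fro'), np.linalg.norm(a, 2)
K = 1 / np.sqrt(np.log(2))                      # psi_2 norm of a Rademacher variable
c = 0.25                                        # illustrative value only; the theorem's c is unspecified
for t in [40.0, 60.0, 80.0, 100.0]:
    rate = min(t**2 / (K**4 * fro**2), t / (K**2 * op))
    print(t, np.mean(dev >= t), 2 * np.exp(-c * rate))   # empirical tail vs. mixed bound
```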

Consequences

Theorem (subgaussian concentration). There exists a constant c, such that:

Let n, m be positive integers. Let X_1, \ldots, X_n be independent random variables, such that each satisfies \operatorname{E}[X_i] = 0 and \operatorname{E}[X_i^2] = 1, and is subgaussian with \|X_i\|_{\psi_2} \leq K. Combine them into a random vector X = (X_1, \ldots, X_n). For any m \times n matrix A, we have

\operatorname{P}\left(\left| \|AX\|_2 - \|A\|_F \right| > t\right) \leq 2 e^{-\frac{c t^2}{K^4\|A\|^2}}.

In words, the random vector AX is concentrated on a spherical shell of radius \|A\|_F, such that \|AX\|_2 - \|A\|_F is subgaussian, with subgaussian norm \leq \sqrt{3/c}\,\|A\|K^2.
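A simulation illustrating the shell concentration (the matrix shape, its scaling, the sample size, and the seed are arbitrary; this checks the qualitative statement, not the constant):

```python
import numpy as np

rng = np.random.default_rng(8)
m, n = 50, 200
a = rng.standard_normal((m, n)) / np.sqrt(n)    # a fixed m x n matrix
x = rng.choice([-1.0, 1.0], (10**4, n))         # rows: mean-zero, unit-variance subgaussian coordinates

norms = np.linalg.norm(x @ a.T, axis=1)         # ||AX||_2 for each sampled X
print(np.linalg.norm(a, 'fro'), norms.mean(), norms.std(), np.linalg.norm(a, 2))
# ||A||_F ~ 7.07; the norms cluster around it with fluctuations on the order of ||A|| ~ 1.5
```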

References

  1. Vershynin, R. (2018). High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge: Cambridge University Press.
  2. Bobkov, S. G.; Chistyakov, G. P.; Götze, F. (2023-08-03). "Strictly subgaussian probability distributions". arXiv:2308.01749 [math.PR].
  3. Bobkov, S. G.; Chistyakov, G. P.; Götze, F. (2023-08-03). "Strictly subgaussian probability distributions". arXiv:2308.01749 [math.PR].
  4. Kamath, Gautam (2015). "Bounds on the expectation of the maximum of samples from a Gaussian".
  5. "MIT 18.S997 Spring 2015 High-Dimensional Statistics, Chapter 1. Sub-Gaussian Random Variables". MIT OpenCourseWare. Retrieved 2024-04-03.
  6. Hanson, D. L.; Wright, F. T. (1971). "A Bound on Tail Probabilities for Quadratic Forms in Independent Random Variables". The Annals of Mathematical Statistics. 42 (3): 1079–1083. doi:10.1214/aoms/1177693335.
  7. Rudelson, Mark; Vershynin, Roman (January 2013). "Hanson-Wright inequality and sub-gaussian concentration". Electronic Communications in Probability. 18: 1–9. doi:10.1214/ECP.v18-2865. arXiv:1306.2872.
  8. Vershynin, Roman (2018). "6. Quadratic Forms, Symmetrization, and Contraction". High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge: Cambridge University Press. pp. 127–146. doi:10.1017/9781108231596.009. ISBN 978-1-108-41519-4.