McDiarmid's inequality explained

In probability theory and theoretical computer science, McDiarmid's inequality (named after Colin McDiarmid [1]) is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a bounded differences property, meaning that replacing a single argument to the function while leaving all other arguments unchanged cannot cause too large a change in the value of the function.

Statement

A function $f \colon \mathcal{X}_1 \times \mathcal{X}_2 \times \cdots \times \mathcal{X}_n \to \mathbb{R}$ satisfies the bounded differences property if substituting the value of the $i$th coordinate $x_i$ changes the value of $f$ by at most $c_i$. More formally, $f$ satisfies the bounded differences property if there are constants $c_1, c_2, \ldots, c_n$ such that for all $i \in [n]$ and all $x_1 \in \mathcal{X}_1, x_2 \in \mathcal{X}_2, \ldots, x_n \in \mathcal{X}_n$,

\[
\sup_{x_i' \in \mathcal{X}_i} \left| f(x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_n) - f(x_1, \ldots, x_{i-1}, x_i', x_{i+1}, \ldots, x_n) \right| \le c_i.
\]
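
With this property in hand, the inequality itself (stated in McDiarmid [1], and exactly what the Proof section below establishes) reads: let $X_1, X_2, \ldots, X_n$ be independent random variables with $X_i \in \mathcal{X}_i$ for all $i$, and let $f$ satisfy the bounded differences property with bounds $c_1, \ldots, c_n$. Then, for any $\varepsilon > 0$,

\[
\mathrm{P}\left( f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \ge \varepsilon \right) \le \exp\left( -\frac{2\varepsilon^2}{\sum_{i=1}^n c_i^2} \right),
\]

and the two-sided version, with an extra factor of $2$ on the right-hand side, follows from a union bound. For example, the empirical mean $f(x_1, \ldots, x_n) = \frac{1}{n} \sum_{i=1}^n x_i$ of variables taking values in $[0, 1]$ satisfies the property with $c_i = 1/n$, giving the tail bound $\exp(-2n\varepsilon^2)$.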

Extensions

Unbalanced distributions

A stronger bound may be given when the arguments to the function are sampled from unbalanced distributions, such that resampling a single argument rarely causes a large change to the function value.

This may be used, for example, to characterize the value of graph functions evaluated on sparse random graphs and hypergraphs, since in a sparse random graph it is much more likely for any particular edge to be absent than present.
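
The following minimal Python sketch illustrates the phenomenon (the graph size, edge probability, and the triangle-count statistic are illustrative assumptions, not taken from the article): it resamples a single edge indicator of a sparse Erdős–Rényi graph and records how often the triangle count changes at all.

```python
import random
from itertools import combinations

def sample_graph(n, p, rng):
    """Sample an Erdos-Renyi graph G(n, p) as a dict of adjacency sets."""
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def triangle_count(adj):
    """Count triangles by checking every vertex triple."""
    return sum(1 for a, b, c in combinations(range(len(adj)), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

rng = random.Random(0)
n, p, trials = 25, 0.05, 1000          # sparse regime: p is small
changed = 0
for _ in range(trials):
    adj = sample_graph(n, p, rng)
    before = triangle_count(adj)
    u, v = rng.sample(range(n), 2)      # pick one edge slot ...
    if rng.random() < p:                # ... and resample its indicator
        adj[u].add(v); adj[v].add(u)
    else:
        adj[u].discard(v); adj[v].discard(u)
    changed += (triangle_count(adj) != before)

print(f"resampling one edge changed the triangle count in {changed}/{trials} trials")
```

In the sparse regime the resampled indicator almost always agrees with the old one, and even a flipped edge rarely completes a triangle; this is the imbalance that the stronger bound exploits.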

Differences bounded with high probability

McDiarmid's inequality may be extended to the case where the function being analyzed does not strictly satisfy the bounded differences property, but large differences remain very rare.

There exist stronger refinements to this analysis in some distribution-dependent scenarios,[2] such as those that arise in learning theory.

Sub-Gaussian and sub-exponential norms

Let the $k$th centered conditional version of a function $f$ be

\[
f_k(X)(x) := f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X'_k} f(x_1, \ldots, x_{k-1}, X'_k, x_{k+1}, \ldots, x_n),
\]

so that $f_k(X)$ is a random variable depending on random values of $x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n$.
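
To make the definition concrete, here is a small Monte Carlo sketch (the toy function $f$, the uniform distribution on $[0,1]$, and the sample sizes are illustrative assumptions): it estimates $f_k(X)(x)$ and checks that it is centered, i.e. that it averages to zero over the randomness of $X_k$.

```python
import random

rng = random.Random(0)

def f(x1, x2, x3):
    """Toy function of three arguments (illustrative choice)."""
    return max(x1, x2, x3)

def f_k(x, k, X_k, inner=2000):
    """kth centered conditional version: substitute the random value X_k
    into coordinate k, then subtract the mean over an independent copy X'_k."""
    args = list(x)
    args[k] = X_k
    value = f(*args)
    total = 0.0
    for _ in range(inner):
        args[k] = rng.random()          # X'_k ~ Uniform[0, 1]
        total += f(*args)
    return value - total / inner

# By construction, f_k(X)(x) averages to zero over the randomness of X_k.
x, k = (0.3, 0.7, 0.5), 1
est = sum(f_k(x, k, rng.random()) for _ in range(2000)) / 2000
print(f"Monte Carlo estimate of the mean of f_k over X_k: {est:+.4f}")
```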

Bennett and Bernstein forms

Refinements to McDiarmid's inequality in the style of Bennett's inequality and Bernstein inequalities are made possible by defining a variance term for each function argument. Let

\begin{align}
B &:= \max_k \sup_{x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n} \left| f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X_k} f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) \right|, \\
V_k &:= \sup_{x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n} \mathbb{E}_{X_k} \left( f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X_k} f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) \right)^2, \\
\tilde{\sigma}^2 &:= \sum_{k=1}^n V_k.
\end{align}
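
As a worked sanity check (the Bernoulli sum is an illustrative choice, not an example from the article): take $f(x_1, \ldots, x_n) = \sum_{k=1}^n x_k$ with each $X_k$ an independent Bernoulli$(p)$ variable. Then $f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) - \mathbb{E}_{X_k} f(x_1, \ldots, x_{k-1}, X_k, x_{k+1}, \ldots, x_n) = X_k - p$ regardless of the other coordinates, so

\[
B = \max(p, 1 - p), \qquad V_k = \mathbb{E}\left[ (X_k - p)^2 \right] = p(1 - p), \qquad \tilde{\sigma}^2 = n p (1 - p),
\]

recovering exactly the variance proxy that appears in Bernstein's inequality for a sum of independent Bernoulli variables.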

Proof

The following proof of McDiarmid's inequality constructs the Doob martingale tracking the conditional expected value of the function as more and more of its arguments are sampled and conditioned on, and then applies a martingale concentration inequality (Azuma's inequality). An alternate argument avoiding the use of martingales also exists, taking advantage of the independence of the function arguments to provide a Chernoff-bound-like argument.

For better readability, we will introduce a notational shorthand:

$z_i^j$ will denote $z_i, \ldots, z_j$ for any $z \in \mathcal{X}^n$ and integers $1 \le i \le j \le n$, so that, for example,

\[
f(X_1^{i-1}, y, x_{i+1}^n) := f(X_1, \ldots, X_{i-1}, y, x_{i+1}, \ldots, x_n).
\]

Pick any $x_1', x_2', \ldots, x_n'$. Then, for any $x_1, x_2, \ldots, x_n$, by the triangle inequality,

\begin{align}
&|f(x_1^n) - f({x'}_1^n)| \\
\le{} &|f(x_1^n) - f({x'}_1^{n-1}, x_n)| + c_n \\
\le{} &|f(x_1^n) - f({x'}_1^{n-2}, x_{n-1}^n)| + c_{n-1} + c_n \\
\le{} &\cdots \\
\le{} &\sum_{i=1}^n c_i,
\end{align}

and thus $f$ is bounded.
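
For instance, with $n = 2$ the chain is simply

\[
|f(x_1, x_2) - f(x_1', x_2')| \le |f(x_1, x_2) - f(x_1', x_2)| + c_2 \le c_1 + c_2,
\]

since the arguments of the remaining absolute difference differ only in their first coordinate.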

Since $f$ is bounded, define the Doob martingale $\{Z_i\}$ (each $Z_i$ being a random variable depending on the random values of $X_1, \ldots, X_i$) as

\[
Z_i := \mathbb{E}[f(X_1^n) \mid X_1^i]
\]

for all $i \ge 1$ and $Z_0 := \mathbb{E}[f(X_1^n)]$, so that $Z_n = f(X_1^n)$.
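
The construction can be checked numerically on a small case. In the following sketch (the choice of $f$ as the empirical mean of fair coin flips, and the closed form used for the conditional expectation, are illustrative assumptions, not part of the proof), each unrevealed flip contributes its expectation $1/2$, so $Z_i = (x_1 + \cdots + x_i + (n - i)/2)/n$.

```python
import random

rng = random.Random(0)
n = 8

def f(xs):
    """Empirical mean of n fair coin flips in {0, 1}; here c_i = 1/n."""
    return sum(xs) / len(xs)

def doob_martingale(xs):
    """Z_i = E[f(X_1^n) | X_1^i]: the i revealed flips are fixed, and each
    of the n - i unrevealed flips contributes its expectation 1/2."""
    return [(sum(xs[:i]) + (n - i) * 0.5) / n for i in range(n + 1)]

xs = [rng.randrange(2) for _ in range(n)]
zs = doob_martingale(xs)
print("Z_0 =", zs[0], "(equals E[f] = 1/2)")
print("Z_n =", zs[-1], "(equals f(x) =", f(xs), ")")
print("max |Z_i - Z_{i-1}| =", max(abs(a - b) for a, b in zip(zs[1:], zs)),
      "<= c_i = 1/n =", 1 / n)
```

The printed maximum increment is $1/(2n)$, comfortably within the bound $c_i = 1/n$ used in the next step.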

Now define the random variables, for each $i$,

\begin{align}
U_i &:= \sup_{x \in \mathcal{X}_i} \mathbb{E}[f(X_1^{i-1}, x, X_{i+1}^n) \mid X_1^{i-1}, X_i = x] - \mathbb{E}[f(X_1^{i-1}, X_i^n) \mid X_1^{i-1}], \\
L_i &:= \inf_{x \in \mathcal{X}_i} \mathbb{E}[f(X_1^{i-1}, x, X_{i+1}^n) \mid X_1^{i-1}, X_i = x] - \mathbb{E}[f(X_1^{i-1}, X_i^n) \mid X_1^{i-1}].
\end{align}

Since $X_i, \ldots, X_n$ are independent of each other, conditioning on $X_i = x$ does not affect the probabilities of the other variables, so these are equal to the expressions

\begin{align}
U_i &= \sup_{x \in \mathcal{X}_i} \mathbb{E}[f(X_1^{i-1}, x, X_{i+1}^n) - f(X_1^{i-1}, X_i^n) \mid X_1^{i-1}], \\
L_i &= \inf_{x \in \mathcal{X}_i} \mathbb{E}[f(X_1^{i-1}, x, X_{i+1}^n) - f(X_1^{i-1}, X_i^n) \mid X_1^{i-1}].
\end{align}

Note that $L_i \le Z_i - Z_{i-1} \le U_i$. In addition,

\begin{align}
U_i - L_i &= \sup_{u \in \mathcal{X}_i,\, \ell \in \mathcal{X}_i} \mathbb{E}[f(X_1^{i-1}, u, X_{i+1}^n) \mid X_1^{i-1}] - \mathbb{E}[f(X_1^{i-1}, \ell, X_{i+1}^n) \mid X_1^{i-1}] \\
&= \sup_{u \in \mathcal{X}_i,\, \ell \in \mathcal{X}_i} \mathbb{E}[f(X_1^{i-1}, u, X_{i+1}^n) - f(X_1^{i-1}, \ell, X_{i+1}^n) \mid X_1^{i-1}] \\
&\le \sup_{x_u \in \mathcal{X}_i,\, x_\ell \in \mathcal{X}_i} \mathbb{E}[c_i \mid X_1^{i-1}] \\
&\le c_i.
\end{align}

Then, applying the general form of Azuma's inequality to $\left\{ Z_i \right\}$, we have

\[
\mathrm{P}(f(X_1, \ldots, X_n) - \mathbb{E}[f(X_1, \ldots, X_n)] \ge \varepsilon) = \mathrm{P}(Z_n - Z_0 \ge \varepsilon) \le \exp\left( \frac{-2\varepsilon^2}{\sum_{i=1}^n c_i^2} \right).
\]

The one-sided bound in the other direction is obtained by applying Azuma's inequality to $\left\{ -Z_i \right\}$ and the two-sided bound follows from a union bound. $\square$
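
To close the loop, here is a minimal Monte Carlo sanity check (the empirical mean of fair Bernoulli variables, together with the values of $n$, $\varepsilon$, and the number of trials, are illustrative assumptions): with $c_i = 1/n$, the bound just proved specializes to $\exp(-2n\varepsilon^2)$, and the observed tail frequency should fall below it.

```python
import math
import random

rng = random.Random(0)
n, eps, trials = 100, 0.1, 20_000

def f(xs):
    """Empirical mean; for {0, 1}-valued arguments, c_i = 1/n."""
    return sum(xs) / len(xs)

mean = 0.5                       # E[f] for fair Bernoulli inputs
hits = sum(f([rng.randrange(2) for _ in range(n)]) - mean >= eps
           for _ in range(trials))
bound = math.exp(-2 * eps ** 2 / (n * (1 / n) ** 2))   # exp(-2 n eps^2)
print(f"empirical P(f - E[f] >= {eps}): {hits / trials:.4f}")
print(f"McDiarmid upper bound:        {bound:.4f}")
```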

Notes and References

  1. McDiarmid, Colin (1989). "On the method of bounded differences". Surveys in Combinatorics, 1989: Invited Papers at the Twelfth British Combinatorial Conference. pp. 148–188. doi:10.1017/CBO9781107359949.008. ISBN 978-0-521-37823-9.
  2. Wu, Xinxing; Zhang, Junping (April 2018). "Distribution-dependent concentration inequalities for tighter generalization bounds". Science China Information Sciences. 61 (4): 048105:1–048105:3. arXiv:1607.05506. doi:10.1007/s11432-017-9225-2. S2CID 255199895.