In computational complexity theory and cryptography, the averaging argument is a standard argument for proving theorems. It usually allows us to convert probabilistic polynomial-time algorithms into non-uniform polynomial-size circuits.
Example: If every person likes at least 1/3 of the books in a library, then the library has a book that at least 1/3 of the people like.
Proof: Suppose there are N people and B books. Each person likes at least B/3 of the books. Let each person leave a mark on every book they like; then there are at least M = NB/3 marks in total. The averaging argument claims that there exists a book with at least N/3 marks on it. Assume, to the contrary, that no such book exists: then every book has fewer than N/3 marks. Since there are B books, the total number of marks is then fewer than B · (N/3) = NB/3, contradicting the fact that there are at least M marks. $\blacksquare$
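To see the counting at work, here is a minimal Python sketch (not from the article; names such as find_popular_book are illustrative) that builds a random instance satisfying the hypothesis and locates the popular book the argument guarantees:

```python
import random

def find_popular_book(likes, n_people, n_books):
    """likes[p] is the set of books person p likes, each of size >= n_books/3.
    Returns a book liked by at least n_people/3 people; the averaging
    argument guarantees such a book exists."""
    marks = [0] * n_books                 # marks[b] = number of people who like book b
    for p in range(n_people):
        for b in likes[p]:
            marks[b] += 1
    best = max(range(n_books), key=lambda b: marks[b])
    # Total marks >= n_people * n_books / 3, so the maximum is >= n_people / 3.
    assert marks[best] >= n_people / 3
    return best

# Random instance satisfying the hypothesis: everyone likes more than B/3 books.
N, B = 30, 12
likes = [set(random.sample(range(B), B // 3 + 1)) for _ in range(N)]
print("popular book:", find_popular_book(likes, N, B))
```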
Formally, let X and Y be sets, let p be a predicate on X × Y and let f be a real number in the interval [0, 1]. If for each x in X, at least f |Y| of the elements y in Y satisfy p(x, y), then there exists a y in Y such that at least f |X| elements x in X satisfy p(x, y).
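This statement transcribes directly into code; the following Python helper (hypothetical, for illustration only) finds the promised y by taking the one with the most satisfying x's:

```python
def averaging_witness(X, Y, p, f):
    """Given finite collections X and Y, a predicate p on X x Y, and f in [0, 1]
    such that every x satisfies p(x, y) for at least f*|Y| of the y's,
    return a y satisfied by at least f*|X| of the x's."""
    assert all(sum(p(x, y) for y in Y) >= f * len(Y) for x in X)  # hypothesis
    # The y with the highest count is at least the average, hence >= f*|X|.
    y0 = max(Y, key=lambda y: sum(p(x, y) for x in X))
    assert sum(p(x, y0) for x in X) >= f * len(X)
    return y0
```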
There is an equivalent formulation, stated in the terminology of probability theory.[1]
Let f be some function. The averaging argument is the following claim: if we have a circuit C such that C(x, y) = f(x) with probability at least $\rho$, where x is chosen at random and y is chosen independently from some distribution Y over $\{0,1\}^m$ (which might not even be efficiently sampleable), then there exists a single string $y_0 \in \{0,1\}^m$ such that

$$\Pr_x[C(x, y_0) = f(x)] \ge \rho.$$
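As a toy illustration (everything here is hypothetical: C is an ordinary Python function standing in for the circuit, and both distributions are taken uniform for simplicity), such a $y_0$ can be found by brute force:

```python
from itertools import product

def fix_randomness(C, f, xs, m, rho):
    """Given C(x, y) agreeing with f(x) with probability >= rho over uniform
    x in xs and uniform y in {0,1}^m, return a single y0 with
    Pr_x[C(x, y0) = f(x)] >= rho.  Exponential in m; a sketch, not a tool."""
    def p(y):  # p_y = Pr_x[C(x, y) = f(x)]
        return sum(C(x, y) == f(x) for x in xs) / len(xs)
    ys = list(product((0, 1), repeat=m))
    assert sum(p(y) for y in ys) / len(ys) >= rho  # hypothesis: E_y[p_y] >= rho
    y0 = max(ys, key=p)                            # best fixed random string
    assert p(y0) >= rho                            # guaranteed by averaging
    return y0
```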
Indeed, for every y define $p_y$ to be $\Pr_x[C(x, y) = f(x)]$; then

$$\Pr_{x,y}[C(x, y) = f(x)] = \operatorname{E}_y[p_y],$$
and this reduces to the claim that for every random variable Z, if $\operatorname{E}[Z] \ge \rho$ then $\Pr[Z \ge \rho] > 0$. This holds because $\operatorname{E}[Z]$ is a weighted average of the values of Z, and if the average of some values is at least $\rho$, then at least one of the values must be at least $\rho$.
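Spelled out in the notation above (assuming Y is finite, so the expectation is a finite weighted average), the chain of inequalities is just

$$\rho \le \Pr_{x,y}[C(x, y) = f(x)] = \operatorname{E}_y[p_y] = \sum_y \Pr[y]\, p_y \le \max_y p_y,$$

so any $y_0$ attaining the maximum satisfies $\Pr_x[C(x, y_0) = f(x)] = p_{y_0} \ge \rho$.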
This argument has wide use in complexity theory (e.g., proving $\mathsf{BPP} \subsetneq \mathsf{P/poly}$) and in cryptography (e.g., proving that indistinguishable encryption implies semantic security).