In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.[1][2]
Let 0 < λ < 1 and let f, g, h : Rn → [0, +∞) be non-negative real-valued measurable functions defined on n-dimensional Euclidean space Rn. Suppose that these functions satisfy
h\left( (1-\lambda) x + \lambda y \right) \geq f(x)^{1-\lambda}\, g(y)^{\lambda}
for all x and y in Rn. Then
\|h\|_1 := \int_{\mathbb{R}^n} h(x)\,dx \geq \left( \int_{\mathbb{R}^n} f(x)\,dx \right)^{1-\lambda} \left( \int_{\mathbb{R}^n} g(x)\,dx \right)^{\lambda} =: \|f\|_1^{1-\lambda}\, \|g\|_1^{\lambda}.
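As a simple illustration (a standard special case, spelled out here for concreteness), take n = 1, f = 1[0,a], g = 1[0,b] and h = 1[0,(1−λ)a+λb] for a, b > 0. If x ∈ [0, a] and y ∈ [0, b], then (1 − λ)x + λy ∈ [0, (1 − λ)a + λb], so the hypothesis holds, and the conclusion reduces to the weighted AM–GM inequality
(1-\lambda)a + \lambda b \geq a^{1-\lambda}\, b^{\lambda}.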
Recall that the essential supremum of a measurable function f : Rn → R is defined by
\operatorname*{ess\,sup}_{x \in \mathbb{R}^n} f(x) = \inf \left\{ t \in [-\infty, +\infty] \mid f(x) \leq t \text{ for almost all } x \in \mathbb{R}^n \right\}.
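For example, if f = 1{0} is the indicator function of a single point, then f(x) = 0 for almost all x, so its essential supremum is 0 even though its supremum is 1:
\operatorname*{ess\,sup}_{x \in \mathbb{R}} 1_{\{0\}}(x) = 0 < 1 = \sup_{x \in \mathbb{R}} 1_{\{0\}}(x).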
This notation allows the following essential form of the Prékopa–Leindler inequality: let 0 < λ < 1 and let f, g ∈ L1(Rn; [0, +∞)) be non-negative absolutely integrable functions. Let
s(x) = \operatorname*{ess\,sup}_{y \in \mathbb{R}^n} f\left( \frac{x - y}{1 - \lambda} \right)^{1-\lambda} g\left( \frac{y}{\lambda} \right)^{\lambda}.
Then s is measurable and
\|s\|_1 \geq \|f\|_1^{1-\lambda}\, \|g\|_1^{\lambda}.
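The essential form implies the usual form (a standard observation, sketched here): substituting y = λy′ and x − y = (1 − λ)x′ shows that s(x) is the essential supremum of f(x′)^{1−λ} g(y′)^λ over all decompositions x = (1 − λ)x′ + λy′. Hence any h satisfying the pointwise hypothesis dominates s, and
\|h\|_1 \geq \|s\|_1 \geq \|f\|_1^{1-\lambda}\, \|g\|_1^{\lambda}.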
The essential supremum form was given by Herm Brascamp and Elliott Lieb.[3] Its use can change the left side of the inequality. For example, a function g that takes the value 1 at exactly one point will not usually yield a zero left side in the "non-essential sup" form but it will always yield a zero left side in the "essential sup" form.
It can be shown that the usual Prékopa–Leindler inequality implies the Brunn–Minkowski inequality in the following form: if 0 < λ < 1 and A and B are bounded, measurable subsets of Rn such that the Minkowski sum (1 - λ)A + λB is also measurable, then
\mu\left( (1-\lambda) A + \lambda B \right) \geq \mu(A)^{1-\lambda}\, \mu(B)^{\lambda},
where μ denotes n-dimensional Lebesgue measure. Hence, the Prékopa–Leindler inequality can also be used[4] to prove the Brunn–Minkowski inequality in its more familiar form: if 0 < λ < 1 and A and B are non-empty, bounded, measurable subsets of Rn such that (1 - λ)A + λB is also measurable, then
\mu\left( (1-\lambda) A + \lambda B \right)^{1/n} \geq (1-\lambda)\, \mu(A)^{1/n} + \lambda\, \mu(B)^{1/n}.
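A sketch of the first implication, with the standard choice of functions made explicit: apply the Prékopa–Leindler inequality to f = 1A, g = 1B and h = 1(1−λ)A+λB. The hypothesis holds because x ∈ A and y ∈ B imply (1 − λ)x + λy ∈ (1 − λ)A + λB, and the conclusion is exactly the multiplicative form above. The additive form then follows by applying the multiplicative form to the normalized sets A′ = μ(A)−1/n A and B′ = μ(B)−1/n B (each of measure 1) with the modified parameter
\lambda' = \frac{\lambda\, \mu(B)^{1/n}}{(1-\lambda)\, \mu(A)^{1/n} + \lambda\, \mu(B)^{1/n}},
for which (1 − λ′)A′ + λ′B′ = ((1 − λ)A + λB)/D with D = (1 − λ)μ(A)^{1/n} + λμ(B)^{1/n}; since the rescaled set has measure at least 1, μ((1 − λ)A + λB) ≥ D^n, which is the additive inequality.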
The Prékopa–Leindler inequality is useful in the theory of log-concave distributions, as it can be used to show that log-concavity is preserved by marginalization and independent summation of log-concave distributed random variables. Indeed, if X and Y are independent random variables with log-concave densities f and g, then the density of X + Y is the convolution f ⋆ g, so these two preservation results together show that the convolution of two log-concave functions is log-concave.
Suppose that H(x, y) is a log-concave distribution for (x, y) ∈ Rm × Rn, so that by definition we have
H\left( (1-\lambda)(x_1, y_1) + \lambda (x_2, y_2) \right) \geq H(x_1, y_1)^{1-\lambda}\, H(x_2, y_2)^{\lambda}
for all (x1, y1), (x2, y2) ∈ Rm × Rn and 0 < λ < 1,
and let M(y) denote the marginal distribution obtained by integrating over x:
M(y) = \int_{\mathbb{R}^m} H(x, y)\,dx.
Let y1, y2 ∈ Rn and 0 < λ < 1 be given. Then the log-concavity of H shows that h(x) = H(x, (1 − λ)y1 + λy2), f(x) = H(x, y1) and g(x) = H(x, y2) satisfy the hypothesis of the Prékopa–Leindler inequality, so the inequality applies. It can be written in terms of M as
M\left( (1-\lambda) y_1 + \lambda y_2 \right) \geq M(y_1)^{1-\lambda}\, M(y_2)^{\lambda},
which is the definition of log-concavity for M.
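For completeness, the verification that these choices satisfy the hypothesis is a direct consequence of the log-concavity of H: for any x1, x2 ∈ Rm,
h\left( (1-\lambda) x_1 + \lambda x_2 \right) = H\left( (1-\lambda)(x_1, y_1) + \lambda (x_2, y_2) \right) \geq H(x_1, y_1)^{1-\lambda}\, H(x_2, y_2)^{\lambda} = f(x_1)^{1-\lambda}\, g(x_2)^{\lambda}.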
To see how this implies the preservation of log-concavity by independent sums, suppose that X and Y are independent random variables with log-concave distributions. Since the product of two log-concave functions is log-concave, the joint distribution of (X, Y) is also log-concave. Log-concavity is preserved by affine changes of coordinates, so the distribution of (X + Y, X − Y) is log-concave as well. Since the distribution of X + Y is a marginal of the joint distribution of (X + Y, X − Y), we conclude that X + Y has a log-concave distribution.
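Written out explicitly (a routine change of variables), if f and g are the densities of X and Y, then the density of (X + Y, X − Y) at (u, v) is 2−n f((u + v)/2) g((u − v)/2), and integrating out v recovers the convolution:
\int_{\mathbb{R}^n} 2^{-n} f\left( \frac{u+v}{2} \right) g\left( \frac{u-v}{2} \right) dv = \int_{\mathbb{R}^n} f(x)\, g(u - x)\, dx = (f \star g)(u).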
The Prékopa–Leindler inequality can be used to prove results about concentration of measure.
Theorem. Let A ⊆ Rn be a measurable set and, for ε > 0, set Aε = {x ∈ Rn : d(x, A) < ε}. Let γ(x) = (2π)^{−n/2} e^{−‖x‖²/2} denote the standard Gaussian pdf, and μ its associated measure. Then
\mu(A_\varepsilon) \geq 1 - \frac{e^{-\varepsilon^2/4}}{\mu(A)}.
The proof of this theorem goes by way of the following lemma:
Lemma. In the notation of the theorem,
\int_{\mathbb{R}^n} e^{d(x,A)^2/4}\, d\mu(x) \leq \frac{1}{\mu(A)}.
This lemma can be proven from Prékopa–Leindler by taking λ = 1/2, f(x) = e^{d(x,A)²/4} γ(x), g(x) = 1A(x) γ(x) and h(x) = γ(x). To verify the hypothesis of the inequality, h((x + y)/2)² ≥ f(x) g(y), note that we only need to consider y ∈ A, in which case d(x, A) ≤ ‖x − y‖. This allows us to calculate:
(2\pi)^n f(x)\, g(y) = \exp\left( \frac{d(x,A)^2}{4} - \frac{\|x\|^2}{2} - \frac{\|y\|^2}{2} \right) \leq \exp\left( \frac{\|x-y\|^2}{4} - \frac{\|x\|^2}{2} - \frac{\|y\|^2}{2} \right) = \exp\left( -\left\| \frac{x+y}{2} \right\|^2 \right) = (2\pi)^n\, h\left( \frac{x+y}{2} \right)^2.
Since ‖h‖1 = 1, the PL-inequality immediately gives ‖f‖1 ‖g‖1 ≤ 1; as ‖f‖1 = ∫Rn e^{d(x,A)²/4} dμ(x) and ‖g‖1 = μ(A), this is exactly the lemma.
To conclude the concentration inequality from the lemma, note that on Rn ∖ Aε we have d(x, A) ≥ ε, so we have
e^{\varepsilon^2/4}\, \mu\left( \mathbb{R}^n \setminus A_\varepsilon \right) \leq \int_{\mathbb{R}^n} e^{d(x,A)^2/4}\, d\mu(x).
Applying the lemma and rearranging proves the result.
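Explicitly, the lemma bounds the integral on the right by 1/μ(A), so
\mu\left( \mathbb{R}^n \setminus A_\varepsilon \right) \leq \frac{e^{-\varepsilon^2/4}}{\mu(A)}, \qquad \text{that is,} \qquad \mu(A_\varepsilon) \geq 1 - \frac{e^{-\varepsilon^2/4}}{\mu(A)}.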