In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations.
In 1957, Hirschman considered a function f and its Fourier transform g such that
g(y) \approx \int_{-\infty}^{\infty} \exp(-2\pi i x y)\, f(x)\, dx~, \qquad
f(x) \approx \int_{-\infty}^{\infty} \exp(2\pi i x y)\, g(y)\, dy~,
normalized so that
\int_{-\infty}^{\infty} |f(x)|^2\, dx = \int_{-\infty}^{\infty} |g(y)|^2\, dy = 1~.
He showed that for any such functions the sum of the Shannon entropies is non-negative:
H(|f|^2) + H(|g|^2) \equiv
-\int_{-\infty}^{\infty} |f(x)|^2 \log |f(x)|^2\, dx
-\int_{-\infty}^{\infty} |g(y)|^2 \log |g(y)|^2\, dy \ge 0~.
A tighter bound,
H(|f|^2) + H(|g|^2) \ge \log\frac{e}{2}~,
was conjectured by Hirschman and Everett,[1] proven in 1975 by W. Beckner, and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski. The equality holds in the case of Gaussian distributions.[2] Note, however, that the above entropic uncertainty function is distinctly different from the quantum von Neumann entropy represented in phase space.
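As a quick numerical illustration (a sketch, not part of the original treatment; the test amplitudes, grid, and quadrature scheme below are arbitrary choices), one can approximate g by direct quadrature and compare H(|f|²) + H(|g|²) with log(e/2); a Gaussian amplitude saturates the bound, while other amplitudes exceed it:

```python
import numpy as np

# Sketch: spot-check H(|f|^2) + H(|g|^2) >= log(e/2) for the convention
# g(y) = \int exp(-2*pi*i*x*y) f(x) dx, using Riemann sums on a uniform grid.
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]
kernel = np.exp(-2j * np.pi * np.outer(x, x))   # quadrature Fourier kernel

def entropy(p):
    """Differential Shannon entropy -sum p log p dx, with 0 log 0 := 0."""
    return float(np.sum(np.where(p > 0, -p * np.log(p), 0.0)) * dx)

def ft(f):
    """Continuous Fourier transform approximated by quadrature (same grid for y)."""
    return kernel @ f * dx

# L^2-normalized test amplitudes: a Gaussian (saturates the bound) and a triangle.
f_gauss = 2**0.25 * np.exp(-np.pi * x**2)
f_tri = np.maximum(1.0 - np.abs(x), 0.0)
f_tri /= np.sqrt(np.sum(f_tri**2) * dx)

for name, f in [("Gaussian", f_gauss), ("triangle", f_tri)]:
    total = entropy(f**2) + entropy(np.abs(ft(f))**2)
    print(f"{name}: H(|f|^2)+H(|g|^2) = {total:.4f}  (bound log(e/2) = {np.log(np.e/2):.4f})")
```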
The proof of this tight inequality depends on the so-called (q, p)-norm of the Fourier transformation. (Establishing this norm is the most difficult part of the proof.)
From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, H_α(|f|²) + H_β(|g|²), where 1/α + 1/β = 2, which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited.
The (q, p)-norm of the Fourier transform is defined to be[3]
\|\mathcal F\|_{q,p} = \sup_{f \in L^p(\mathbb R)} \frac{\|\mathcal F f\|_q}{\|f\|_p}~,
\qquad 1 < p \le 2~, \qquad \frac{1}{p} + \frac{1}{q} = 1~.
In 1961, Babenko[4] found this norm for even integer values of q. Finally, in 1975, using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all q ≥ 2 is
\|\mathcal F\|_{q,p} = \sqrt{p^{1/p}/q^{1/q}}~,
thus
\|\mathcal F f\|_q \le \left(p^{1/p}/q^{1/q}\right)^{1/2} \|f\|_p~.
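A numerical spot-check of this bound (an illustrative sketch; the grid and test functions are arbitrary and not part of the cited proofs) can again be done by quadrature:

```python
import numpy as np

# Sketch: check ||Ff||_q <= (p^(1/p)/q^(1/q))^(1/2) ||f||_p, 1 < p <= 2, 1/p + 1/q = 1,
# for the convention (Ff)(y) = \int exp(-2*pi*i*x*y) f(x) dx.
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]
kernel = np.exp(-2j * np.pi * np.outer(x, x))   # quadrature Fourier kernel

def ft(f):
    return kernel @ f * dx

def lp_norm(h, p):
    """L^p norm approximated by a Riemann sum."""
    return (np.sum(np.abs(h) ** p) * dx) ** (1.0 / p)

tests = {
    "Gaussian (extremizer)": np.exp(-np.pi * x**2),
    "rectangular pulse": np.where(np.abs(x) <= 0.5, 1.0, 0.0),
}

for p in (1.25, 1.5):
    q = p / (p - 1.0)                                  # Hoelder conjugate exponent
    bound = (p ** (1.0 / p) / q ** (1.0 / q)) ** 0.5   # Babenko-Beckner constant
    for name, f in tests.items():
        ratio = lp_norm(ft(f), q) / lp_norm(f, p)
        print(f"p={p}, q={q:g}: {name}: ||Ff||_q/||f||_p = {ratio:.4f} <= {bound:.4f}")
```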
From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived.[3] [5]
Letting g = ℱf, and setting p = 2α and q = 2β (so that 1/α + 1/β = 2 and 1/2 < α < 1 < β), the Babenko–Beckner inequality becomes
\left(\int_{\mathbb R} |g(y)|^{2\beta}\, dy\right)^{1/2\beta}
\le \frac{(2\alpha)^{1/4\alpha}}{(2\beta)^{1/4\beta}}
\left(\int_{\mathbb R} |f(x)|^{2\alpha}\, dx\right)^{1/2\alpha}~.
Squaring both sides and taking the logarithm gives
\frac{1}{\beta} \log\left(\int_{\mathbb R} |g(y)|^{2\beta}\, dy\right)
\le \frac{1}{2}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}}
+ \frac{1}{\alpha}\log\left(\int_{\mathbb R} |f(x)|^{2\alpha}\, dx\right)~.
Multiplying both sides by
\frac{\beta}{1-\beta} = -\frac{\alpha}{1-\alpha}
(the two expressions coincide because 1/α + 1/β = 2), which is negative since β > 1, reverses the sense of the inequality:
\frac{1}{1-\beta}\log\left(\int_{\mathbb R}|g(y)|^{2\beta}\, dy\right)
\ge \frac{\alpha}{2(\alpha-1)}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}}
- \frac{1}{1-\alpha}\log\left(\int_{\mathbb R}|f(x)|^{2\alpha}\, dx\right)~.
Rearranging terms finally yields an inequality in terms of the sum of the Rényi entropies:
\frac{1}{1-\alpha}\log\left(\int_{\mathbb R}|f(x)|^{2\alpha}\, dx\right)
+ \frac{1}{1-\beta}\log\left(\int_{\mathbb R}|g(y)|^{2\beta}\, dy\right)
\ge \frac{\alpha}{2(\alpha-1)}\log\frac{(2\alpha)^{1/\alpha}}{(2\beta)^{1/\beta}}~;
that is,
H_\alpha(|f|^2) + H_\beta(|g|^2)
\ge \frac{1}{2}\left(\frac{\log\alpha}{\alpha-1} + \frac{\log\beta}{\beta-1}\right) - \log 2~.
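A numerical sanity check of this Rényi-entropy bound (an illustrative sketch with arbitrary test amplitudes and grid, not taken from the cited references):

```python
import numpy as np

# Sketch: check H_a(|f|^2) + H_b(|g|^2) >= (1/2)(log a/(a-1) + log b/(b-1)) - log 2
# with 1/a + 1/b = 2, for g(y) = \int exp(-2*pi*i*x*y) f(x) dx.
x = np.linspace(-8.0, 8.0, 1601)
dx = x[1] - x[0]
kernel = np.exp(-2j * np.pi * np.outer(x, x))   # quadrature Fourier kernel

def normalize(f):
    return f / np.sqrt(np.sum(np.abs(f) ** 2) * dx)

def renyi(p_density, order):
    """Differential Renyi entropy of the given order (order != 1) on the grid."""
    return float(np.log(np.sum(p_density ** order) * dx) / (1.0 - order))

# L^2-normalized amplitudes: the Gaussian saturates the bound.
amplitudes = {
    "Gaussian": normalize(np.exp(-np.pi * x**2)),
    "rectangle": normalize(np.where(np.abs(x) <= 0.5, 1.0, 0.0)),
}

for a in (0.6, 0.75):
    b = a / (2.0 * a - 1.0)                     # enforces 1/a + 1/b = 2
    bound = 0.5 * (np.log(a) / (a - 1.0) + np.log(b) / (b - 1.0)) - np.log(2.0)
    for name, f in amplitudes.items():
        g = kernel @ f * dx
        total = renyi(f**2, a) + renyi(np.abs(g)**2, b)
        print(f"alpha={a}, beta={b:g}: {name}: sum = {total:.4f} >= bound = {bound:.4f}")
```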
Note that this inequality is symmetric with respect to α and β: One no longer need assume that α < β; only that they are positive and not both one, and that 1/α + 1/β = 2. To see this symmetry, simply exchange the rôles of i and −i in the Fourier transform.
Taking the limit of this last inequality as α, β → 1, in which each ratio log α/(α−1) tends to 1 and the right-hand side tends to 1 − log 2 = log(e/2), yields the less general Shannon entropy inequality,
H(|f|^2) + H(|g|^2) \ge \log\frac{e}{2}~, \qquad \text{where} \quad
g(y) \approx \int_{\mathbb R} e^{-2\pi i x y} f(x)\, dx~.
The constant will be different, though, for a different normalization of the Fourier transform (such as is usually used in physics, with normalizations chosen so that ħ = 1), i.e.,
H(|f|^2) + H(|g|^2) \ge \log(\pi e) \qquad \text{for} \quad
g(y) \approx \frac{1}{\sqrt{2\pi}} \int_{\mathbb R} e^{-ixy} f(x)\, dx~.
The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function
\phi on the real line,
H(\phi) \le \log\sqrt{2\pi e\, V(\phi)}~,
where V(\phi) denotes the variance of \phi; equality holds precisely when \phi is a normal density.
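For instance (a standard calculation, shown here only to make the saturation explicit), for a centered normal density of variance σ²,
\phi(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/2\sigma^2}~, \qquad
H(\phi) = -\int_{\mathbb R}\phi\log\phi\, dx
= \int_{\mathbb R}\phi(x)\left(\tfrac{1}{2}\log(2\pi\sigma^2) + \frac{x^2}{2\sigma^2}\right)dx
= \tfrac{1}{2}\log(2\pi\sigma^2) + \tfrac{1}{2}
= \log\sqrt{2\pi e\, \sigma^2}~.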
Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian, and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, showing that the latter is tighter than the former. That is (for ħ = 1), exponentiating the Hirschman inequality and using Shannon's expression above,
\frac{1}{2} \le \frac{\exp\left(H(|f|^2) + H(|g|^2)\right)}{2e\pi} \le \sqrt{V(|f|^2)\, V(|g|^2)}~.
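This chain can also be checked numerically; the sketch below (with an illustrative grid and arbitrary test amplitudes, using the ħ = 1 convention above) shows a Gaussian saturating both ends and a two-peaked amplitude satisfying both inequalities strictly:

```python
import numpy as np

# Sketch: check 1/2 <= exp(H(|f|^2)+H(|g|^2))/(2*e*pi) <= sqrt(V(|f|^2) V(|g|^2))
# for the physics convention g(y) = (2*pi)^(-1/2) \int exp(-i*x*y) f(x) dx (hbar = 1).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
kernel = np.exp(-1j * np.outer(x, x)) / np.sqrt(2.0 * np.pi)   # quadrature FT kernel

def normalize(f):
    return f / np.sqrt(np.sum(np.abs(f) ** 2) * dx)

def entropy(p):
    return float(np.sum(np.where(p > 0, -p * np.log(p), 0.0)) * dx)

def variance(p):
    mean = np.sum(x * p) * dx
    return float(np.sum((x - mean) ** 2 * p) * dx)

amplitudes = {
    "Gaussian": normalize(np.exp(-x**2 / 2.0)),
    "two peaks": normalize(np.exp(-(x - 2.0)**2 / 2.0) + np.exp(-(x + 2.0)**2 / 2.0)),
}

for name, f in amplitudes.items():
    g = kernel @ f * dx
    pf, pg = np.abs(f) ** 2, np.abs(g) ** 2
    middle = np.exp(entropy(pf) + entropy(pg)) / (2.0 * np.e * np.pi)
    right = np.sqrt(variance(pf) * variance(pg))
    print(f"{name}: 1/2 <= {middle:.4f} <= {right:.4f}")
```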
Hirschman explained that entropy—his version of entropy was the negative of Shannon's—is a "measure of the concentration of [a probability distribution] in a set of small measure." Thus a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure.
Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a contiguous interval of small measure.
To formalize this distinction, we say that two probability density functions \phi_1 and \phi_2 are equimeasurable if
\forall\delta > 0~, \quad
\mu\{x \in \mathbb R \mid \phi_1(x) \ge \delta\} = \mu\{x \in \mathbb R \mid \phi_2(x) \ge \delta\}~,
where \mu denotes the Lebesgue measure. Any two equimeasurable probability density functions then have the same Shannon entropy (indeed, the same Rényi entropy of every order), but they can have very different variances.
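As a concrete illustration (a sketch; the two densities below are arbitrary examples, not drawn from Hirschman's paper), consider a uniform density on one interval of length 1/2 and an equimeasurable density whose mass is split between two distant intervals: they share the same entropy, but their variances differ enormously.

```python
import numpy as np

# Sketch: two equimeasurable densities have equal (Shannon) entropy
# but very different variances.
x = np.linspace(-1.0, 12.0, 13001)
dx = x[1] - x[0]

# phi1: height 2 on [0, 0.5).  phi2: height 2 on [0, 0.25) and on [10, 10.25).
phi1 = np.where((x >= 0.0) & (x < 0.5), 2.0, 0.0)
phi2 = np.where(((x >= 0.0) & (x < 0.25)) | ((x >= 10.0) & (x < 10.25)), 2.0, 0.0)

def entropy(p):
    return float(np.sum(np.where(p > 0, -p * np.log(p), 0.0)) * dx)

def variance(p):
    mean = np.sum(x * p) * dx
    return float(np.sum((x - mean) ** 2 * p) * dx)

for name, p in [("phi1 (one interval)", phi1), ("phi2 (two intervals)", phi2)]:
    print(f"{name}: mass = {np.sum(p)*dx:.3f}, "
          f"entropy = {entropy(p):.3f}, variance = {variance(p):.3f}")
```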