In physics, the Hanbury Brown and Twiss (HBT) effect is any of a variety of correlation and anti-correlation effects in the intensities received by two detectors from a beam of particles. HBT effects can generally be attributed to the wave–particle duality of the beam, and the results of a given experiment depend on whether the beam is composed of fermions or bosons. Devices which use the effect are commonly called intensity interferometers and were originally used in astronomy, although they are also heavily used in the field of quantum optics.
In 1954, Robert Hanbury Brown and Richard Q. Twiss introduced the intensity interferometer concept to radio astronomy for measuring the tiny angular size of stars, suggesting that it might work with visible light as well.[1] Soon after, they successfully tested that suggestion: in 1956 they published an in-lab experimental mockup using blue light from a mercury-vapor lamp,[2] and later in the same year, they applied this technique to measuring the size of Sirius.[3] In the latter experiment, two photomultiplier tubes, separated by a few meters, were aimed at the star using crude telescopes, and a correlation was observed between the two fluctuating intensities. Just as in the radio studies, the correlation dropped away as they increased the separation (though over meters, instead of kilometers), and they used this information to determine the apparent angular size of Sirius.
This result was met with much skepticism in the physics community. The radio astronomy result was justified by Maxwell's equations, but there were concerns that the effect should break down at optical wavelengths, since the light would be quantised into a relatively small number of photons that induce discrete photoelectrons in the detectors. Many physicists worried that the correlation was inconsistent with the laws of thermodynamics. Some even claimed that the effect violated the uncertainty principle. Hanbury Brown and Twiss resolved the dispute in a neat series of articles (see References below) that demonstrated, first, that wave transmission in quantum optics had exactly the same mathematical form as Maxwell's equations, albeit with an additional noise term due to quantisation at the detector, and second, that according to Maxwell's equations, intensity interferometry should work. Others, such as Edward Mills Purcell, immediately supported the technique, pointing out that the clumping of bosons was simply a manifestation of an effect already known in statistical mechanics. After a number of experiments, the whole physics community agreed that the observed effect was real.
The original experiment used the fact that two bosons tend to arrive at two separate detectors at the same time. Morgan and Mandel used a thermal photon source to create a dim beam of photons and observed the tendency of the photons to arrive at the same time on a single detector. Both of these effects used the wave nature of light to create a correlation in arrival time; if a single photon beam is split into two beams, then the particle nature of light requires that each photon be observed at only a single detector, and so an anti-correlation was observed in 1977 by H. Jeff Kimble.[4] Finally, bosons have a tendency to clump together, giving rise to Bose–Einstein correlations, while fermions, due to the Pauli exclusion principle, tend to spread apart, leading to Fermi–Dirac (anti)correlations. Bose–Einstein correlations have been observed between pions, kaons and photons, and Fermi–Dirac (anti)correlations between protons, neutrons and electrons. For a general introduction to this field, see the textbook on Bose–Einstein correlations by Richard M. Weiner.[5] A difference in the repulsion of Bose–Einstein condensates in the "trap-and-free-fall" analogy of the HBT effect[6] affects the comparison.
Also, in the field of particle physics, Gerson Goldhaber et al. performed an experiment in 1959 in Berkeley and found an unexpected angular correlation among identical pions, discovering the $\rho^0$ resonance by means of the decay

$\rho^0 \to \pi^-\pi^+.$
The HBT effect can, in fact, be predicted solely by treating the incident electromagnetic radiation as a classical wave. Suppose we have a monochromatic wave with frequency $\omega$ incident on two detectors, with an amplitude $E(t)$ that varies on timescales slower than the wave period $2\pi/\omega$.
Since the detectors are separated, say the second detector gets the signal delayed by a time $\tau$, or equivalently by a phase $\phi = \omega\tau$; that is,

$E_1(t) = E(t)\sin(\omega t),$

$E_2(t) = E(t-\tau)\sin(\omega t-\phi).$
The intensity recorded by each detector is the square of the wave amplitude, averaged over a timescale that is long compared to the wave period $2\pi/\omega$ but short compared to the fluctuations of $E(t)$:

\begin{align}
i_1(t) &= \overline{E_1(t)^2} = \overline{E(t)^2\sin^2(\omega t)} = \tfrac{1}{2}E(t)^2,\\
i_2(t) &= \overline{E_2(t)^2} = \overline{E(t-\tau)^2\sin^2(\omega t-\phi)} = \tfrac{1}{2}E(t-\tau)^2,
\end{align}

where the overline indicates this time averaging.
The correlation function $\langle i_1 i_2\rangle(\tau)$ of these time-averaged intensities can then be computed:

\begin{align}
\langle i_1 i_2\rangle(\tau) &= \lim_{T\to\infty}\frac{1}{T}\int\limits_0^T i_1(t)\,i_2(t)\,dt\\
&= \lim_{T\to\infty}\frac{1}{T}\int\limits_0^T \tfrac{1}{4}E(t)^2\,E(t-\tau)^2\,dt.
\end{align}
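This time average can be estimated numerically. The following is a minimal sketch (NumPy; the low-pass-filtered noise envelope and all parameter values are hypothetical choices for illustration, not part of the original derivation) showing that $\langle i_1 i_2\rangle(\tau)$ is largest at $\tau = 0$ and decays once $\tau$ exceeds the coherence time of $E(t)$:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 1e-3                      # time step (arbitrary units)
t = np.arange(0, 200.0, dt)    # total record length T = 200

# Slowly varying envelope E(t): low-pass-filtered Gaussian noise (hypothetical model).
noise = rng.normal(size=t.size)
kernel = np.exp(-np.linspace(-3, 3, 2001) ** 2)
E = np.convolve(noise, kernel / kernel.sum(), mode="same")

i1 = 0.5 * E ** 2              # time-averaged intensity at detector 1, i1 = E(t)^2 / 2

def correlation(tau_steps):
    """Estimate <i1 i2>(tau) = (1/T) * integral of (1/4) E(t)^2 E(t - tau)^2 dt."""
    i2 = 0.5 * np.roll(E, tau_steps) ** 2   # i2 = E(t - tau)^2 / 2
    return np.mean(i1 * i2)

for tau_steps in (0, 100, 1000, 10000):
    print(tau_steps * dt, correlation(tau_steps))
# The correlation is largest at tau = 0 and decays as tau exceeds the coherence
# time of the envelope E(t) -- the classical (bunching) HBT signal.
```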
Most modern schemes actually measure the correlation in intensity fluctuations at the two detectors, but it is not too difficult to see that if the intensities are correlated, then the fluctuations $\Delta i = i - \langle i\rangle$, where $\langle i\rangle$ is the average intensity, ought to be correlated, since

\begin{align}
\langle\Delta i_1\,\Delta i_2\rangle &= \langle(i_1-\langle i_1\rangle)(i_2-\langle i_2\rangle)\rangle
= \langle i_1 i_2\rangle - \langle i_1\langle i_2\rangle\rangle - \langle i_2\langle i_1\rangle\rangle + \langle i_1\rangle\langle i_2\rangle\\
&= \langle i_1 i_2\rangle - \langle i_1\rangle\langle i_2\rangle.
\end{align}
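As a quick numerical check of this identity, here is a minimal sketch (NumPy, with synthetic, hypothetical intensity records) comparing the two sides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, partially correlated intensity records (hypothetical, arbitrary units).
common = rng.exponential(1.0, size=100_000)            # shared fluctuation
i1 = common + 0.2 * rng.exponential(1.0, size=100_000)
i2 = common + 0.2 * rng.exponential(1.0, size=100_000)

# <Δi1 Δi2> computed directly from the fluctuations ...
lhs = np.mean((i1 - i1.mean()) * (i2 - i2.mean()))
# ... equals <i1 i2> - <i1><i2>, as in the identity above.
rhs = np.mean(i1 * i2) - i1.mean() * i2.mean()
print(lhs, rhs)   # identical up to floating-point rounding
```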
In the particular case that $E(t)$ consists of a constant field $E_0$ plus a small sinusoidally varying component $\delta E\sin(\Omega t)$, the time-averaged intensities are

\begin{align}
i_1(t) &= \tfrac{1}{2}E_0^2 + E_0\,\delta E\sin(\Omega t) + \mathcal{O}(\delta E^2),\\
i_2(t) &= \tfrac{1}{2}E_0^2 + E_0\,\delta E\sin(\Omega t-\Phi) + \mathcal{O}(\delta E^2),
\end{align}

with $\Phi = \Omega\tau$, and $\mathcal{O}(\delta E^2)$ indicating terms proportional to $(\delta E)^2$, which are small and can be neglected.
The correlation function of these two intensities is then

\begin{align}
\langle\Delta i_1\,\Delta i_2\rangle(\tau) &= \lim_{T\to\infty}\frac{(E_0\,\delta E)^2}{T}\int\limits_0^T \sin(\Omega t)\sin(\Omega t-\Phi)\,dt\\
&= \tfrac{1}{2}(E_0\,\delta E)^2\cos(\Omega\tau),
\end{align}

which is a sinusoidal function of the time delay $\tau$ between the two detectors.
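The $\cos(\Omega\tau)$ dependence can be verified numerically. The sketch below (NumPy, with hypothetical values of $E_0$, $\delta E$ and $\Omega$ chosen only for illustration) compares the time average of the intensity fluctuations against the closed-form result:

```python
import numpy as np

E0, dE, Omega = 1.0, 0.05, 2 * np.pi * 3.0   # hypothetical field parameters
dt = 1e-4
t = np.arange(0, 500.0, dt)                   # long averaging time T

def dI(t, phase):
    """Intensity fluctuation Δi = E0*dE*sin(Ωt - phase), dropping O(dE^2) terms."""
    return E0 * dE * np.sin(Omega * t - phase)

for tau in np.linspace(0, 1.0, 11):
    measured = np.mean(dI(t, 0.0) * dI(t, Omega * tau))          # <Δi1 Δi2>(τ)
    predicted = 0.5 * (E0 * dE) ** 2 * np.cos(Omega * tau)       # (1/2)(E0 δE)^2 cos(Ωτ)
    print(f"tau={tau:.2f}  measured={measured:.3e}  predicted={predicted:.3e}")
```

The two columns agree (up to small boundary terms of order $1/T$), illustrating that the fluctuation correlation oscillates with the delay $\tau$.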
The above discussion makes it clear that the Hanbury Brown and Twiss (or photon bunching) effect can be entirely described by classical optics. The quantum description of the effect is less intuitive: if one supposes that a thermal or chaotic light source such as a star randomly emits photons, then it is not obvious how the photons "know" that they should arrive at a detector in a correlated (bunched) way. A simple argument suggested by Ugo Fano in 1961[9] captures the essence of the quantum explanation. Consider two points $a$ and $b$ in the source, which emit photons detected by two detectors $A$ and $B$. A joint detection takes place when the photon emitted by $a$ is detected by $A$ and the photon emitted by $b$ is detected by $B$, or when $a$'s photon is detected by $B$ and $b$'s by $A$. The quantum mechanical probability amplitudes for these two possibilities are $\langle A|a\rangle\langle B|b\rangle$ and $\langle B|a\rangle\langle A|b\rangle$ respectively. If the particles are indistinguishable, the two amplitudes interfere constructively, and this gives rise to an increased probability of joint detection relative to what one would have for independent particles. The sum over all possible pairs $a, b$ in the source washes out the interference unless the separation $AB$ of the detectors is sufficiently small.
Fano's explanation nicely illustrates the necessity of considering two-particle amplitudes, which are not as intuitive as the more familiar single-particle amplitudes used to interpret most interference effects. This may help to explain why some physicists in the 1950s had difficulty accepting the Hanbury Brown and Twiss result. But the quantum approach is more than just a fancy way to reproduce the classical result: if the photons are replaced by identical fermions such as electrons, the antisymmetry of wave functions under exchange of particles renders the interference destructive, leading to zero joint detection probability for small detector separations. This effect is referred to as antibunching of fermions.[10] The above treatment also explains photon antibunching:[11] if the source consists of a single atom, which can only emit one photon at a time, simultaneous detection in two closely spaced detectors is clearly impossible. Antibunching, whether of bosons or of fermions, has no classical wave analog.
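Fano's two-amplitude argument can be made concrete with a toy calculation. In the sketch below (the spherical-wave amplitudes and the source and detector positions are hypothetical illustrations, not part of Fano's original treatment), the joint detection probability is taken as $|\langle A|a\rangle\langle B|b\rangle \pm \langle B|a\rangle\langle A|b\rangle|^2$, with the plus sign for bosons and the minus sign for fermions:

```python
import numpy as np

k = 2 * np.pi / 0.5e-6        # optical wavenumber for a 500 nm photon (illustrative)

def amp(source, detector):
    """Single-particle amplitude <detector|source>: a spherical-wave phase factor."""
    r = np.linalg.norm(np.asarray(detector) - np.asarray(source))
    return np.exp(1j * k * r) / r

a, b = (0.0, 0.0), (1.0, 0.0)              # two emission points (meters, hypothetical)
A = (0.0, 1000.0)                          # detector A, far from the source

for sep in (0.0, 1e-4, 1e-3, 1e-2):
    B = (sep, 1000.0)                      # detector B at separation `sep` from A
    direct   = amp(a, A) * amp(b, B)       # a -> A and b -> B
    exchange = amp(a, B) * amp(b, A)       # a -> B and b -> A
    bosons   = abs(direct + exchange) ** 2  # symmetric sum: constructive, bunching
    fermions = abs(direct - exchange) ** 2  # antisymmetric sum: destructive, antibunching
    print(f"sep={sep:.0e} m  bosons={bosons:.3e}  fermions={fermions:.3e}")
# At zero separation the fermion probability vanishes, while the boson probability
# is twice the "distinguishable particle" value |direct|^2 + |exchange|^2.
```

As the detector separation grows, the phase difference between the direct and exchange amplitudes varies rapidly, and summing over many source points would wash out the interference, as in Fano's argument.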
From the point of view of the field of quantum optics, the HBT effect was important in leading physicists (among them Roy J. Glauber and Leonard Mandel) to apply quantum electrodynamics to new situations, many of which had never been studied experimentally, and in which classical and quantum predictions differ.