A Bell test, also known as Bell inequality test or Bell experiment, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to Albert Einstein's concept of local realism. Named for John Stewart Bell, the experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. The test empirically evaluates the implications of Bell's theorem. To date, all Bell tests have found that the hypothesis of local hidden variables is inconsistent with the way that physical systems behave.[1]
Many types of Bell tests have been performed in physics laboratories, often with the goal of ameliorating problems of experimental design or set-up that could in principle affect the validity of the findings of earlier Bell tests. This is known as "closing loopholes in Bell tests".[1]
Bell inequality violations are also used in some quantum cryptography protocols, whereby a spy's presence is detected when Bell's inequalities cease to be violated.
See main article: Bell's theorem. The Bell test has its origins in the debate between Einstein and other pioneers of quantum physics, principally Niels Bohr. One feature of the theory of quantum mechanics under debate was the meaning of Heisenberg's uncertainty principle. This principle states that if some information is known about a given particle, there is some other information about it that is impossible to know. An example of this is found in observations of the position and the momentum of a given particle. According to the uncertainty principle, a particle's momentum and its position cannot simultaneously be determined with arbitrarily high precision.[2]
In 1935, Einstein, Boris Podolsky, and Nathan Rosen published a claim that quantum mechanics predicts that more information about a pair of entangled particles could be observed than Heisenberg's principle allowed, which would only be possible if information were travelling instantly between the two particles. This produces a paradox which came to be known as the "EPR paradox" after the three authors. It arises if any effect felt in one location is not the result of a cause that occurred in its past light cone, relative to its location. This action at a distance seems to violate causality, by allowing information between the two locations to travel faster than the speed of light. However, it is a common misconception to think that any information can be shared between two observers faster than the speed of light using entangled particles; the hypothetical information transfer here is between the particles. See no-communication theorem for further explanation.
Based on this, the authors concluded that the quantum wave function does not provide a complete description of reality. They suggested that there must be some local hidden variables at work in order to account for the behavior of entangled particles. In a theory of hidden variables, as Einstein envisaged it, the randomness and indeterminacy seen in the behavior of quantum particles would only be apparent. For example, if one knew the details of all the hidden variables associated with a particle, then one could predict both its position and momentum. The uncertainty that had been quantified by Heisenberg's principle would simply be an artifact of not having complete information about the hidden variables. Furthermore, Einstein argued that the hidden variables should obey the condition of locality: Whatever the hidden variables actually are, the behavior of the hidden variables for one particle should not be able to instantly affect the behavior of those for another particle far away. This idea, called the principle of locality, is rooted in intuition from classical physics that physical interactions do not propagate instantly across space. These ideas were the subject of ongoing debate between their proponents. In particular, Einstein himself did not approve of the way Podolsky had stated the problem in the famous EPR paper.[3] [4]
In 1964, John Stewart Bell proposed his famous theorem, which states that no physical theory of local hidden variables can ever reproduce all the predictions of quantum mechanics. Implicit in the theorem is the proposition that the determinism of classical physics is fundamentally incapable of describing quantum mechanics. Bell expanded on the theorem to provide what would become the conceptual foundation of the Bell test experiments.
A typical experiment involves the observation of particles, often photons, in an apparatus designed to produce entangled pairs and allow for the measurement of some characteristic of each, such as their spin. The results of the experiment could then be compared to the predictions of local realism and those of quantum mechanics.
In theory, the results could be "coincidentally" consistent with both. To address this problem, Bell proposed a mathematical description of local realism that placed a statistical limit on the likelihood of that eventuality. If the results of an experiment violate Bell's inequality, local hidden variables can be ruled out as their cause. Later researchers built on Bell's work by proposing new inequalities that serve the same purpose and refine the basic idea in one way or another.[5] [6] Consequently, the term "Bell inequality" can mean any one of a number of inequalities satisfied by local hidden-variables theories; in practice, many present-day experiments employ the CHSH inequality. All these inequalities, like the original devised by Bell, express the idea that assuming local realism places restrictions on the statistical results of experiments on sets of particles that have taken part in an interaction and then separated.
To date, all Bell tests have supported the theory of quantum physics, and not the hypothesis of local hidden variables. These efforts to experimentally validate violations of the Bell inequalities resulted in John Clauser, Alain Aspect, and Anton Zeilinger being awarded the 2022 Nobel Prize in Physics.[7]
In practice most actual experiments have used light, assumed to be emitted in the form of particle-like photons (produced by atomic cascade or spontaneous parametric down conversion), rather than the atoms that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. Such experiments fall into two classes, depending on whether the analysers used have one or two output channels.
See main article: CHSH inequality.
The diagram shows a typical optical experiment of the two-channel kind for which Alain Aspect set a precedent in 1982.[8] Coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+-', '-+' or '--' and corresponding counts accumulated.
Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S (equation (2) shown below). The settings a, a′, b and b′ are in practice generally chosen to be 0°, 45°, 22.5° and 67.5° respectively - the "Bell test angles" - these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality.
For each selected value of a and b, the numbers of coincidences in each category (N++, N−−, N+− and N−+) are recorded. The experimental estimate for E(a, b) is then calculated as:

E(a, b) = (N++ + N−− − N+− − N−+) / (N++ + N−− + N+− + N−+)      (1)
Once all four E’s have been estimated, an experimental estimate of the test statistic

S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)      (2)

can be found. If S is numerically greater than 2 it has infringed the CHSH inequality, and the experiment is declared to have supported the QM prediction and ruled out all local hidden-variable theories.
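As an illustration of this bookkeeping, here is a minimal Python sketch of how E(a, b) and S could be estimated from the four categories of coincidence counts. The counts used below are hypothetical numbers chosen to roughly match the quantum prediction at the Bell test angles, not data from any actual experiment.

```python
def correlation(n_pp, n_pm, n_mp, n_mm):
    """Estimate E(a, b) from the coincidence counts N++, N+-, N-+ and N--."""
    return (n_pp + n_mm - n_pm - n_mp) / (n_pp + n_pm + n_mp + n_mm)


def chsh_statistic(counts):
    """Estimate S = E(a, b) - E(a, b') + E(a', b) + E(a', b').

    counts maps each setting pair, e.g. ('a', 'b'), to a tuple of
    coincidence counts (N++, N+-, N-+, N--) from one subexperiment."""
    E = {pair: correlation(*counts[pair]) for pair in counts}
    return E[('a', 'b')] - E[('a', "b'")] + E[("a'", 'b')] + E[("a'", "b'")]


# Hypothetical coincidence counts for the four subexperiments.
counts = {
    ('a', 'b'):   (4268, 732, 732, 4268),
    ('a', "b'"):  (732, 4268, 4268, 732),
    ("a'", 'b'):  (4268, 732, 732, 4268),
    ("a'", "b'"): (4268, 732, 732, 4268),
}
print(chsh_statistic(counts))  # about 2.83, i.e. greater than the local bound of 2
```

A value above 2 (up to the quantum maximum of 2√2 ≈ 2.83) signals a CHSH violation, subject to the fair-sampling caveat discussed next.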
A strong assumption has had to be made, however, to justify use of expression (2), namely, that the sample of detected pairs is representative of the pairs emitted by the source. Denial of this assumption is called the fair sampling loophole.
Prior to 1982 all actual Bell tests used "single-channel" polarisers and variations on an inequality designed for this setup. The latter is described in Clauser, Horne, Shimony and Holt's much-cited 1969 article as being the one suitable for practical use.[5] As with the CHSH test, there are four subexperiments in which each polariser takes one of two possible settings, but in addition there are other subexperiments in which one or other polariser or both are absent. Counts are taken as before and used to estimate the test statistic.
S = [N(a, b) − N(a, b′) + N(a′, b) + N(a′, b′) − N(a′, ∞) − N(∞, b)] / N(∞, ∞)

where the symbol ∞ indicates absence of a polariser.
If S exceeds 0 then the experiment is declared to have infringed the CH inequality and hence to have refuted local hidden-variable theories. This inequality is known as the CH inequality rather than CHSH because it was also derived, more rigorously and under weaker assumptions, in a 1974 article by Clauser and Horne.[9]
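For comparison with the CHSH sketch above, here is a minimal Python sketch (again with hypothetical inputs) of how this single-channel statistic could be computed from the recorded coincidence counts, using the string 'inf' to stand for an absent polariser:

```python
def ch_statistic(N):
    """Estimate the CH test statistic from coincidence counts.

    N maps setting pairs to coincidence counts; 'inf' denotes an
    absent polariser (the symbol used in the expression above).
    Local hidden-variable theories require the result to be <= 0."""
    numerator = (N[('a', 'b')] - N[('a', "b'")]
                 + N[("a'", 'b')] + N[("a'", "b'")]
                 - N[("a'", 'inf')] - N[('inf', 'b')])
    return numerator / N[('inf', 'inf')]
```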
In addition to the theoretical assumptions, there are practical ones. There may, for example, be a number of "accidental coincidences" in addition to those of interest. It is assumed that no bias is introduced by subtracting their estimated number before calculating S, though some do not consider this assumption obvious. There may also be synchronisation problems, i.e. ambiguity in recognising pairs because in practice they will not be detected at exactly the same time.
Nevertheless, despite all the deficiencies of the actual experiments, one striking fact emerges: the results are, to a very good approximation, what quantum mechanics predicts. If imperfect experiments give us such excellent overlap with quantum predictions, most working quantum physicists would agree with John Bell in expecting that, when a perfect Bell test is done, the Bell inequalities will still be violated. This attitude has led to the emergence of a new sub-field of physics known as quantum information theory. One of the main achievements of this new branch of physics is showing that violation of Bell's inequalities leads to the possibility of secure information transfer, using so-called quantum cryptography (involving entangled states of pairs of particles).
Over the past half century, a great number of Bell test experiments have been conducted. The experiments are commonly interpreted to rule out local hidden-variable theories, and in 2015 an experiment was performed that is not subject to either the locality loophole or the detection loophole (Hensen et al.[10]). An experiment free of the locality loophole is one where for each separate measurement and in each wing of the experiment, a new setting is chosen and the measurement completed before signals could communicate the settings from one wing of the experiment to the other. An experiment free of the detection loophole is one where close to 100% of the successful measurement outcomes in one wing of the experiment are paired with a successful measurement in the other wing. This percentage is called the efficiency of the experiment. Advancements in technology have led to a great variety of methods to test Bell-type inequalities.
Some of the best known and recent experiments include:
Leonard Ralph Kasday, Jack R. Ullman and Chien-Shiung Wu carried out the first experimental Bell test, using photon pairs produced by positronium decay and analyzed by Compton scattering. The experiment observed photon polarization correlations consistent with quantum predictions and inconsistent with local realistic models that obey the known polarization dependence of Compton scattering. Due to the low polarization selectivity of Compton scattering, the results did not violate a Bell inequality.[11] [12]
Stuart J. Freedman and John Clauser carried out the first Bell test that observed a Bell inequality violation, using Freedman's inequality, a variant on the CH74 inequality.[13]
See main article: Aspect's experiment. Alain Aspect and his team at Orsay, Paris, conducted three Bell tests using calcium cascade sources. The first and last used the CH74 inequality. The second was the first application of the CHSH inequality. The third (and most famous) was arranged such that the choice between the two settings on each side was made during the flight of the photons (as originally suggested by John Bell).[14]
The Geneva 1998 Bell test experiments showed that distance did not destroy the "entanglement". Light was sent in fibre optic cables over distances of several kilometers before it was analysed. As with almost all Bell tests since about 1985, a "parametric down-conversion" (PDC) source was used.[15] [16]
In 1998 Gregor Weihs and a team at Innsbruck, led by Anton Zeilinger, conducted an experiment that closed the "locality" loophole, improving on Aspect's experiment of 1982. The choice of detector setting was made using a quantum process to ensure that it was random. This test violated the CHSH inequality by over 30 standard deviations, the coincidence curves agreeing with those predicted by quantum theory.
This was the first of a new series of Bell-type experiments on more than two particles; it used the so-called GHZ state of three particles.[17]
The detection loophole was first closed in an experiment with two entangled trapped ions, carried out in the ion storage group of David Wineland at the National Institute of Standards and Technology in Boulder. The experiment had detection efficiencies well over 90%.[18]
Using semileptonic B0 decays of Υ(4S) at the Belle experiment, a clear violation of a Bell inequality in particle-antiparticle correlations was observed.[19]
A specific class of non-local theories suggested by Anthony Leggett is ruled out. Based on this, the authors conclude that any possible non-local hidden-variable theory consistent with quantum mechanics must be highly counterintuitive.[20] [21]
This experiment filled a loophole by providing an 18 km separation between detectors, which is sufficient to allow the completion of the quantum state measurements before any information could have traveled between the two detectors.[22] [23]
This was the first experiment testing Bell inequalities with solid-state qubits (superconducting Josephson phase qubits were used). This experiment surmounted the detection loophole using a pair of superconducting qubits in an entangled state. However, the experiment still suffered from the locality loophole because the qubits were only separated by a few millimeters.[24]
The detection loophole for photons was closed for the first time by Marissa Giustina, using highly efficient detectors. This made photons the first system for which all of the main loopholes had been closed, albeit in different experiments.[25]
The Christensen et al. (2013)[26] experiment is similar to that of Giustina et al. Giustina et al. did just four long runs with constant measurement settings (one for each of the four pairs of settings). Because the experiment was not pulsed, the formation of "pairs" from the two records of measurement results (Alice's and Bob's) had to be done after the experiment, which exposes it to the coincidence loophole. The experimental data were therefore reanalysed in a way that removed the coincidence loophole, and the new analysis still showed a violation of the appropriate CHSH or CH inequality.[27] The Christensen et al. experiment, on the other hand, was pulsed, and the measurement settings were reset frequently in a random way, though only once every 1000 particle pairs rather than every time.
In 2015 the first three significant loophole-free Bell tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them "loophole-free" in the sense that the remaining conceivable loopholes, such as superdeterminism, require truly exotic hypotheses that might never be closed experimentally.
The first published experiment by Hensen et al. used a photonic link to entangle the electron spins of two nitrogen-vacancy defect centres in diamonds 1.3 kilometers apart and measured a violation of the CHSH inequality (S = 2.42 ± 0.20). Thereby the local-realist hypothesis could be rejected with a p-value of 0.039.
Both simultaneously published experiments by Giustina et al.[28] and Shalm et al.[29] used entangled photons to obtain a Bell inequality violation with high statistical significance (p-value ≪10−6). Notably, the experiment by Shalm et al. also combined three types of (quasi-)random number generators to determine the measurement basis choices. One of these methods, detailed in an ancillary file, is the “'Cultural' pseudorandom source”, which involved using bit strings from popular media such as the Back to the Future films, Monty Python and the Holy Grail, and the television shows Saved by the Bell and Dr. Who.[30]
Using a witness for Bell correlations derived from a multi-partite Bell inequality, physicists at the University of Basel were able to conclude, for the first time, the presence of Bell correlations in a many-body system, composed of about 480 atoms in a Bose-Einstein condensate. Even though loopholes were not closed, this experiment showed the possibility of observing Bell correlations in the macroscopic regime.[31]
Physicists led by David Kaiser of the Massachusetts Institute of Technology and Anton Zeilinger of the Institute for Quantum Optics and Quantum Information and University of Vienna performed an experiment that "produced results consistent with nonlocality" by measuring starlight that had taken 600 years to travel to Earth.[32] The experiment “represents the first experiment to dramatically limit the space-time region in which hidden variables could be relevant.”[33] [34] [35]
Physicists at the Ludwig Maximilian University of Munich and the Max Planck Institute of Quantum Optics published results from an experiment in which they observed a Bell inequality violation using entangled spin states of two atoms with a separation distance of 398 meters in which the detection loophole, the locality loophole, and the memory loophole were closed. The violation of S = 2.221 ± 0.033 rejected local realism with a significance value of P = 1.02×10−16 when taking into account 7 months of data and 55000 events or an upper bound of P = 2.57×10−9 from a single run with 10000 events.[36]
An international collaborative scientific effort used arbitrary human choice to define measurement settings instead of using random number generators. Assuming that human free will exists, this would close the “freedom-of-choice loophole”. Around 100,000 participants were recruited in order to provide sufficient input for the experiment to be statistically significant.[37]
In 2018, an international team used light from two quasars (one whose light was generated approximately eight billion years ago and the other approximately twelve billion years ago) as the basis for their measurement settings.[38] This experiment pushed the timeframe for when the settings could have been mutually determined to at least 7.8 billion years in the past, a substantial fraction of the superdeterministic limit (that being the creation of the universe 13.8 billion years ago).[39]
The 2019 PBS Nova episode Einstein's Quantum Riddle documents this "cosmic Bell test" measurement, with footage of the scientific team on-site at the high-altitude Teide Observatory located in the Canary Islands.[40]
In 2023, an international team led by the group of Andreas Wallraff at ETH Zurich demonstrated a loophole-free violation of the CHSH inequality with superconducting circuits deterministically entangled via a cryogenic link spanning a distance of 30 meters.[41]
Though the series of increasingly sophisticated Bell test experiments has convinced the physics community that local hidden-variable theories are indefensible, they can never be excluded entirely.[42] For example, the hypothesis of superdeterminism in which all experiments and outcomes (and everything else) are predetermined cannot be excluded (because it is unfalsifiable).
Up to 2015, the outcome of all experiments that violate a Bell inequality could still theoretically be explained by exploiting the detection loophole and/or the locality loophole. The locality (or communication) loophole means that since in actual practice the two detections are separated by a time-like interval, the first detection may influence the second by some kind of signal. To avoid this loophole, the experimenter has to ensure that particles travel far apart before being measured, and that the measurement process is rapid. More serious is the detection (or unfair sampling) loophole, because particles are not always detected in both wings of the experiment. It can be imagined that the complete set of particles would behave randomly, but instruments only detect a subsample showing quantum correlations, by letting detection be dependent on a combination of local hidden variables and detector setting.
Experimenters had repeatedly voiced that loophole-free tests could be expected in the near future.[43] [44] In 2015, a loophole-free Bell violation was reported using entangled diamond spins over a distance of 1.3 km (0.8 miles), and was corroborated by two experiments using entangled photon pairs.
The remaining possible theories that obey local realism can be further restricted by testing different spatial configurations, methods to determine the measurement settings, and recording devices. It has been suggested that using humans to generate the measurement settings and observe the outcomes provides a further test.[45] David Kaiser of MIT told the New York Times in 2015 that a potential weakness of the "loophole-free" experiments is that the systems used to add randomness to the measurement may be predetermined in a method that was not detected in experiments.[46]
A common problem in optical Bell tests is that only a small fraction of the emitted photons are detected. It is then possible that the correlations of the detected photons are unrepresentative: although they show a violation of a Bell inequality, if all photons were detected the Bell inequality would actually be respected. This was first noted by Philip M. Pearle in 1970,[47] who devised a local hidden variable model that faked a Bell violation by letting the photon be detected only if the measurement setting was favourable. The assumption that this does not happen, i.e. that the small sample is actually representative of the whole, is called the fair sampling assumption.
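To see how a setting-dependent detection rule can fake quantum correlations in the post-selected sample, here is a minimal Monte Carlo sketch in Python. It implements not Pearle's original construction but a simpler local model in the same spirit, due to Gisin and Gisin, in which Alice always detects while Bob's detection probability depends on his setting and the hidden variable; the settings and sample size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)


def unit(theta_deg):
    """Unit vector at the given angle (degrees) in the x-z plane."""
    t = np.radians(theta_deg)
    return np.array([np.sin(t), 0.0, np.cos(t)])


def conditional_correlation(n_pairs, a, b):
    """Correlation E(a, b) estimated from coincidences only.

    Each pair carries a hidden unit vector lam.  Alice always detects and
    outputs sign(a . lam); Bob detects only with probability |b . lam| and
    then outputs -sign(b . lam).  Everything is local, yet the correlation
    conditioned on detection reproduces the singlet value -a.b."""
    lam = rng.normal(size=(n_pairs, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    alice = np.sign(lam @ a)
    bob = -np.sign(lam @ b)
    detected = rng.random(n_pairs) < np.abs(lam @ b)
    return np.mean(alice[detected] * bob[detected])


# Standard CHSH settings for spin measurements (Bloch-sphere angles).
a, a2, b, b2 = unit(0), unit(90), unit(45), unit(135)
n = 200_000
S = (conditional_correlation(n, a, b) - conditional_correlation(n, a, b2)
     + conditional_correlation(n, a2, b) + conditional_correlation(n, a2, b2))
print(f"post-selected |S| = {abs(S):.2f}  (local bound 2, quantum maximum ≈ 2.83)")
```

In this toy model Bob's detection efficiency is only 50%, so the fair sampling assumption fails badly: the post-selected coincidences violate the CHSH inequality maximally even though the underlying model is local. This is why the detection efficiency bounds discussed next matter.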
To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency η, defined as the probability that a photodetector detects a photon that arrives at it. For a maximally entangled state measured with the CHSH inequality, a loophole-free violation requires η > 2√2 − 2 ≈ 0.83; using partially entangled states lowers the requirement to η > 2/3 ≈ 0.67, and other Bell inequalities tolerate efficiencies as low as η > (√5 − 1)/2 ≈ 0.62.
Historically, only experiments with non-optical systems have been able to reach high enough efficiencies to close this loophole, such as trapped ions,[52] superconducting qubits,[53] and nitrogen-vacancy centers.[54] These experiments were not able to close the locality loophole, which is easy to do with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems.
One of the assumptions of Bell's theorem is that of locality, namely that the choice of setting at one measurement site does not influence the result at the other. The motivation for this assumption is the theory of relativity, which prohibits communication faster than light. For this motivation to apply to an experiment, it needs to have space-like separation between its measurement events. That is, the time that passes between the choice of measurement setting and the production of an outcome must be shorter than the time it takes for a light signal to travel between the measurement sites.[55]
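The arithmetic behind this condition is simple; the following sketch (with hypothetical numbers, not taken from any particular experiment) checks whether a given station separation and setting-to-outcome time satisfy space-like separation:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second


def spacelike_separated(separation_m, choice_to_outcome_s):
    """True if the interval from setting choice to recorded outcome is
    shorter than the light travel time between the two stations."""
    return choice_to_outcome_s < separation_m / SPEED_OF_LIGHT


# Hypothetical example: stations 1.3 km apart, measurement lasting 3.7 microseconds.
print(spacelike_separated(1300.0, 3.7e-6))  # True: 3.7 us < about 4.3 us light travel time
```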
The first experiment that strove to respect this condition was Aspect's 1982 experiment.[56] In it the settings were changed fast enough, but deterministically. The first experiment to change the settings randomly, with the choices made by a quantum random number generator, was Weihs et al.'s 1998 experiment.[57] Scheidl et al. improved on this further in 2010 by conducting an experiment between locations separated by a distance of 144 km.[58]
In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This generates a new possibility for a local hidden variables theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount depending on some relationship between hidden variables carried by the particles and the detector settings encountered at the measurement station.[59]
The coincidence loophole can be ruled out entirely simply by working with a pre-fixed lattice of detection windows which are short enough that most pairs of events occurring in the same window originate from the same emission and long enough that a true pair is not separated by a window boundary.[59]
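A minimal sketch of this window-based pairing, in Python with hypothetical timestamp arrays; events are assigned to a fixed lattice of windows, and only windows containing exactly one event on each side are kept as pairs:

```python
def pair_by_fixed_windows(t_alice, t_bob, window):
    """Pair detection events using a pre-fixed lattice of time windows.

    t_alice and t_bob are lists of detection timestamps (same units as
    window).  Each event is assigned to window number floor(t / window);
    only windows with exactly one event on each side yield a pair."""
    windows_a, windows_b = {}, {}
    for i, t in enumerate(t_alice):
        windows_a.setdefault(int(t // window), []).append(i)
    for j, t in enumerate(t_bob):
        windows_b.setdefault(int(t // window), []).append(j)
    pairs = []
    for w, ia in windows_a.items():
        ib = windows_b.get(w, [])
        if len(ia) == 1 and len(ib) == 1:
            pairs.append((ia[0], ib[0]))
    return pairs
```

Because the window boundaries are fixed in advance rather than defined relative to the detection times, the pairing cannot be steered by hidden-variable-dependent delays.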
In most experiments, measurements are repeatedly made at the same two locations. A local hidden variable theory could exploit the memory of past measurement settings and outcomes in order to increase the violation of a Bell inequality. Moreover, physical parameters might be varying in time. It has been shown that, provided each new pair of measurements is done with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment.[60] [61] [62]
See main article: Superdeterminism. A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that such is necessary to do science in the first place. A (hypothetical) theory where the choice of measurement is determined by the system being measured is known as superdeterministic.[63]
The many-worlds interpretation, also known as the Hugh Everett interpretation, is deterministic and has local dynamics, consisting of the unitary part of quantum mechanics without collapse. Bell's theorem does not apply because of an implicit assumption that measurements have a single outcome.[64]