In quantum mechanics, quantum correlation is the expected value of the product of the outcomes measured on the two sides of an experiment. In other words, it is the expected change in physical characteristics as one quantum system passes through an interaction site. In John Bell's 1964 paper that inspired the Bell test, it was assumed that the outcomes A and B could each take only one of two values, −1 or +1. It followed that the product, too, could only be −1 or +1, so that the average value of the product would be
$$\frac{N_{++} - N_{+-} - N_{-+} + N_{--}}{N_{\text{total}}}$$
where, for example, $N_{++}$ is the number of simultaneous instances ("coincidences") of the outcome +1 on both sides of the experiment.
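As a concrete illustration, here is a minimal Python sketch of this ideal estimator; the outcome pairs in the usage line are invented, and the function name is arbitrary. The correlation is simply the average of the products of the two ±1 outcomes, which is equivalent to the count-based formula above.

```python
def correlation(pairs):
    """Average of A*B over all recorded outcome pairs (each outcome is +1 or -1)."""
    return sum(a * b for a, b in pairs) / len(pairs)

# Example: perfectly anti-correlated outcomes give a correlation of -1.
print(correlation([(+1, -1), (-1, +1), (+1, -1)]))  # -1.0
```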
However, in actual experiments, detectors are not perfect and produce many null outcomes. The correlation can still be estimated from the coincidence counts, since null outcomes contribute nothing to the sum, but in practice, instead of dividing by $N_{\text{total}}$, it is customary to divide by
$$N_{++} + N_{+-} + N_{-+} + N_{--},$$
the total number of observed coincidences. The legitimacy of this method relies on the fair sampling assumption: that the observed coincidences constitute a representative sample of the emitted pairs.
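A corresponding sketch of this count-based estimator, assuming the four coincidence counts have already been tallied (the counts in the usage line are made up); null outcomes are simply absent from these counts.

```python
def estimated_correlation(n_pp, n_pm, n_mp, n_mm):
    """(N++ - N+- - N-+ + N--) divided by the number of observed coincidences."""
    coincidences = n_pp + n_pm + n_mp + n_mm
    return (n_pp - n_pm - n_mp + n_mm) / coincidences

# Example with illustrative counts: strongly correlated outcomes.
print(estimated_correlation(450, 50, 50, 450))  # 0.8
```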
Following local realist assumptions as in Bell's paper, the estimated quantum correlation converges after a sufficient number of trials to
$$QC(a,b) = \int d\lambda\, \rho(\lambda)\, A(a,\lambda)\, B(b,\lambda),$$
where $a$ and $b$ are the detector settings and $\lambda$ is the hidden variable, drawn from a distribution $\rho(\lambda)$.
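For illustration, a Monte Carlo sketch of this integral under an assumed toy hidden-variable model ($\lambda$ uniform on $[0, 2\pi)$, deterministic sign-of-cosine outcome functions). This particular model is chosen only for concreteness; Bell's argument applies to any $\rho$, $A$ and $B$ of this factorised form.

```python
import math
import random

def A(a, lam):
    """Outcome on side A: deterministic +/-1 given setting a and hidden variable lam."""
    return 1 if math.cos(lam - a) >= 0 else -1

def B(b, lam):
    """Outcome on side B: deterministic +/-1 given setting b and hidden variable lam."""
    return -1 if math.cos(lam - b) >= 0 else 1

def qc(a, b, trials=100_000):
    """Monte Carlo estimate of QC(a, b) under the toy model above."""
    total = 0
    for _ in range(trials):
        lam = random.uniform(0.0, 2.0 * math.pi)  # lambda drawn from a uniform rho
        total += A(a, lam) * B(b, lam)
    return total / trials

# Aligned settings give perfect anti-correlation, as for the singlet state.
print(qc(0.0, 0.0))  # -1.0
```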
The quantum correlation is the key statistic in the CHSH inequality and some of the other Bell inequalities, tests that open the way for experimental discrimination between quantum mechanics and local realism, i.e. local hidden-variable theories.
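For reference, the CHSH statistic combines four such correlations, taken at two settings per side ($a$, $a'$ and $b$, $b'$):

$$S = QC(a,b) - QC(a,b') + QC(a',b) + QC(a',b').$$

Local hidden-variable theories require $|S| \le 2$, whereas quantum mechanics allows values up to $2\sqrt{2}$.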
Quantum correlations give rise to various phenomena, including interference of particles separated in time.[1][2]