Boson sampling is a restricted, non-universal model of quantum computation introduced by Scott Aaronson and Alex Arkhipov,[1] building on the earlier work of Lidror Troyansky and Naftali Tishby, which explored the possible use of boson scattering to evaluate expectation values of permanents of matrices.[2] The model consists of sampling from the probability distribution of identical bosons scattered by a linear interferometer. Although the problem is well defined for any bosonic particles, its photonic version is currently considered the most promising platform for a scalable implementation of a boson sampling device, which makes it a non-universal approach to linear optical quantum computing. Moreover, while not universal, the boson sampling scheme is strongly believed to perform computational tasks that are hard for classical computers, while using far fewer physical resources than a full linear-optical quantum computing setup. This advantage makes it an ideal candidate for demonstrating the power of quantum computation in the near term.
Consider a multimode linear-optical circuit of N modes that is injected with M indistinguishable single photons (N > M). Then, the photonic implementation of the boson sampling task consists of generating a sample from the probability distribution of single-photon measurements at the output of the circuit. Specifically, this requires reliable sources of single photons (currently the most widely used ones are parametric down-conversion crystals), as well as a linear interferometer. The latter can be fabricated, e.g., with fused-fiber beam splitters,[3] through silica-on-silicon[4] or laser-written[5][6][7] integrated interferometers, or electrically and optically interfaced optical chips.[8] Finally, the scheme also requires high-efficiency single-photon-counting detectors, such as those based on current-biased superconducting nanowires, which perform the measurements at the output of the circuit. Given these three ingredients, the boson sampling setup does not require any ancillas, adaptive measurements or entangling operations, unlike, e.g., the universal optical scheme by Knill, Laflamme and Milburn (the KLM scheme). This makes it a non-universal model of quantum computation, and reduces the amount of physical resources needed for its practical realization.
Specifically, suppose the linear interferometer is described by an N×N unitary matrix U, which performs a linear transformation of the creation (annihilation) operators a_i^\dagger (a_i) of the circuit's input modes:

b_j^\dagger = \sum_{i=1}^{N} U_{ji} a_i^\dagger, \qquad b_j = \sum_{i=1}^{N} U_{ji}^{*} a_i.
Here i (j) labels the input (output) modes, and b_j^\dagger (b_j) denotes the creation (annihilation) operators of the output modes. The interferometer described by the unitary U thereby induces a unitary transformation \varphi_M(U) on input states of M photons: \varphi_M is a homomorphism from the N×N unitary matrices to the unitary transformations acting on the \tbinom{M+N-1}{M}-dimensional Hilbert space spanned by the M-photon states.
Suppose the interferometer is injected with an input state of single photons |\psi_{\rm in}\rangle = |s_1, s_2, \ldots, s_N\rangle with \sum_{k=1}^{N} s_k = M (s_k being the number of photons injected into the kth mode). Then the state at the output of the circuit can be written down as

|\psi_{\rm out}\rangle = \varphi_M(U)\,|s_1, s_2, \ldots, s_N\rangle.

A simple way to understand the homomorphism between U and \varphi_M(U) is the following.
We define the isomorphism for the basis states: |s_1, s_2, \ldots, s_N\rangle \leftrightarrow P_{|s_1,s_2,\ldots,s_N\rangle}(x) \equiv x_1^{s_1} x_2^{s_2} \cdots x_N^{s_N}, with x = (x_1, x_2, \ldots, x_N), and get the resulting relation

P_{\varphi_M(U)|s_1,s_2,\ldots,s_N\rangle}(x) = P_{|s_1,s_2,\ldots,s_N\rangle}(U^{T} x).
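As an illustration (a worked example, not part of the original formulation), consider the simplest nontrivial case: a balanced beam splitter, U = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, injected with one photon in each mode (M = 2, N = 2). The input |1,1\rangle corresponds to the polynomial x_1 x_2, and

P_{\varphi_2(U)|1,1\rangle}(x) = \frac{x_1 + x_2}{\sqrt{2}} \cdot \frac{x_1 - x_2}{\sqrt{2}} = \frac{x_1^2 - x_2^2}{2},

so that \varphi_2(U)|1,1\rangle = (|2,0\rangle - |0,2\rangle)/\sqrt{2}: the coincidence outcome |1,1\rangle is completely suppressed, which is the Hong–Ou–Mandel effect.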
Consequently, the probability p(t_1, t_2, \ldots, t_N) of detecting t_k photons at the kth output mode is given as

p(t_1, t_2, \ldots, t_N) = |\langle t_1, t_2, \ldots, t_N | \psi_{\rm out} \rangle|^{2} = \frac{|\mathrm{Perm}\, U_{S,T}|^{2}}{s_1! \cdots s_N! \, t_1! \cdots t_N!}.

In the above expression \mathrm{Perm}\, U_{S,T} stands for the permanent of the matrix U_{S,T}, which is obtained from the unitary U by repeating s_i times its ith column and t_j times its jth row. Usually, in the context of the boson sampling problem the input state is taken of a standard form, denoted as |1_M\rangle, for which each of the first M modes of the interferometer is injected with a single photon. In this case the above expression reads

p(t_1, t_2, \ldots, t_N) = |\langle t_1, t_2, \ldots, t_N | \varphi_M(U) | 1_M \rangle|^{2} = \frac{|\mathrm{Perm}\, U_T|^{2}}{t_1! \cdots t_N!},

where the matrix U_T is obtained from U by keeping its first M columns and repeating t_j times its jth row. The task of boson sampling is then to sample, either exactly or approximately, from the above output distribution, given the unitary U describing the linear-optical circuit as input.
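The formulas above can be checked numerically for small systems. The following sketch (assuming only numpy; the helper names are illustrative, and the permanent is evaluated by the naive sum over permutations, which is adequate at this scale) draws a Haar-random unitary and computes p(t_1, ..., t_N) for all collision-free outputs of three photons in four modes:

    import itertools
    from math import factorial
    import numpy as np

    def permanent(A):
        # naive permanent: sum over permutations, fine for small matrices
        n = A.shape[0]
        return sum(np.prod([A[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    def haar_unitary(N, rng):
        # QR decomposition of a complex Gaussian matrix, with the phases of the
        # diagonal of R fixed, yields a Haar-distributed unitary
        Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        Q, R = np.linalg.qr(Z)
        return Q * (np.diag(R) / np.abs(np.diag(R)))

    def output_probability(U, s, t):
        # p(t) = |Perm U_{S,T}|^2 / (s_1!...s_N! t_1!...t_N!)
        rows = [j for j, tj in enumerate(t) for _ in range(tj)]  # jth row repeated t_j times
        cols = [i for i, si in enumerate(s) for _ in range(si)]  # ith column repeated s_i times
        norm = np.prod([float(factorial(k)) for k in list(s) + list(t)])
        return abs(permanent(U[np.ix_(rows, cols)])) ** 2 / norm

    rng = np.random.default_rng(seed=42)
    U = haar_unitary(4, rng)
    s = [1, 1, 1, 0]  # three single photons in the first three modes
    for t in sorted(set(itertools.permutations([1, 1, 1, 0]))):
        print(t, output_probability(U, s, list(t)))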
The main reason for the growing interest in the model of boson sampling is that, despite being non-universal, it is strongly believed to perform a computational task that is intractable for a classical computer. One of the main reasons behind this is that the probability distribution which the boson sampling device has to sample from is related to the permanent of complex matrices. The computation of the permanent is in the general case an extremely hard task: it falls in the #P-hard complexity class. Moreover, its approximation to within a multiplicative error is a #P-hard problem as well.
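To make the exponential cost concrete, here is a sketch of Ryser's inclusion-exclusion formula, a standard exact method for the permanent (this bitmask version runs in O(n^2 2^n) time; a Gray-code refinement reaches O(n 2^n)):

    import numpy as np

    def ryser_permanent(A):
        # Perm(A) = (-1)^n * sum over non-empty column subsets S of
        #           (-1)^{|S|} * prod_i sum_{j in S} A[i, j]
        n = A.shape[0]
        total = 0j
        for mask in range(1, 1 << n):
            cols = [j for j in range(n) if mask >> j & 1]
            total += (-1) ** len(cols) * np.prod(A[:, cols].sum(axis=1))
        return (-1) ** n * total

    # sanity check against the 2x2 case, Perm = ad + bc:
    print(ryser_permanent(np.array([[1.0, 2.0], [3.0, 4.0]])))  # 1*4 + 2*3 = 10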
All current proofs of the hardness of simulating boson sampling on a classical computer rely on the strong computational consequences that its efficient simulation by a classical algorithm would have. Namely, these proofs show that an efficient classical simulation would imply the collapse of the polynomial hierarchy to its third level, a possibility that is considered very unlikely by the computer science community, due to its strong computational implications (in line with the strong implications of the P = NP problem).
The hardness proof of the exact boson sampling problem can be achieved following two distinct paths. Specifically, the first one uses the tools of computational complexity theory and combines the following two facts:

- Approximating the probability p(t_1, t_2, \ldots, t_N) of a specific measurement outcome at the output of a linear interferometer to within a multiplicative error is a #P-hard problem (due to the #P-hardness of the permanent).
- If a polynomial-time classical algorithm for exact boson sampling existed, then the above probability p(t_1, t_2, \ldots, t_N) could be approximated to within a multiplicative error in the BPP^NP complexity class (via Stockmeyer's approximate counting), i.e., within the third level of the polynomial hierarchy.

When combined, these two facts along with Toda's theorem result in the collapse of the polynomial hierarchy, which, as mentioned above, is highly unlikely to occur. This leads to the conclusion that there is no classical polynomial-time algorithm for the exact boson sampling problem.
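Schematically, the argument chains the two facts with Toda's theorem (\mathrm{PH} \subseteq \mathrm{P}^{\#\mathrm{P}}): if exact boson sampling admitted an efficient classical simulation, then

\mathrm{PH} \subseteq \mathrm{P}^{\#\mathrm{P}} \subseteq \mathrm{BPP}^{\mathrm{NP}} \subseteq \Sigma_3^{p},

so the entire polynomial hierarchy would collapse to its third level.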
On the other hand, the alternative proof is inspired by a similar result for another restricted model of quantum computation – the model of instantaneous quantum computing.[11] Namely, the proof uses the KLM scheme, which says that linear optics with adaptive measurements is universal for the class BQP. It also relies on the following facts:

- Adaptive measurements can be replaced by postselection on the measurement outcomes, so that boson sampling augmented with postselection is universal for the class PostBQP (quantum polynomial time with postselection).
- By Aaronson's theorem, PostBQP coincides with the class PP.
- The existence of an efficient classical boson sampling algorithm would imply the simulability of postselected boson sampling within the class PostBPP (classical polynomial time with postselection), which lies in the third level of the polynomial hierarchy.

Again, the combination of these three results, as in the previous case, results in the collapse of the polynomial hierarchy. This makes the existence of a classical polynomial-time algorithm for the exact boson sampling problem highly unlikely.
The best proposed classical algorithm for exact boson sampling runs in time O(n 2^n + m n^2) for a system with n photons and m output modes.
The above hardness proofs are not applicable to a realistic implementation of a boson sampling device, due to the imperfections of any experimental setup (including the presence of noise, decoherence, photon losses, etc.). Therefore, for practical needs one requires a hardness proof for the corresponding approximate task. The latter consists of sampling from a probability distribution that is \varepsilon-close to p(t_1, t_2, \ldots, t_N) in terms of the total variation distance.
Specifically, the proofs of the exact boson sampling problem cannot be directly applied here, since they are based on the #P-hardness of estimating the exponentially small probability p(t_1, t_2, \ldots, t_N) of a specific measurement outcome: an approximate sampler could adversarially corrupt precisely the outcome whose probability one would like to estimate. The idea is therefore to "hide" the probability p(t_1, t_2, \ldots, t_N) inside an N×N Haar-random unitary U. This can be done knowing that any M×M submatrix of a unitary U, chosen randomly according to the Haar measure, is close in variation distance to a matrix X \sim \mathcal{N}(0,1)^{M \times M}_{\mathbb{C}} of independent, identically distributed complex Gaussian entries, provided that M \le N^{1/6}. Consequently, if the linear optical circuit implements a Haar-random unitary, the approximate sampler cannot tell which of the exponentially many probabilities p(t_1, t_2, \ldots, t_N) is being targeted, and that probability is proportional to the squared absolute value of the permanent of an M×M matrix of i.i.d. Gaussians hidden inside U. These arguments bring us to the first conjecture of the hardness proof of the approximate boson sampling problem – the permanent-of-Gaussians conjecture:

- Approximating the permanent of a matrix X \sim \mathcal{N}(0,1)^{M \times M}_{\mathbb{C}} of i.i.d. complex Gaussian entries to within a multiplicative error is a #P-hard task.

Moreover, the above conjecture can be linked to the estimation of |\mathrm{Perm}\, X|^{2}, to which the probability of a specific measurement outcome is proportional, by means of another conjecture – the permanent anticoncentration conjecture:

- There exists a polynomial Q such that for any M and \delta > 0 the probability, over matrices X \sim \mathcal{N}(0,1)^{M \times M}_{\mathbb{C}}, of the event |\mathrm{Perm}\, X| < \sqrt{M!}/Q(M, 1/\delta) is smaller than \delta.
By making use of the above two conjectures (for which there are several pieces of supporting evidence), the final proof states that the existence of a classical polynomial-time algorithm for the approximate boson sampling task implies the collapse of the polynomial hierarchy. It is also worth mentioning another fact important to the proof of this statement, namely the so-called bosonic birthday paradox (in analogy with the well-known birthday paradox). The latter states that if M identical bosons are scattered among N ≫ M^2 modes of a linear interferometer, with no two bosons in the same input mode, then with high probability two bosons will not be found in the same output mode either.[15] This property has been experimentally observed[16] with two and three photons in integrated interferometers of up to 16 modes. On the one hand this feature facilitates the implementation of a restricted boson sampling device. Namely, if the probability of having more than one photon at the output of a linear optical circuit is negligible, photon-number-resolving detectors are no longer required: on-off detectors are sufficient for the realization of the setup.
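A minimal numerical check of the bosonic birthday paradox (again assuming numpy, with a naive permanent adequate for the few-photon regime): for M = 3 photons, the total probability mass on collision-free outcomes of a Haar-random interferometer approaches one as N grows past M^2:

    import itertools
    import numpy as np

    def permanent(A):
        # naive permanent: sum over permutations, fine for small matrices
        n = A.shape[0]
        return sum(np.prod([A[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    def haar_unitary(N, rng):
        Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        Q, R = np.linalg.qr(Z)
        return Q * (np.diag(R) / np.abs(np.diag(R)))

    def collision_free_mass(M, N, rng):
        # total probability that M photons (input modes 0..M-1) exit in M distinct
        # modes; for collision-free outcomes all t_j! = 1, so p(t) = |Perm U_T|^2
        U = haar_unitary(N, rng)
        return sum(abs(permanent(U[np.ix_(outs, range(M))])) ** 2
                   for outs in itertools.combinations(range(N), M))

    rng = np.random.default_rng(seed=0)
    for N in (4, 9, 16, 25):
        print(N, round(collision_free_mass(3, N, rng), 4))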
Although the probability p(t_1, t_2, \ldots, t_N) of a specific measurement outcome is related to the permanent of a submatrix of a unitary matrix, a boson sampling machine does not allow its estimation: the corresponding detection probability is usually exponentially small, so collecting enough statistics to approximate its value would require running the quantum experiment for an exponentially long time.
As already mentioned above, the implementation of a boson sampling machine requires a reliable source of many indistinguishable photons, and this requirement currently remains one of the main difficulties in scaling up the complexity of the device. Namely, despite recent advances in photon generation techniques using atoms, molecules, quantum dots and color centers in diamond, the most widely used method remains the parametric down-conversion (PDC) mechanism. The main advantages of PDC sources are the high photon indistinguishability, collection efficiency and relatively simple experimental setups. However, one of the drawbacks of this approach is its non-deterministic (heralded) nature. Specifically, suppose the probability of generating a single photon by means of a PDC crystal is ε. Then, the probability of generating M single photons simultaneously is ε^M, which decreases exponentially with M. In other words, in order to generate the input state for the boson sampling machine, one would have to wait an exponentially long time, which would kill the advantage of the quantum setup over a classical machine. This characteristic restricted the use of PDC sources to proof-of-principle demonstrations of a boson sampling device.
Recently, however, a new scheme has been proposed to make the best use of PDC sources for the needs of boson sampling, greatly enhancing the rate of M-photon events. This approach has been named scattershot boson sampling,[18][19] and consists of connecting N (N > M) heralded single-photon sources to different input ports of the linear interferometer. Then, by pumping all N PDC crystals with simultaneous laser pulses, the probability of generating M photons is given as

\tbinom{N}{M}\varepsilon^{M},

since any M of the N heralded sources may provide the input; for N considerably larger than M this is a dramatic improvement over the \varepsilon^{M} rate of the fixed-input setup.
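The gain is easy to quantify. In the sketch below the values of \varepsilon, M and N are purely illustrative, and the (1-\varepsilon)^{N-M} factor for the remaining sources firing nothing is left out, as in the expression above:

    from math import comb

    eps, M, N = 0.01, 5, 20        # per-pulse pair probability, photons, sources

    fixed_rate = eps ** M                       # the M designated sources must all fire at once
    scattershot_rate = comb(N, M) * eps ** M    # any M of the N heralded sources may fire

    print(fixed_rate)              # 1e-10
    print(scattershot_rate)        # comb(20, 5) = 15504 times larger, ~1.55e-06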
Scattershot boson sampling is still intractable for a classical computer: in the conventional setup we fixed the columns that defined our M×M submatrix and only varied the rows, whereas now we vary the columns too, depending on which M out of the N PDC crystals generated single photons. Therefore, the hardness proof can be constructed here in a way similar to the original one. Furthermore, scattershot boson sampling has also recently been implemented with six photon-pair sources coupled to integrated photonic circuits of nine and thirteen modes, an important leap towards a convincing experimental demonstration of quantum computational supremacy.[20] The scattershot boson sampling model can be further generalized to the case where both legs of the PDC sources are subject to linear optical transformations (in the original scattershot case, one of the arms is used for heralding, i.e., it goes through the identity channel). Such a twofold scattershot boson sampling model is also computationally hard, as proven by making use of the symmetry of quantum mechanics under time reversal.
Another photonic implementation of boson sampling concerns Gaussian input states, i.e., states whose quasiprobability Wigner distribution function is a Gaussian one. The hardness of the corresponding sampling task can be linked to that of scattershot boson sampling.[21] Namely, the latter can be embedded into the conventional boson sampling setup with Gaussian inputs. For this, one has to generate two-mode entangled Gaussian states and apply a Haar-random unitary U to their "right halves", while performing no operation on the "left halves"; measuring the left halves then reveals which of the input states contained a photon before the application of U.
The above results state that the existence of a polynomial-time classical algorithm for the original boson sampling scheme with indistinguishable single photons (in the exact and approximate cases), for scattershot, as well as for the general Gaussian boson sampling problems, is highly unlikely. Nevertheless, there are some non-trivial realizations of the boson sampling problem that do allow for its efficient classical simulation. One such example is when the optical circuit is injected with distinguishable single photons. In this case, instead of summing the probability amplitudes corresponding to photonic many-particle paths, one has to sum the corresponding probabilities (i.e., the squared absolute values of the amplitudes). Consequently, the detection probability p(t_1, t_2, \ldots, t_N) is given by the permanent of a matrix with non-negative entries, built from the squared moduli of the entries of U. Permanents of non-negative matrices can be approximated to within a multiplicative error in probabilistic polynomial time, which renders the corresponding sampling task classically tractable.
Another instance of a classically simulable boson sampling setup consists of sampling from the probability distribution of coherent states injected into the linear interferometer. The reason is that at the output of a linear optical circuit coherent states remain coherent, and do not create any quantum entanglement among the modes. More precisely, only their amplitudes are transformed, and the transformation can be efficiently calculated on a classical computer (the computation amounts to matrix multiplication). This fact can be used to perform corresponding sampling tasks from another set of states: the so-called classical states, whose Glauber-Sudarshan P function is a well-defined probability distribution. These states can be represented as a mixture of coherent states due to the optical equivalence theorem. Therefore, by picking random coherent states distributed according to the corresponding P function, one can perform efficient classical simulation of boson sampling from this set of classical states.[25][26]
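This classical simulability is simple enough to spell out. Under a linear interferometer U, a product of coherent states |\alpha_1\rangle \cdots |\alpha_N\rangle maps to |\beta_1\rangle \cdots |\beta_N\rangle with \beta = U\alpha, and photon counting on mode j then yields Poisson statistics with mean |\beta_j|^2. A sketch of the resulting polynomial-time sampler (numpy assumed, with an arbitrary example unitary):

    import numpy as np

    rng = np.random.default_rng(seed=1)

    # any 4-mode unitary of interest; here, the Q factor of a random complex matrix
    U = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))[0]

    alpha = np.array([1.0, 0.5j, 0.0, 0.0])   # input coherent amplitudes
    beta = U @ alpha                           # output amplitudes: plain matrix multiplication

    # the output modes remain unentangled coherent states, so the photon counts
    # are independent Poisson variables with means |beta_j|^2
    print(rng.poisson(np.abs(beta) ** 2))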
The above requirements for the photonic boson sampling machine allow for its small-scale construction by means of existing technologies. Consequently, shortly after the theoretical model was introduced, four different groups simultaneously reported its realization. Specifically, this included implementations of boson sampling with three and four photons scattered by linear transformations of five- and six-mode interferometers, realized with fused-fiber beam splitters and with silica-on-silicon and femtosecond laser-written integrated circuits.
Later on, more complex boson sampling experiments were performed, increasing the number of spatial modes of random interferometers up to 13[27] and 9[28] modes, and realizing a 6-mode fully reconfigurable integrated circuit. These experiments altogether constitute the proof-of-principle demonstrations of an operational boson sampling device, and chart the route towards its larger-scale implementations.
A first scattershot boson sampling experiment has recently been implemented using six photon-pair sources coupled to integrated photonic circuits with 13 modes. The 6 photon-pair sources were obtained via type-II PDC processes in 3 different nonlinear crystals (exploiting the polarization degree of freedom), which allowed simultaneous sampling among 8 different input states. The 13-mode interferometer was realized by a femtosecond laser-writing technique on alumino-borosilicate glass. This experimental implementation represents a leap towards an experimental demonstration of quantum computational supremacy.
There are several other proposals for the implementation of photonic boson sampling. This includes, e.g., the scheme for arbitrarily scalable boson sampling using two nested fiber loops. In this case, the architecture employs time-bin encoding, whereby the incident photons form a pulse train entering the loops. Meanwhile, dynamically controlled loop coupling ratios allow the construction of arbitrary linear interferometers. Moreover, the architecture employs only a single point of interference and may thus be easier to stabilize than other implementations.[29]
Another approach relies on the realization of unitary transformations on temporal modes based on dispersion and pulse shaping. Namely, passing consecutively heralded photons through time-independent dispersion and measuring the output time of the photons is equivalent to a boson sampling experiment. With time-dependent dispersion, it is also possible to implement arbitrary single-particle unitaries. This scheme requires a much smaller number of sources and detectors and does not necessitate a large system of beam splitters.[30]
The output of a universal quantum computer running, for example, Shor's factoring algorithm, can be efficiently verified classically, as is the case for all problems in the non-deterministic polynomial-time (NP) complexity class. It is, however, not clear that a similar structure exists for the boson sampling scheme. Namely, as the latter is related to the problem of estimating matrix permanents (falling into the #P-hard complexity class), it is not understood how to verify the correct operation of large versions of the setup. Specifically, the naive verification of the output of a boson sampler by computing the corresponding measurement probabilities represents a problem intractable for a classical computer.
A first relevant question is whether it is possible to distinguish between uniform and boson-sampling distributions by performing a polynomial number of measurements. The initial argument introduced in Ref.[31] stated that, as long as one makes use of symmetric measurement settings, the above is impossible (roughly speaking, a symmetric measurement scheme does not allow for labeling the output modes of the optical circuit). However, within current technologies the assumption of a symmetric setting is not justified (the tracking of the measurement statistics is fully accessible), and therefore the above argument does not apply. It is then possible to define a rigorous and efficient test to discriminate the boson sampling statistics from an unbiased probability distribution.[32] The corresponding discriminator is correlated to the permanent of the submatrix associated with a given measurement pattern, but can be efficiently calculated. This test has been applied experimentally to distinguish between a boson sampling and a uniform distribution in the 3-photon regime with integrated circuits of 5, 7, 9 and 13 modes.
The test above does not distinguish between more complex distributions, such as quantum and classical ones, or between fermionic and bosonic statistics. A physically motivated scenario to be addressed is the unwanted introduction of distinguishability between photons, which destroys quantum interference (this regime is readily accessible experimentally, for example by introducing temporal delay between photons). The opportunity then exists to tune between ideally indistinguishable (quantum) and perfectly distinguishable (classical) data and measure the change in a suitably constructed metric. This scenario can be addressed by a statistical test which performs a one-on-one likelihood comparison of the output probabilities, as sketched below. This test requires the calculation of a small number of permanents, but does not need the calculation of the full expected probability distribution. Experimental implementation of the test has been successfully reported in integrated laser-written circuits for both the standard boson sampling (3 photons in 7-, 9- and 13-mode interferometers) and the scattershot version (3 photons in 9- and 13-mode interferometers with different input states). Another possibility is based on the bunching property of indistinguishable photons: one can analyze the probability of finding a k-fold coincidence measurement outcome (without any multiply populated input mode), which is significantly higher for distinguishable particles than for bosons, due to the bunching tendency of the latter. Finally, leaving the space of random matrices, one may focus on specific multimode setups with certain features. In particular, the analysis of the effect of bosonic clouding (the tendency of bosons to favor events with all particles in the same half of the output array of a continuous-time many-particle quantum walk) has been proven to discriminate the behavior of distinguishable and indistinguishable particles in this specific platform.
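A minimal sketch of such a one-on-one likelihood comparison (an illustrative reconstruction, not the authors' exact estimator): for each observed collision-free outcome, compare the event probability under the indistinguishable hypothesis, |Perm(A)|^2, with that under the distinguishable hypothesis, Perm(|A|^2), and count which hypothesis wins:

    import itertools
    import numpy as np

    def permanent(A):
        # naive permanent: sum over permutations, fine for small matrices
        n = A.shape[0]
        return sum(np.prod([A[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    def likelihood_votes(U, outcomes, inputs):
        # each outcome is a tuple of M distinct output modes;
        # 'inputs' lists the M occupied input modes
        votes = 0
        for outs in outcomes:
            A = U[np.ix_(outs, inputs)]
            p_quantum = abs(permanent(A)) ** 2        # interfering bosons
            p_classical = permanent(np.abs(A) ** 2)   # distinguishable particles
            votes += 1 if p_quantum > p_classical else -1
        return votes   # clearly positive totals over many events favour genuine boson sampling

Only one pair of small permanents is evaluated per detected event, in line with the test requiring a small number of permanents rather than the full expected distribution.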
A different approach to confirming that the boson sampling machine behaves as the theory predicts is to make use of fully reconfigurable optical circuits. With large-scale single-photon and multiphoton interference verified through predictable multimode correlations in a fully characterized circuit, a reasonable assumption is that the system maintains correct operation as the circuit is continuously reconfigured to implement random unitary operations. To this end, one can exploit quantum suppression laws (the probability of specific input-output combinations is suppressed when the linear interferometer is described by a Fourier matrix or other matrices with relevant symmetries).[33] These suppression laws can be classically predicted in efficient ways. This approach also allows one to exclude other physical models, such as mean-field states, which mimic some collective multiparticle properties (including bosonic clouding). The implementation of a Fourier matrix circuit in a fully reconfigurable 6-mode device has been reported, and experimental observations of the suppression law have been shown for 2 photons in 4- and 8-mode Fourier matrices.[34]
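The suppression law for Fourier interferometers can likewise be checked directly. In this small sketch (illustrative, numpy assumed), two photons enter modes 0 and 2 of a 4-mode Fourier circuit U_{jk} = e^{2\pi i jk/N}/\sqrt{N}; the outcomes in which the two output indices have opposite parity come out exactly suppressed:

    import itertools
    import numpy as np

    def permanent(A):
        # naive permanent: sum over permutations, fine for small matrices
        n = A.shape[0]
        return sum(np.prod([A[i, p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    N = 4
    F = np.array([[np.exp(2j * np.pi * j * k / N) for k in range(N)]
                  for j in range(N)]) / np.sqrt(N)       # Fourier interferometer

    inputs = (0, 2)                                      # cyclically symmetric 2-photon input
    for outs in itertools.combinations_with_replacement(range(N), 2):
        A = F[np.ix_(outs, inputs)]
        t_fact = 2.0 if outs[0] == outs[1] else 1.0      # t_j! for a doubly occupied output mode
        print(outs, round(abs(permanent(A)) ** 2 / t_fact, 6))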
Apart from the photonic realization of the boson sampling task, several other setups have been proposed. This includes, e.g., the encoding of bosons into the local transverse phonon modes of trapped ions. The scheme allows deterministic preparation and high-efficiency readout of the corresponding phonon Fock states and universal manipulation of the phonon modes through a combination of inherent Coulomb interaction and individual phase shifts.[35] This scheme is scalable and relies on the recent advances in ion trapping techniques (several dozens of ions can be successfully trapped, for example, in linear Paul traps by making use of anharmonic axial potentials).
Another platform for implementing the boson sampling setup is a system of interacting spins: recent observations show that boson sampling with M particles in N modes is equivalent to the short-time evolution of M excitations in the XY model of 2N spins.[36] Several additional assumptions are required here, including a small boson bunching probability and efficient error postselection. This scalable scheme, however, is rather promising in light of the considerable development in the construction and manipulation of coupled superconducting qubits, and specifically the D-Wave machine.
The task of boson sampling shares peculiar similarities with the problem of determining molecular vibronic spectra: a feasible modification of the boson sampling scheme results in a setup that can be used for the reconstruction of a molecule's Franck–Condon profiles (for which no efficient classical algorithm is currently known). Specifically, the task now is to input specific squeezed coherent states into a linear interferometer that is determined by the properties of the molecule of interest.[37] This prominent observation therefore extends the interest in implementing the boson sampling task well beyond its fundamental motivation.
It has also been suggested to use a superconducting resonator network boson sampling device as an interferometer. This application is assumed to be practical, as small changes in the couplings between the resonators will change the sampling results. Sensing of variation in the parameters capable of altering the couplings is thus achieved by comparing the sampling results to an unaltered reference.[38]
Variants of the boson sampling model have been used to construct classical computational algorithms, aimed, e.g., at the estimation of certain matrix permanents (for instance, permanents of positive-semidefinite matrices related to the corresponding open problem in computer science[39]) by combining tools proper to quantum optics and computational complexity.[40]
Coarse-grained boson sampling has been proposed as a resource of decision and function problems that are computationally hard, and may thus have cryptographic applications.[41] [42] [43] The first related proof-of-principle experiment was performed with a photonic boson-sampling machine (fabricated by a direct femtosecond laser-writing technique),[44] and confirmed many of the theoretical predictions.
Gaussian boson sampling has also been analyzed as a search component for computing the binding propensity between molecules of pharmacological interest.[45]