Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems.[1] Mathematical optimization deals with finding the best solution to a problem (according to some criteria) from a set of possible solutions. Mostly, the optimization problem is formulated as a minimization problem, where one tries to minimize an error which depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or offer a considerable speed-up with respect to the best known classical algorithm.
Data fitting is a process of constructing a mathematical function that best fits a set of data points. The fit's quality is measured by some criteria, usually the distance between the function and the data points.
One of the most common types of data fitting is solving the least squares problem, minimizing the sum of the squares of differences between the data points and the fitted function.
The algorithm is given $N$ input data points $(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)$ and $M$ continuous functions $f_1, f_2, \ldots, f_M$. The algorithm finds and gives as output a continuous function $f_{\vec{\lambda}}$ that is a linear combination of the functions $f_j$:

$f_{\vec{\lambda}}(x) = \sum_{j=1}^{M} f_j(x)\lambda_j$

In other words, the algorithm finds the complex coefficients $\lambda_j$, and thus the vector $\vec{\lambda} = (\lambda_1, \lambda_2, \ldots, \lambda_M)$.
The algorithm is aimed at minimizing the error, which is given by:
$E = \sum_{i=1}^{N} \left\vert f_{\vec{\lambda}}(x_i) - y_i \right\vert^2 = \left\vert F\vec{\lambda} - \vec{y} \right\vert^2$

where $F$ is defined to be the following $N \times M$ matrix:

$F = \begin{pmatrix} f_1(x_1) & \cdots & f_M(x_1) \\ f_1(x_2) & \cdots & f_M(x_2) \\ \vdots & \ddots & \vdots \\ f_1(x_N) & \cdots & f_M(x_N) \end{pmatrix}$
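For illustration, the matrix $F$ and the error $E$ can be assembled classically; the following is a minimal NumPy sketch with an invented polynomial basis and invented data points, not part of the quantum algorithm itself:

```python
import numpy as np

# Invented basis functions f_1, ..., f_M: a polynomial basis
# f_j(x) = x^(j-1), chosen purely for illustration.
M = 3
basis = [lambda x, j=j: x ** j for j in range(M)]

# Invented data points (x_i, y_i), i = 1, ..., N.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.1, 1.8, 3.2, 5.1, 7.9])

# F[i, j] = f_j(x_i): the N x M matrix defined above.
F = np.column_stack([f(x) for f in basis])

# Coefficients minimizing E = |F lambda - y|^2, and the error itself.
lam, *_ = np.linalg.lstsq(F, y, rcond=None)
E = np.sum(np.abs(F @ lam - y) ** 2)
print(lam, E)
```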
The quantum least-squares fitting algorithm[2] makes use of a version of Harrow, Hassidim, and Lloyd's quantum algorithm for linear systems of equations (HHL), and outputs the coefficients $\lambda_j$ and the fit quality estimation $E$.
Because the quantum algorithm is mainly based on the HHL algorithm, it suggests an exponential improvement[3] in the case where $F$ is sparse and the condition number (namely, the ratio between the largest and the smallest eigenvalues) of both $F F^\dagger$ and $F^\dagger F$ is small.
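The fit that the quantum routine encodes is the pseudo-inverse solution $\vec{\lambda} = F^{+}\vec{y}$. A classical sketch of that computation, and of the condition number that governs HHL's cost, with an invented $F$ and $\vec{y}$:

```python
import numpy as np

# Invented example design matrix F and targets y.
F = np.column_stack([np.ones(5),
                     np.linspace(0, 2, 5),
                     np.linspace(0, 2, 5) ** 2])
y = np.array([1.1, 1.8, 3.2, 5.1, 7.9])

# The solution HHL encodes: the pseudo-inverse fit lambda = F^+ y.
lam = np.linalg.pinv(F) @ y

# The claimed speed-up requires F to be sparse and the condition number
# of F^dagger F (and F F^dagger) to be small.
kappa = np.linalg.cond(F.conj().T @ F)
print(lam, kappa)
```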
Semidefinite programming (SDP) is an optimization subfield dealing with the optimization of a linear objective function (a user-specified function to be minimized or maximized), over the intersection of the cone of positive semidefinite matrices with an affine space. The objective function is an inner product of a matrix $C$ (given as an input) with the variable $X$. Denote by $\mathbb{S}^n$ the space of all $n \times n$ symmetric matrices. The variable $X$ must lie in the (closed convex) cone of positive semidefinite symmetric matrices $\mathbb{S}^n_+$. The inner product of two matrices $A, B \in \mathbb{S}^n$ is defined as:

$\langle A, B \rangle_{\mathbb{S}^n} = \operatorname{tr}(A^T B) = \sum_{i=1,j=1}^{n} A_{ij} B_{ij}.$
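As a quick sanity check, the trace form and the elementwise sum agree; a short NumPy sketch with arbitrary symmetric example matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Arbitrary symmetric example matrices A, B in S^n.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

inner_trace = np.trace(A.T @ B)   # <A, B> = tr(A^T B)
inner_sum = (A * B).sum()         # = sum over i, j of A_ij * B_ij
assert np.isclose(inner_trace, inner_sum)
```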
The problem may have additional constraints (given as inputs), also usually formulated as inner products. Each constraint forces the inner product of a matrix $A_k$ (given as an input) with the optimization variable $X$ to be smaller than a specified value $b_k$ (given as an input). Finally, the SDP problem can be written as:

$\begin{array}{rl} {\displaystyle\min_{X \in \mathbb{S}^n}} & \langle C, X \rangle_{\mathbb{S}^n} \\ \text{subject to} & \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k, \quad k = 1, \ldots, m \\ & X \succeq 0 \end{array}$
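For comparison, the same formulation can be solved classically with an off-the-shelf SDP solver. The following is a sketch assuming the CVXPY library, with invented toy data $C$, $A_k$, $b_k$; it serves only as a classical reference point for the formulation above:

```python
import numpy as np
import cvxpy as cp

# Invented toy problem data C, A_k, b_k.
n, m = 3, 2
rng = np.random.default_rng(1)
C = -np.eye(n)  # minimizing <C, X> here means maximizing tr(X)
A = [np.diag(rng.uniform(0.5, 1.5, size=n)) for _ in range(m)]
b = rng.uniform(1.0, 2.0, size=m)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0]  # X positive semidefinite
constraints += [cp.trace(A[k].T @ X) <= b[k] for k in range(m)]
problem = cp.Problem(cp.Minimize(cp.trace(C.T @ X)), constraints)
problem.solve()
print(problem.value)
```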
The best classical algorithm is not known to unconditionally run in polynomial time. The corresponding feasibility problem is known to either lie outside of the union of the complexity classes NP and co-NP, or in the intersection of NP and co-NP.[4]
The algorithm inputs are $A_1, \ldots, A_m, C, b_1, \ldots, b_m$ and parameters regarding the solution's trace, precision and optimal value (the objective function's value at the optimal point).
The quantum algorithm[5] consists of several iterations. In each iteration, it solves a feasibility problem, namely, finds any solution satisfying the following conditions (giving a threshold $t$):

$\begin{array}{l} \langle C, X \rangle_{\mathbb{S}^n} \leq t \\ \langle A_k, X \rangle_{\mathbb{S}^n} \leq b_k, \quad k = 1, \ldots, m \\ X \succeq 0 \end{array}$
In each iteration, a different threshold $t$ is chosen, and the algorithm outputs either a solution $X$ such that $\langle C, X \rangle_{\mathbb{S}^n} \leq t$ (and the other constraints are satisfied too) or an indication that no such solution exists. The algorithm performs a binary search to find the minimal threshold $t$ for which a solution $X$ still exists: this gives the minimal solution to the SDP problem.
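The outer loop of this search is purely classical. A sketch, in which `feasible(t)` is a hypothetical placeholder standing in for the quantum feasibility subroutine:

```python
def minimize_sdp(feasible, t_lo, t_hi, eps=1e-6):
    """Binary-search the smallest threshold t that is still feasible.

    `feasible(t)` is a placeholder for the quantum subroutine: it returns
    a solution X with <C, X> <= t satisfying all constraints, or None if
    no such X exists.
    """
    best = None
    while t_hi - t_lo > eps:
        t = (t_lo + t_hi) / 2
        X = feasible(t)
        if X is not None:
            best, t_hi = X, t   # a solution exists: tighten from above
        else:
            t_lo = t            # infeasible: raise the lower bound
    return best
```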
The quantum algorithm provides a quadratic improvement over the best classical algorithm in the general case, and an exponential improvement when the input matrices are of low rank.
The combinatorial optimization problem is aimed at finding an optimal object from a finite set of objects. The problem can be phrased as a maximization of an objective function which is a sum of Boolean functions. Each Boolean function $C_\alpha \colon \lbrace 0,1 \rbrace^n \to \lbrace 0,1 \rbrace$ gets as input the $n$-bit string $z = z_1 z_2 \ldots z_n$ and gives as output one bit (0 or 1). The combinatorial optimization problem of $n$ bits and $m$ clauses is finding an $n$-bit string $z$ that maximizes the function

$C(z) = \sum_{\alpha=1}^{m} C_\alpha(z).$
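As a concrete (invented) instance, the clauses below reward cutting the edges of a small graph; brute force shows what evaluating and maximizing $C(z)$ looks like, and why exhaustive search does not scale:

```python
from itertools import product

# Invented example: each clause C_alpha returns 1 iff the edge (i, j)
# is "cut", i.e. its endpoints get different bits.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
clauses = [lambda z, i=i, j=j: int(z[i] != z[j]) for (i, j) in edges]

def C(z):
    # C(z) = sum over alpha of C_alpha(z)
    return sum(clause(z) for clause in clauses)

# Brute-force maximization over all 2^n bit strings: exponential cost,
# which is what approximate algorithms try to avoid.
n = 4
best = max(product((0, 1), repeat=n), key=C)
print(best, C(best))
```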
Approximate optimization is a way of finding an approximate solution to an optimization problem, which is often NP-hard. The approximated solution of the combinatorial optimization problem is a string $z$ that is close to maximizing $C(z)$.
For combinatorial optimization, the quantum approximate optimization algorithm (QAOA)[6] briefly had a better approximation ratio than any known polynomial-time classical algorithm (for a certain problem),[7] until a more effective classical algorithm was proposed.[8] The relative speed-up of the quantum algorithm is an open research question.
QAOA consists of the following steps:

1. Defining a cost Hamiltonian $H_C$ such that its ground state encodes the solution to the optimization problem.
2. Defining a mixer Hamiltonian $H_M$.
3. Defining the oracles $U_C(\gamma) = \exp(-\imath\gamma H_C)$ and $U_M(\alpha) = \exp(-\imath\alpha H_M)$, with parameters $\gamma$ and $\alpha$.
4. Repeated application of the oracles $U_C$ and $U_M$, in the order: $U(\boldsymbol\gamma, \boldsymbol\alpha) = \coprod_{i=1}^{N} \left( U_C(\gamma_i) U_M(\alpha_i) \right)$
5. Preparing an initial state, applying $U(\boldsymbol\gamma, \boldsymbol\alpha)$ to the state and measuring it.
6. Using classical methods to optimize the parameters $\boldsymbol\gamma, \boldsymbol\alpha$ and measuring the output state again.

The layout of the algorithm, namely the use of a cost Hamiltonian $H_C$ and a mixer Hamiltonian $H_M$, is inspired by the quantum adiabatic theorem: a system prepared in the ground state of a time-dependent Hamiltonian remains in the ground state if the Hamiltonian evolves slowly enough. Identifying the initial Hamiltonian with $H_M$ and the final Hamiltonian with $H_C$, whose ground state encodes the solution to the optimization problem of interest, one can approximate the optimization problem as an adiabatic evolution from the initial Hamiltonian to the problem Hamiltonian.

In general, QAOA relies on unitary operators depending on $2p$ angles (parameters), where $p > 1$ is an input integer that can be identified with the number of layers of the oracle $U(\boldsymbol\gamma, \boldsymbol\alpha)$. These operators are iteratively applied on a state that is an equal-weighted quantum superposition of all the possible states in the computational basis. In each iteration, the state is measured in the computational basis and the objective function $C(z)$ is estimated. The angles are then updated classically to increase $C(z)$. After this procedure is repeated a sufficient number of times, the value of $C(z)$ is almost optimal, and the state being measured is close to optimal as well.
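The loop above can be sketched with dense matrices for very small systems. The following is a minimal illustration using an invented MaxCut-style cost function and an invented layer count; a real implementation would use a quantum SDK rather than explicit $2^n$-dimensional matrices:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

n, p = 4, 2  # number of qubits and number of QAOA layers (invented)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli-X

def kron_at(op, i):
    """Embed a single-qubit operator `op` on qubit i of an n-qubit space."""
    mats = [op if k == i else np.eye(2) for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Invented cost function: C(z) counts cut edges of a small graph.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
def C(z):
    return sum(z[i] != z[j] for i, j in edges)

# Cost Hamiltonian is diagonal, with H_C|z> = -C(z)|z>, so maximizing
# C(z) means finding the ground state of H_C.
H_C_diag = -np.array([C([(b >> (n - 1 - q)) & 1 for q in range(n)])
                      for b in range(2 ** n)], dtype=float)

# Mixer Hamiltonian: sum of Pauli-X on every qubit.
H_M = sum(kron_at(X, i) for i in range(n))

plus = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)  # |+...+>

def expected_cost(params):
    gammas, alphas = params[:p], params[p:]
    state = plus.copy()
    for g, a in zip(gammas, alphas):
        state = np.exp(-1j * g * H_C_diag) * state  # U_C(gamma): diagonal
        state = expm(-1j * a * H_M) @ state         # U_M(alpha)
    # The classical optimizer minimizes <H_C> = -<C(z)>.
    return np.real(state.conj() @ (H_C_diag * state))

result = minimize(expected_cost, x0=np.full(2 * p, 0.5), method="COBYLA")
print("approximate max of C(z):", -result.fun)
```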
The goal here is to find a minimum vertex cover of a graph: a collection of vertices such that each edge in the graph contains at least one of the vertices in the cover. Hence, these vertices “cover” all the edges. We wish to find a vertex cover that has the smallest possible number of vertices. Vertex covers can be represented by a bit string in which each bit denotes whether the corresponding vertex is present in the cover. For example, the bit string 0101 represents a cover consisting of the second and fourth vertices in a graph with four vertices.

Consider the graph given in the figure. It has four vertices, and there are two minimum vertex covers for this graph: vertices 0 and 2, and vertices 1 and 2. These can be respectively represented by the bit strings 1010 and 0110. The goal of the algorithm is to sample these bit strings with high probability. In this case, the cost Hamiltonian has two ground states, |1010⟩ and |0110⟩, coinciding with the solutions of the problem. The mixer Hamiltonian is the simple, non-commuting sum of Pauli-X operations on each node of the graph. The two Hamiltonians are given by:
$H_C = -0.25 Z_3 + 0.5 Z_0 + 0.5 Z_1 + 1.25 Z_2 + 0.75 (Z_0 Z_1 + Z_0 Z_2 + Z_2 Z_3 + Z_1 Z_2)$

$H_M = X_0 + X_1 + X_2 + X_3$
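Since $H_C$ is diagonal in the computational basis, its ground states can be checked directly. A small NumPy sketch (taking qubit 0 as the leftmost bit) that recovers |1010⟩ and |0110⟩ as the minimal-energy strings:

```python
import numpy as np

n = 4
Z = np.diag([1.0, -1.0])  # Pauli-Z

def op_at(op, *qubits):
    """Tensor product with `op` on the given qubits, identity elsewhere."""
    mats = [op if k in qubits else np.eye(2) for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Cost Hamiltonian from the text (qubit 0 is the leftmost bit).
H_C = (-0.25 * op_at(Z, 3) + 0.5 * op_at(Z, 0) + 0.5 * op_at(Z, 1)
       + 1.25 * op_at(Z, 2)
       + 0.75 * (op_at(Z, 0, 1) + op_at(Z, 0, 2)
                 + op_at(Z, 2, 3) + op_at(Z, 1, 2)))

# H_C is diagonal; its minimal diagonal entries mark the ground states.
energies = np.diag(H_C)
ground = np.flatnonzero(np.isclose(energies, energies.min()))
print([format(g, f"0{n}b") for g in ground])  # expected: ['0110', '1010']
```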