Quantum optimization algorithms explained

Quantum optimization algorithms are quantum algorithms that are used to solve optimization problems.[1] Mathematical optimization deals with finding the best solution to a problem (according to some criteria) from a set of possible solutions. In most cases, the optimization problem is formulated as a minimization problem, where one tries to minimize an error that depends on the solution: the optimal solution has the minimal error. Different optimization techniques are applied in various fields such as mechanics, economics and engineering, and as the complexity and amount of data involved rise, more efficient ways of solving optimization problems are needed. Quantum computing may allow problems which are not practically feasible on classical computers to be solved, or suggest a considerable speed-up with respect to the best known classical algorithm.

Quantum data fitting

Data fitting is a process of constructing a mathematical function that best fits a set of data points. The fit's quality is measured by some criteria, usually the distance between the function and the data points.

Quantum least squares fitting

One of the most common types of data fitting is solving the least squares problem, minimizing the sum of the squares of differences between the data points and the fitted function.

The algorithm is given N input data points (x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N) and M continuous functions f_1, f_2, \ldots, f_M. The algorithm finds and gives as output a continuous function f_{\vec{\lambda}} that is a linear combination of the functions f_j:

f_{\vec{\lambda}}(x) = \sum_{j=1}^M f_j(x)\,\lambda_j

In other words, the algorithm finds the complex coefficients \lambda_j, and thus the vector \vec{\lambda} = (\lambda_1, \lambda_2, \ldots, \lambda_M).

The algorithm is aimed at minimizing the error, which is given by:

E = \sum_{i=1}^N \left\vert f_{\vec{\lambda}}(x_i) - y_i \right\vert^2 = \sum_{i=1}^N \left\vert \sum_{j=1}^M f_j(x_i)\lambda_j - y_i \right\vert^2 = \left\vert F\vec{\lambda} - \vec{y} \right\vert^2

where F is defined to be the following N \times M matrix:

F = \begin{pmatrix} f_1(x_1) & \cdots & f_M(x_1) \\ f_1(x_2) & \cdots & f_M(x_2) \\ \vdots & \ddots & \vdots \\ f_1(x_N) & \cdots & f_M(x_N) \end{pmatrix}
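For intuition, the minimization itself is an ordinary linear-algebra task when carried out classically. The sketch below is a minimal NumPy illustration (the basis functions and data points are invented for the example): it builds the matrix F and solves min_\lambda |F\lambda - y|^2.

import numpy as np

# Illustrative basis functions f_1, ..., f_M (here: 1, x, x^2) and data (x_i, y_i).
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x - 0.5 * x**2 + 0.01 * np.random.randn(x.size)

# F[i, j] = f_j(x_i), an N x M matrix.
F = np.column_stack([f(x) for f in basis])

# Classical least squares: minimize |F lambda - y|^2 over lambda.
lam, *_ = np.linalg.lstsq(F, y, rcond=None)

# Fit-quality estimate E = |F lambda - y|^2.
E = float(np.sum(np.abs(F @ lam - y) ** 2))
print("coefficients:", lam, "error E:", E)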

The quantum least-squares fitting algorithm[2] makes use of a version of Harrow, Hassidim, and Lloyd's quantum algorithm for linear systems of equations (HHL), and outputs the coefficients \lambda_j and the fit quality estimation E. It consists of three subroutines: an algorithm for performing a pseudo-inverse operation, one routine for the fit quality estimation, and an algorithm for learning the fit parameters.

Because the quantum algorithm is mainly based on the HHL algorithm, it suggests an exponential improvement[3] in the case where F is sparse and the condition number (namely, the ratio between the largest and the smallest eigenvalues) of both F F^\dagger and F^\dagger F is small.
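Whether a given instance falls into this favourable regime can be checked from the classical description of F, since the nonzero eigenvalues of F F^\dagger and F^\dagger F are the squared singular values of F. A rough NumPy check, reusing the matrix F from the sketch above purely for illustration:

import numpy as np

# Singular values of F; the condition number of F^dagger F (restricted to its
# nonzero spectrum) is the square of sigma_max / sigma_min.
sigma = np.linalg.svd(F, compute_uv=False)
print("cond(F) =", sigma.max() / sigma.min())
print("cond(F^dagger F) =", (sigma.max() / sigma.min()) ** 2)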

Quantum semidefinite programming

Semidefinite programming (SDP) is an optimization subfield dealing with the optimization of a linear objective function (a user-specified function to be minimized or maximized) over the intersection of the cone of positive semidefinite matrices with an affine space. The objective function is an inner product of a matrix C (given as an input) with the variable X. Denote by S^n the space of all n \times n symmetric matrices. The variable X must lie in the (closed convex) cone of positive semidefinite symmetric matrices S^n_+. The inner product of two matrices is defined as:

\langle A, B \rangle_{S^n} = \operatorname{tr}(A^T B) = \sum_{i=1,j=1}^n A_{ij} B_{ij}
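The two expressions for the inner product agree, as a quick NumPy check on random symmetric matrices illustrates (the matrices here are arbitrary examples):

import numpy as np

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)); A = (A + A.T) / 2  # symmetrize
B = rng.standard_normal((n, n)); B = (B + B.T) / 2  # symmetrize

inner_trace = np.trace(A.T @ B)   # tr(A^T B)
inner_sum = (A * B).sum()         # sum_ij A_ij B_ij
assert np.isclose(inner_trace, inner_sum)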

The problem may have additional constraints (given as inputs), also usually formulated as inner products. Each constraint forces the inner product of a matrix A_k (given as an input) with the optimization variable X to be smaller than a specified value b_k (given as an input). Finally, the SDP problem can be written as:

\begin{array}{rl} \displaystyle\min_{X \in S^n} & \langle C, X \rangle_{S^n} \\ \text{subject to} & \langle A_k, X \rangle_{S^n} \leq b_k, \quad k = 1,\ldots,m \\ & X \succeq 0 \end{array}
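To make the formulation concrete, a small instance can be solved with an off-the-shelf classical modeling library. The sketch below uses CVXPY (an assumed dependency, with an SDP-capable solver installed); the matrices C, A_k and the bounds b_k are random and only for illustration, and a bound on the trace is added so the toy instance has a finite optimum (the quantum algorithm below likewise takes the solution's trace as a parameter).

import numpy as np
import cvxpy as cp

n, m = 3, 2
rng = np.random.default_rng(1)
sym = lambda M: (M + M.T) / 2

# Illustrative inputs: objective matrix C, constraint matrices A_k, bounds b_k.
C = sym(rng.standard_normal((n, n)))
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]
b = rng.uniform(1.0, 2.0, size=m)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) <= 1]                       # PSD cone and trace bound
constraints += [cp.trace(A[k] @ X) <= b[k] for k in range(m)]  # <A_k, X> <= b_k
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()
print("optimal value:", problem.value)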

The best classical algorithm is not known to unconditionally run in polynomial time. The corresponding feasibility problem is known to either lie outside of the union of the complexity classes NP and co-NP, or in the intersection of NP and co-NP.[4]

The quantum algorithm

The algorithm inputs are A_1, \ldots, A_m, C, b_1, \ldots, b_m and parameters regarding the solution's trace, precision and optimal value (the objective function's value at the optimal point).

The quantum algorithm[5] consists of several iterations. In each iteration, it solves a feasibility problem, namely, finds any solution satisfying the following conditions (given a threshold t):

\begin{array}{lr} \langle C, X \rangle_{S^n} \leq t \\ \langle A_k, X \rangle_{S^n} \leq b_k, \quad k = 1,\ldots,m \\ X \succeq 0 \end{array}

In each iteration, a different threshold t is chosen, and the algorithm outputs either a solution X such that \langle C, X \rangle_{S^n} \leq t (and the other constraints are satisfied, too) or an indication that no such solution exists. The algorithm performs a binary search to find the minimal threshold t for which a solution X still exists: this gives the minimal solution to the SDP problem.
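Only the feasibility subproblem is quantum; the outer binary search over the threshold t is classical. A minimal Python sketch of that outer loop, where is_feasible is a hypothetical placeholder for whatever routine answers the feasibility question (quantum or classical):

def minimize_threshold(is_feasible, lo, hi, eps=1e-3):
    """Binary search for the smallest t in [lo, hi] with is_feasible(t) True,
    assuming feasibility is monotone in t (feasible at t implies feasible at t' > t)."""
    if not is_feasible(hi):
        return None  # no solution even at the loosest threshold
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if is_feasible(mid):
            hi = mid  # a solution with <C, X> <= mid exists; tighten the threshold
        else:
            lo = mid  # infeasible at mid; relax the threshold
    return hi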

The quantum algorithm provides a quadratic improvement over the best classical algorithm in the general case, and an exponential improvement when the input matrices are of low rank.

Quantum combinatorial optimization

The combinatorial optimization problem is aimed at finding an optimal object from a finite set of objects. The problem can be phrased as the maximization of an objective function which is a sum of boolean functions. Each boolean function C_\alpha \colon \lbrace 0,1 \rbrace^n \rightarrow \lbrace 0,1 \rbrace gets as input the n-bit string z = z_1 z_2 \ldots z_n and gives as output one bit (0 or 1). The combinatorial optimization problem of n bits and m clauses is finding an n-bit string z that maximizes the function

C(z) = \sum_{\alpha=1}^m C_\alpha(z)
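As a small worked example, take Max-Cut-style clauses: each clause C_\alpha corresponds to an edge of a graph and is satisfied when the edge's endpoints receive different bits. The four edges below are invented only for illustration.

from itertools import product

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]  # illustrative clauses, one per edge

def clause(edge, z):
    """C_alpha(z): 1 if the edge's endpoints get different bits, else 0."""
    u, v = edge
    return 1 if z[u] != z[v] else 0

def C(z):
    """Objective C(z) = sum_alpha C_alpha(z)."""
    return sum(clause(e, z) for e in edges)

# Brute force over all 4-bit strings to find a maximizer (feasible only for tiny n).
best = max(product((0, 1), repeat=4), key=C)
print(best, C(best))

Brute force scales as 2^n, which is exactly why approximate methods are of interest for larger instances.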

Approximate optimization is a way of finding an approximate solution to an optimization problem, which is often NP-hard. The approximated solution of the combinatorial optimization problem is a string z that is close to maximizing C(z).

Quantum approximate optimization algorithm

For combinatorial optimization, the quantum approximate optimization algorithm (QAOA)[6] briefly had a better approximation ratio than any known polynomial time classical algorithm (for a certain problem),[7] until a more effective classical algorithm was proposed.[8] The relative speed-up of the quantum algorithm is an open research question.

QAOA consists of the following steps:

  1. Defining a cost Hamiltonian H_C such that its ground state encodes the solution to the optimization problem.
  2. Defining a mixer Hamiltonian H_M.
  3. Defining the oracles U_C(\gamma) = \exp(-i\gamma H_C) and U_M(\alpha) = \exp(-i\alpha H_M), with parameters \gamma and \alpha.
  4. Repeated application of the oracles U_C and U_M, in the order: U(\boldsymbol\gamma, \boldsymbol\alpha) = \prod_{i=1}^N \left( U_C(\gamma_i)\, U_M(\alpha_i) \right)
  5. Preparing an initial state that is a superposition of all possible states, and applying U(\boldsymbol\gamma, \boldsymbol\alpha) to the state.
  6. Using classical methods to optimize the parameters \boldsymbol\gamma, \boldsymbol\alpha and measuring the output state of the optimized circuit to obtain the approximate optimal solution to the cost Hamiltonian. An optimal solution will be one that maximizes the expectation value of the cost Hamiltonian H_C.
The layout of the algorithm, viz. the use of cost and mixer Hamiltonians, is inspired by the quantum adiabatic theorem, which states that, starting in a ground state of a time-dependent Hamiltonian, if the Hamiltonian evolves slowly enough, the final state will be a ground state of the final Hamiltonian. Moreover, the adiabatic theorem can be generalized to any other eigenstate as long as there is no overlap (degeneracy) between different eigenstates across the evolution. Identifying the initial Hamiltonian with H_M and the final Hamiltonian with H_C, whose ground states encode the solution to the optimization problem of interest, one can approximate the optimization problem as the adiabatic evolution of the Hamiltonian from the initial to the final one, whose ground (eigen)state gives the optimal solution. In general, QAOA relies on the use of unitary operators dependent on 2p angles (parameters), where p > 1 is an input integer, which can be identified with the number of layers of the oracle U(\boldsymbol\gamma, \boldsymbol\alpha). These operators are iteratively applied on a state that is an equal-weighted quantum superposition of all the possible states in the computational basis. In each iteration, the state is measured in the computational basis and the boolean function C(z) is estimated. The angles are then updated classically to increase C(z). After this procedure is repeated a sufficient number of times, the value of C(z) is almost optimal, and the state being measured is close to being optimal as well. This procedure is highlighted using the following example of finding the minimum vertex cover of a graph.[9]
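For small n the whole loop can be mimicked on a classical statevector simulator. The sketch below is a pure-NumPy illustration with p = 1: the cost is the Max-Cut-style C(z) from the earlier example, the classical outer loop is a crude grid search rather than a proper optimizer, and all choices (edge set, grid resolution) are assumptions made only for the demonstration.

import numpy as np
from itertools import product

n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]       # illustrative clauses (Max-Cut style)

def C_of_z(z):                                  # classical objective C(z)
    return sum(1 for (u, v) in edges if z[u] != z[v])

# Diagonal of the cost Hamiltonian in the computational basis.
basis = list(product((0, 1), repeat=n))
cost_diag = np.array([C_of_z(z) for z in basis], dtype=float)

def apply_mixer(state, alpha):
    """Mixer layer: exp(-i alpha X) applied to every qubit."""
    rx = np.array([[np.cos(alpha), -1j * np.sin(alpha)],
                   [-1j * np.sin(alpha), np.cos(alpha)]])
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.tensordot(rx, psi, axes=([1], [q]))
        psi = np.moveaxis(psi, 0, q)
    return psi.reshape(-1)

def qaoa_state(gamma, alpha):
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)  # uniform superposition
    psi = np.exp(-1j * gamma * cost_diag) * psi                 # cost layer U_C(gamma)
    return apply_mixer(psi, alpha)                              # mixer layer U_M(alpha)

def expected_cost(gamma, alpha):
    psi = qaoa_state(gamma, alpha)
    return float(np.real(np.sum(np.abs(psi) ** 2 * cost_diag)))

# Crude classical outer loop: grid search over the two angles (p = 1).
grid = np.linspace(0, np.pi, 40)
best = max(((g, a) for g in grid for a in grid), key=lambda ga: expected_cost(*ga))
psi = qaoa_state(*best)
print("best angles:", best)
print("most probable bit string:", basis[int(np.argmax(np.abs(psi) ** 2))])

On real hardware the expectation value of the cost Hamiltonian is estimated from repeated measurements rather than read off the statevector, but the structure of the loop is the same.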

QAOA for finding the minimum vertex cover of a graph

The goal here is to find a minimum vertex cover of a graph: a collection of vertices such that each edge in the graph contains at least one of the vertices in the cover. Hence, these vertices "cover" all the edges. We wish to find a vertex cover that has the smallest possible number of vertices. Vertex covers can be represented by a bit string in which each bit denotes whether the corresponding vertex is present in the cover. For example, the bit string 0101 represents a cover consisting of the second and fourth vertex in a graph with four vertices.

Consider a graph with four vertices and the edges (0,1), (0,2), (1,2) and (2,3). There are two minimum vertex covers for this graph: the vertices 0 and 2, and the vertices 1 and 2. These can be represented by the bit strings 1010 and 0110, respectively. The goal of the algorithm is to sample these bit strings with high probability. In this case, the cost Hamiltonian has two ground states, |1010⟩ and |0110⟩, coinciding with the solutions of the problem. The mixer Hamiltonian is the simple, non-commuting sum of Pauli-X operations on each node of the graph. The two Hamiltonians are given by:

H_C = -0.25\,Z_3 + 0.5\,Z_0 + 0.5\,Z_1 + 1.25\,Z_2 + 0.75\,(Z_0 Z_1 + Z_0 Z_2 + Z_2 Z_3 + Z_1 Z_2)

H_M = X_0 + X_1 + X_2 + X_3
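Since H_C is diagonal in the computational basis, its ground states can be checked directly by evaluating its diagonal over all 16 bit strings. The short NumPy script below (illustrative, using the convention that bit value 1 corresponds to Z eigenvalue -1) confirms that the minimum-energy strings are 0110 and 1010.

import numpy as np
from itertools import product

def energy(z):
    # Z eigenvalue: +1 for bit 0, -1 for bit 1.
    Z = [1 - 2 * b for b in z]
    return (-0.25 * Z[3] + 0.5 * Z[0] + 0.5 * Z[1] + 1.25 * Z[2]
            + 0.75 * (Z[0] * Z[1] + Z[0] * Z[2] + Z[2] * Z[3] + Z[1] * Z[2]))

energies = {"".join(map(str, z)): energy(z) for z in product((0, 1), repeat=4)}
ground = min(energies.values())
print([s for s, e in energies.items() if np.isclose(e, ground)])  # ['0110', '1010']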

Notes and References

  1. Moll, Nikolaj; Barkoutsos, Panagiotis; Bishop, Lev S.; Chow, Jerry M.; Cross, Andrew; Egger, Daniel J.; Filipp, Stefan; Fuhrer, Andreas; Gambetta, Jay M.; Ganzhorn, Marc; Kandala, Abhinav; Mezzacapo, Antonio; Müller, Peter; Riess, Walter; Salis, Gian; Smolin, John; Tavernelli, Ivano; Temme, Kristan (2018). "Quantum optimization using variational algorithms on near-term quantum devices". Quantum Science and Technology. 3 (3): 030503. arXiv:1710.01022. doi:10.1088/2058-9565/aab822.
  2. Wiebe, Nathan; Braun, Daniel; Lloyd, Seth (2 August 2012). "Quantum Algorithm for Data Fitting". Physical Review Letters. 109 (5): 050505. arXiv:1204.5242. doi:10.1103/PhysRevLett.109.050505.
  3. Montanaro, Ashley (12 January 2016). "Quantum algorithms: an overview". npj Quantum Information. 2: 15023. arXiv:1511.04206. doi:10.1038/npjqi.2015.23.
  4. Ramana, Motakuri V. (1997). "An exact duality theory for semidefinite programming and its complexity implications". Mathematical Programming. 77: 129–162. doi:10.1007/BF02614433.
  5. Brandao, Fernando G. S. L.; Svore, Krysta (2016). "Quantum Speed-ups for Semidefinite Programming". arXiv:1609.05537 [quant-ph].
  6. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm". arXiv:1411.4028 [quant-ph].
  7. Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam (2014). "A Quantum Approximate Optimization Algorithm Applied to a Bounded Occurrence Constraint Problem". arXiv:1412.6062 [quant-ph].
  8. Barak, Boaz; Moitra, Ankur; O'Donnell, Ryan; Raghavendra, Prasad; Regev, Oded; Steurer, David; Trevisan, Luca; Vijayaraghavan, Aravindan; Witmer, David; Wright, John (2015). "Beating the random assignment on constraint satisfaction problems of bounded degree". arXiv:1505.03424 [cs.CC].
  9. Ceroni, Jack (18 November 2020). "Intro to QAOA". PennyLane Demos.