Time-evolving block decimation explained

The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems with at most nearest-neighbour interactions. It is dubbed "time-evolving block decimation" because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the matrix product states formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.

Introduction

Simulating general quantum many-body systems is inherently difficult: the number of parameters needed to fully characterize a state grows exponentially with the system size, and with it the computational cost. The raw approach of directly dealing with all of these parameters leads, in the best cases, to unreasonably long computational times and extensive use of memory. A natural way around this problem is to look for numerical methods that deal with special cases, where one can profit from the physics of the system. Over time a number of such methods have been developed and put into practice, one of the most successful being the quantum Monte Carlo (QMC) method. The density matrix renormalization group (DMRG) method is, next to QMC, another very reliable method, with an expanding community of users and an increasing number of applications to physical systems.

When the first quantum computer is plugged in and functioning, the prospects for computational physics will look rather promising, but until that day one has to restrict oneself to the mundane tools offered by classical computers. While experimental physicists are putting a lot of effort into building the first quantum computer, theoretical physicists are searching, in the field of quantum information theory (QIT), for genuine quantum algorithms: algorithms for problems that perform badly on a classical computer but quickly and successfully on a quantum one. The search for such algorithms is still ongoing, the best-known (and almost the only ones found) being Shor's algorithm for factoring large numbers and Grover's search algorithm.

In the field of QIT one has to identify the primary resources necessary for genuine quantum computation. Such a resource may be responsible for the speedup of quantum over classical computation, and identifying it also means identifying systems that can be simulated reasonably efficiently on a classical computer. One such resource is quantum entanglement; hence, it is possible to establish a distinct lower bound for the entanglement needed for quantum computational speedups.

Guifré Vidal, then at the Institute for Quantum Information, Caltech, proposed a scheme useful for simulating a certain category of quantum[1] systems. He asserts that "any quantum computation with pure states can be efficiently simulated with a classical computer provided the amount of entanglement involved is sufficiently restricted". This happens to be the case with generic Hamiltonians displaying local interactions, as, for example, Hubbard-like Hamiltonians. The method exhibits a low-degree polynomial behavior in the increase of computational time with respect to the amount of entanglement present in the system. The algorithm is based on a scheme that exploits the fact that in these one-dimensional systems the eigenvalues of the reduced density matrix on a bipartite split of the system are exponentially decaying, thus allowing one to work in a re-sized space spanned by the eigenvectors corresponding to the selected eigenvalues.

One can also estimate the amount of computational resources required for the simulation of a quantum system on a classical computer, knowing how the entanglement contained in the system scales with its size. The classically (and quantum-mechanically, as well) feasible simulations are those involving systems that are only slightly entangled; the strongly entangled ones are, on the other hand, good candidates only for genuine quantum computations.

The numerical method is efficient in simulating real-time dynamics or calculating ground states using imaginary-time evolution or isentropic interpolations between a target Hamiltonian and a Hamiltonian with an already-known ground state. The computational time scales linearly with the system size, hence many-particle systems in 1D can be investigated.

A useful feature of the TEBD algorithm is that it can be reliably employed for time-evolution simulations of time-dependent Hamiltonians, describing systems that can be realized with cold atoms in optical lattices, or systems far from equilibrium in quantum transport. From this point of view, TEBD had a certain ascendance over DMRG, a very powerful technique, but until recently not very well suited for simulating time evolutions. With the matrix product states formalism being at the mathematical heart of DMRG, the TEBD scheme was adopted by the DMRG community, thus giving birth to the time-dependent DMRG (t-DMRG for short, http://www.citebase.org/cgi-bin/citations?id=oai:arXiv.org:cond-mat/0403313).

Around the same time, other groups have developed similar approaches in which quantum information plays a predominant role, as, for example, in DMRG implementations for periodic boundary conditions (https://arxiv.org/abs/cond-mat/0404706), and for studying mixed-state dynamics in one-dimensional quantum lattice systems.[2][3] Those last approaches actually provide a formalism that is more general than the original TEBD approach, as it also allows one to deal with evolutions with matrix product operators; this enables the simulation of nontrivial non-infinitesimal evolutions, as opposed to the TEBD case, and is a crucial ingredient for dealing with higher-dimensional analogues of matrix product states.

The decomposition of state

Introducing the decomposition of state

Consider a chain of N qubits, described by the state |\Psi\rangle \in \mathcal{H}^{\otimes N}. The most natural way of describing |\Psi\rangle would be using the local M^N-dimensional basis |i_1 i_2 .. i_{N-1} i_N\rangle:

|\Psi\rangle = \sum\limits_{i_1,..,i_N} c_{i_1 i_2 .. i_N} |i_1 i_2 .. i_N\rangle

where M is the on-site dimension.

The trick of TEBD is to re-write the coefficients c_{i_1 i_2 .. i_N}:

c_{i_1 i_2 .. i_N} = \sum\limits_{\alpha_1,..,\alpha_{N-1}} \Gamma^{[1]i_1}_{\alpha_1} \lambda^{[1]}_{\alpha_1} \Gamma^{[2]i_2}_{\alpha_1\alpha_2} \lambda^{[2]}_{\alpha_2} \Gamma^{[3]i_3}_{\alpha_2\alpha_3} \lambda^{[3]}_{\alpha_3} \cdot..\cdot \Gamma^{[N-1]i_{N-1}}_{\alpha_{N-2}\alpha_{N-1}} \lambda^{[N-1]}_{\alpha_{N-1}} \Gamma^{[N]i_N}_{\alpha_{N-1}}

This form, known as a matrix product state, simplifies the calculations greatly.

To understand why, one can look at the Schmidt decomposition of a state, which uses singular value decomposition to express a state with limited entanglement more simply.

The Schmidt decomposition

Consider the state of a bipartite system |\Psi\rangle \in \mathcal{H}_A \otimes \mathcal{H}_B. Every such state |\Psi\rangle can be represented in an appropriately chosen basis as:

|\Psi\rangle = \sum\limits_{i=1}^{M_{A|B}} a_i |\Phi^{[A]}_i\rangle |\Phi^{[B]}_i\rangle

where the |\Phi^{[A]}_i\rangle make an orthonormal basis in \mathcal{H}_A and, correspondingly, the |\Phi^{[B]}_i\rangle form an orthonormal basis in \mathcal{H}_B, with the coefficients a_i being real and positive, \sum\limits_i a^2_i = 1. This is called the Schmidt decomposition (SD) of a state. In general the summation goes up to

M_{A|B} = \min(\dim(\mathcal{H}_A), \dim(\mathcal{H}_B))

. The Schmidt rank of a bipartite split is given by the number of non-zero Schmidt coefficients. If the Schmidt rank is one, the split is characterized by a product state. The vectors of the SD are determined up to a phase, and the eigenvalues and the Schmidt rank are unique.

For example, the two-qubit state:

|\Psi\rangle = \frac{1}{\sqrt{8}}\left(\sqrt{3}\,|0_A 0_B\rangle + |0_A 1_B\rangle + |1_A 0_B\rangle + \sqrt{3}\,|1_A 1_B\rangle\right)

has the following SD:

|\Psi\rangle = \frac{\sqrt{3}+1}{2\sqrt{2}} |\Phi^{[A]}_1\rangle |\Phi^{[B]}_1\rangle + \frac{\sqrt{3}-1}{2\sqrt{2}} |\Phi^{[A]}_2\rangle |\Phi^{[B]}_2\rangle

with

|\Phi^{[A]}_1\rangle = \frac{1}{\sqrt{2}}(|0_A\rangle + |1_A\rangle), \ \ |\Phi^{[B]}_1\rangle = \frac{1}{\sqrt{2}}(|0_B\rangle + |1_B\rangle), \ \ |\Phi^{[A]}_2\rangle = \frac{1}{\sqrt{2}}(|0_A\rangle - |1_A\rangle), \ \ |\Phi^{[B]}_2\rangle = \frac{1}{\sqrt{2}}(|0_B\rangle - |1_B\rangle)

On the other hand, the state:

|\Phi\rangle = \frac{1}{2}|0_A 0_B\rangle + \frac{1}{2}|0_A 1_B\rangle - \frac{1}{2}|1_A 0_B\rangle - \frac{1}{2}|1_A 1_B\rangle

is a product state:

|\Phi\rangle = \left(\frac{1}{\sqrt{2}}|0_A\rangle - \frac{1}{\sqrt{2}}|1_A\rangle\right) \otimes \left(\frac{1}{\sqrt{2}}|0_B\rangle + \frac{1}{\sqrt{2}}|1_B\rangle\right)
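Numerically, the Schmidt coefficients of a bipartite pure state are the singular values of its coefficient matrix. The sketch below (a minimal illustration; the helper name `schmidt_decompose` is ours, and NumPy is assumed) verifies that a product state whose amplitudes are +1/2, +1/2, -1/2, -1/2 has Schmidt rank one, while a Bell state has two equal coefficients:

```python
import numpy as np

# Schmidt decomposition of a bipartite pure state via SVD of its
# coefficient matrix (helper name and conventions are ours).
def schmidt_decompose(psi, dim_a, dim_b):
    """Return the Schmidt coefficients and A/B Schmidt vectors of psi."""
    c = psi.reshape(dim_a, dim_b)      # c[i, j] = <i_A j_B | psi>
    u, s, vh = np.linalg.svd(c)        # c = u @ diag(s) @ vh
    return s, u.T, vh                  # rows of u.T and vh are Schmidt vectors

# The product state with amplitudes (+1/2, +1/2, -1/2, -1/2): rank one.
s_prod, _, _ = schmidt_decompose(0.5 * np.array([1.0, 1.0, -1.0, -1.0]), 2, 2)
print(np.count_nonzero(s_prod > 1e-12))   # 1 -> product state

# A Bell state (|00> + |11>)/sqrt(2): two equal coefficients 1/sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
s_bell, _, _ = schmidt_decompose(bell, 2, 2)
print(np.round(s_bell, 4))                # [0.7071 0.7071]
```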

Building the decomposition of state

At this point we know enough to try to see how we explicitly build the decomposition (let's call it D).

Consider the bipartite splitting [1]:[2..N]. The SD has the coefficients \lambda^{[1]}_{\alpha_1} and eigenvectors |\Phi^{[1]}_{\alpha_1}\rangle |\Phi^{[2..N]}_{\alpha_1}\rangle. By expanding the |\Phi^{[1]}_{\alpha_1}\rangle's in the local basis, one can write:

|\Psi\rangle = \sum\limits_{i_1,\alpha_1} \Gamma^{[1]i_1}_{\alpha_1} \lambda^{[1]}_{\alpha_1} |i_1\rangle |\Phi^{[2..N]}_{\alpha_1}\rangle

The process can be decomposed in three steps, iterated for each bond (and, correspondingly, SD) in the chain:

Step 1: express the |\Phi^{[2..N]}_{\alpha_1}\rangle's in a local basis for qubit 2:

|\Phi^{[2..N]}_{\alpha_1}\rangle = \sum\limits_{i_2} |i_2\rangle |\tau^{[3..N]}_{\alpha_1 i_2}\rangle

The vectors |\tau^{[3..N]}_{\alpha_1 i_2}\rangle are not necessarily normalized.

Step 2: write each vector |\tau^{[3..N]}_{\alpha_1 i_2}\rangle in terms of the at most (Vidal's emphasis) \chi Schmidt vectors |\Phi^{[3..N]}_{\alpha_2}\rangle and, correspondingly, coefficients \lambda^{[2]}_{\alpha_2}:

|\tau^{[3..N]}_{\alpha_1 i_2}\rangle = \sum\limits_{\alpha_2} \Gamma^{[2]i_2}_{\alpha_1\alpha_2} \lambda^{[2]}_{\alpha_2} |\Phi^{[3..N]}_{\alpha_2}\rangle

Step 3: make the substitutions and obtain:

|\Psi\rangle = \sum\limits_{i_1,\alpha_1,i_2,\alpha_2} \Gamma^{[1]i_1}_{\alpha_1} \lambda^{[1]}_{\alpha_1} \Gamma^{[2]i_2}_{\alpha_1\alpha_2} \lambda^{[2]}_{\alpha_2} |i_1 i_2\rangle |\Phi^{[3..N]}_{\alpha_2}\rangle

Repeating the steps 1 to 3, one can construct the whole decomposition of state D. The last \Gamma's are a special case, like the first ones, expressing the right-hand Schmidt vectors at the (N-1)th bond in terms of the local basis at the Nth lattice place. As shown in,[1] it is straightforward to obtain the Schmidt decomposition at the kth bond, i.e. [1..k]:[k+1..N], from D.

The Schmidt eigenvalues are given explicitly in D:

|\Psi\rangle = \sum\limits_{\alpha_k} \lambda^{[k]}_{\alpha_k} |\Phi^{[1..k]}_{\alpha_k}\rangle |\Phi^{[k+1..N]}_{\alpha_k}\rangle

The Schmidt eigenvectors are simply:

|\Phi^{[1..k]}_{\alpha_k}\rangle = \sum\limits_{i_1,..,i_k;\,\alpha_1,..,\alpha_{k-1}} \Gamma^{[1]i_1}_{\alpha_1} \lambda^{[1]}_{\alpha_1} \cdot..\cdot \Gamma^{[k]i_k}_{\alpha_{k-1}\alpha_k} |i_1 i_2 .. i_k\rangle

and

|\Phi^{[k+1..N]}_{\alpha_k}\rangle = \sum\limits_{i_{k+1},..,i_N;\,\alpha_{k+1},..,\alpha_{N-1}} \Gamma^{[k+1]i_{k+1}}_{\alpha_k\alpha_{k+1}} \lambda^{[k+1]}_{\alpha_{k+1}} \cdot..\cdot \lambda^{[N-1]}_{\alpha_{N-1}} \Gamma^{[N]i_N}_{\alpha_{N-1}} |i_{k+1} .. i_N\rangle
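The sweep described in steps 1 to 3 can be carried out numerically with repeated singular value decompositions. The following sketch (the function name `vidal_decompose` and the index conventions are ours; NumPy is assumed; numerically zero Schmidt values are simply discarded and no truncation is performed) splits a small state vector into its {Gamma, lambda} tensors and then re-contracts them to check the identity for the coefficients c:

```python
import numpy as np

def vidal_decompose(psi, n_sites, m, tol=1e-12):
    """Split an m**n_sites state vector into Vidal's {Gamma, lambda}
    tensors by successive SVDs (a sketch; conventions are ours)."""
    gammas, lambdas = [], []
    rest = psi.reshape(1, -1)                   # (chi_left, the rest)
    chi_left, lam_left = 1, np.ones(1)
    for _ in range(n_sites - 1):
        theta = rest.reshape(chi_left * m, -1)
        u, s, vh = np.linalg.svd(theta, full_matrices=False)
        u, s, vh = u[:, s > tol], s[s > tol], vh[s > tol, :]
        # strip the left lambdas off u to get Gamma (Vidal's convention)
        gammas.append(u.reshape(chi_left, m, -1) / lam_left[:, None, None])
        lambdas.append(s)
        rest = s[:, None] * vh                  # carry lambda to the right
        chi_left, lam_left = s.size, s
    gammas.append(rest / lam_left[:, None])     # last site's Gamma
    return gammas, lambdas

# Check on a random 3-qubit state: re-contracting the Gamma's and
# lambda's must reproduce the original coefficients c_{i1 i2 i3}.
rng = np.random.default_rng(0)
psi = rng.normal(size=8); psi /= np.linalg.norm(psi)
g, lam = vidal_decompose(psi, 3, 2)
c = np.einsum('ia,a,ajb,b,bk->ijk', g[0][0], lam[0], g[1], lam[1], g[2])
print(np.allclose(c.ravel(), psi))   # True
```

Note that the division by the left lambdas makes this sketch numerically fragile when Schmidt values are tiny; production codes handle that more carefully.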

Rationale

Now, looking at D, instead of the M^N initial coefficients, there are

\chi^2 M(N-2) + 2\chi M + (N-1)\chi

of them. Apparently this is just a fancy way of rewriting the coefficients c_{i_1 i_2 .. i_N}, but in fact there is more to it than that. Assuming that N is even, the Schmidt rank \chi for a bipartite cut in the middle of the chain can have a maximal value of M^{N/2}; in this case we end up with at least M^{N+1}(N-2) coefficients, considering only the \chi^2 ones, even more than the initial M^N! The truth is that the decomposition D is useful when dealing with systems that exhibit a low degree of entanglement, which fortunately is the case with many 1D systems, where the Schmidt coefficients of the ground state decay exponentially with \alpha:

\lambda^{[k]}_{\alpha} \sim e^{-K\alpha}, \ K>0.

Therefore, it is possible to take into account only some of the Schmidt coefficients (namely the largest ones), dropping the others and consequently normalizing again the state:

|\Psi\rangle = \frac{1}{\sqrt{\sum_{\alpha_k=1}^{\chi_c}(\lambda^{[k]}_{\alpha_k})^2}} \sum\limits_{\alpha_k=1}^{\chi_c} \lambda^{[k]}_{\alpha_k} |\Phi^{[1..k]}_{\alpha_k}\rangle |\Phi^{[k+1..N]}_{\alpha_k}\rangle,

where \chi_c is the number of kept Schmidt coefficients.

Let's get away from this abstract picture and refresh ourselves with a concrete example, to emphasize the advantage of making this decomposition. Consider, for the sake of simplicity, the case of 50 fermions in a ferromagnetic chain. A dimension of 12, let's say, for \chi_c would be a reasonable choice, keeping the discarded eigenvalues at 0.0001% of the total, as shown by numerical studies,[4] meaning roughly 2^{14} coefficients, as compared to the original 2^{50} ones.
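The savings can be checked with elementary arithmetic: the decomposition D stores about \chi^2 M(N-2) + 2\chi M + (N-1)\chi numbers against the M^N raw amplitudes. A quick sketch in Python for the 50-site, \chi_c = 12 example above (nothing beyond integer arithmetic is assumed):

```python
# Parameter count of the decomposition D versus the raw state vector,
# for the 50-site chain with on-site dimension 2 and chi_c = 12.
M, N, chi = 2, 50, 12
d_params = chi**2 * M * (N - 2) + 2 * chi * M + (N - 1) * chi
raw_params = M**N
print(d_params)      # 14460, i.e. roughly 2**14
print(raw_params)    # 1125899906842624, i.e. 2**50
```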

Even if the Schmidt eigenvalues do not have this exponential decay, but show an algebraic decrease, we can still use D to describe our state \psi. The number of coefficients needed for a faithful description of \psi may be considerably larger, but still within reach of eventual numerical simulations.

The update of the decomposition

One can now proceed to investigate the behaviour of the decomposition D when acted upon with one-qubit gates (OQG) and two-qubit gates (TQG) acting on neighbouring qubits. Instead of updating all the M^N coefficients c_{i_1 i_2 .. i_N}, we will restrict ourselves to a number of operations that increases with \chi as a polynomial of low degree, thus saving computational time.

One-qubit gates acting on qubit k

The OQGs affect only the qubit they act upon. The update of the state |\Psi\rangle after a unitary operation U at qubit k does not modify the Schmidt eigenvalues or vectors on the left, hence the \Gamma^{[l]}'s with l < k, or on the right, hence the \Gamma^{[l]}'s with l > k. The only \Gamma's that will be updated are the \Gamma^{[k]}'s (requiring at most O(M^2\chi^2) basic operations), as

\tilde{\Gamma}^{[k]j}_{\alpha_{k-1}\alpha_k} = \sum\limits_{i} U^{j}_{i} \Gamma^{[k]i}_{\alpha_{k-1}\alpha_k}.
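In code, this update is a single contraction over the physical index. A minimal sketch (NumPy assumed; storing \Gamma^{[k]} with indices (\alpha_{k-1}, i, \alpha_k) is a convention of this sketch):

```python
import numpy as np

# One-qubit gate update: only Gamma[k] changes, at O(M^2 chi^2) cost.
def apply_one_site_gate(gamma_k, u):
    # new_Gamma^j = sum_i U[j, i] * Gamma^i
    return np.einsum('ji,aib->ajb', u, gamma_k)

# Example: a Pauli-X gate flips a site prepared in |0> to |1>.
pauli_x = np.array([[0.0, 1.0], [1.0, 0.0]])
gamma = np.zeros((1, 2, 1)); gamma[0, 0, 0] = 1.0
print(apply_one_site_gate(gamma, pauli_x)[0, :, 0])   # [0. 1.]
```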

Two-qubit gates acting on qubits k, k+1

The changes required to update the \Gamma's and the \lambda's, following a unitary operation V on qubits k, k+1, concern only \Gamma^{[k]}, \lambda^{[k]} and \Gamma^{[k+1]}. They consist of a number of O(M^3\chi^3) basic operations.

Following Vidal's original approach, |\Psi\rangle can be regarded as belonging to only four subsystems:

\mathcal{H} = \mathcal{H}_J \otimes \mathcal{H}_C \otimes \mathcal{H}_D \otimes \mathcal{H}_K.

The subspace \mathcal{H}_J is spanned by the eigenvectors of the reduced density matrix \rho^{[J]} = \mathrm{Tr}_{CDK} |\Psi\rangle\langle\Psi|:

\rho^{[J]} = \sum\limits_{\alpha} (\lambda^{[k-1]}_{\alpha})^2 |\Phi^{[1..k-1]}_{\alpha}\rangle\langle\Phi^{[1..k-1]}_{\alpha}| = \sum\limits_{\alpha} (\lambda^{[k-1]}_{\alpha})^2 |\alpha\rangle\langle\alpha|.

In a similar way, the subspace \mathcal{H}_K is spanned by the eigenvectors of the reduced density matrix:

\rho^{[K]} = \sum\limits_{\gamma} (\lambda^{[k+1]}_{\gamma})^2 |\Phi^{[k+2..N]}_{\gamma}\rangle\langle\Phi^{[k+2..N]}_{\gamma}| = \sum\limits_{\gamma} (\lambda^{[k+1]}_{\gamma})^2 |\gamma\rangle\langle\gamma|.

The subspaces \mathcal{H}_C and \mathcal{H}_D belong to the qubits k and k+1. Using this basis and the decomposition D, |\Psi\rangle can be written as:

|\Psi\rangle = \sum\limits_{\alpha,\beta,\gamma=1}^{\chi} \sum\limits_{i,j=1}^{M} \lambda^{[k-1]}_{\alpha} \Gamma^{[C]i}_{\alpha\beta} \lambda^{[k]}_{\beta} \Gamma^{[D]j}_{\beta\gamma} \lambda^{[k+1]}_{\gamma} |\alpha i j \gamma\rangle

Using the same reasoning as for the OQG, after applying the TQG V to qubits k, k+1 one needs only to update \Gamma^{[C]}, \lambda^{[k]} and \Gamma^{[D]}. We can write |\Psi'\rangle = V|\Psi\rangle as:

|\Psi'\rangle = \sum\limits_{\alpha,\gamma=1}^{\chi} \sum\limits_{i,j=1}^{M} \lambda^{[k-1]}_{\alpha} \Theta^{ij}_{\alpha\gamma} \lambda^{[k+1]}_{\gamma} |\alpha i j \gamma\rangle

where

\Theta^{ij}_{\alpha\gamma} = \sum\limits_{\beta=1}^{\chi} \sum\limits_{m,n=1}^{M} V^{ij}_{mn} \Gamma^{[C]m}_{\alpha\beta} \lambda^{[k]}_{\beta} \Gamma^{[D]n}_{\beta\gamma}.

To find out the new decomposition, the new \lambda's at the bond k and their corresponding Schmidt eigenvectors must be computed and expressed in terms of the \Gamma's of the decomposition D. The reduced density matrix \rho'^{[DK]} is therefore diagonalized:

\rho'^{[DK]} = \mathrm{Tr}_{JC} |\Psi'\rangle\langle\Psi'| = \sum\limits_{j\gamma,\,j'\gamma'} \rho^{[DK]}_{j\gamma,\,j'\gamma'} |j\gamma\rangle\langle j'\gamma'|.

The square roots of its eigenvalues are the new \lambda's. Expressing the eigenvectors of the diagonalized matrix in the basis \{|j\gamma\rangle\}, the \Gamma^{[D]}'s are obtained as well:

|\Phi'^{[DK]}_{\beta}\rangle = \sum\limits_{j,\gamma} \Gamma'^{[D]j}_{\beta\gamma} \lambda^{[k+1]}_{\gamma} |j\gamma\rangle.

From the left-hand eigenvectors,

\lambda'^{[k]}_{\beta} |\Phi'^{[JC]}_{\beta}\rangle = \langle\Phi'^{[DK]}_{\beta}|\Psi'\rangle = \sum\limits_{i,\alpha} \Big(\sum\limits_{j,\gamma} (\Gamma'^{[D]j}_{\beta\gamma})^{*} \Theta^{ij}_{\alpha\gamma} (\lambda^{[k+1]}_{\gamma})^2 \Big) \lambda^{[k-1]}_{\alpha} |i\alpha\rangle,

after expressing them in the basis \{|i\alpha\rangle\}, the \Gamma^{[C]}'s are:

|\Phi'^{[JC]}_{\beta}\rangle = \sum\limits_{i,\alpha} \Gamma'^{[C]i}_{\alpha\beta} \lambda^{[k-1]}_{\alpha} |i\alpha\rangle.
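The whole two-site update can be condensed into a short routine: build \Theta, absorb the outer \lambda's, perform the diagonalization (here via an SVD, which yields the new \lambda's and both sets of eigenvectors at once), truncate, and divide the outer \lambda's back out. The function name and index conventions below are ours, and NumPy is assumed:

```python
import numpy as np

# Two-site gate update at bond k. Assumed conventions of this sketch:
# gc has indices (alpha, m, beta), gd has (beta, n, gamma), and the gate
# v has indices (i, j, m, n), with (i, j) the new physical indices.
def apply_two_site_gate(gc, lam_left, lam_mid, gd, lam_right, v, chi_max):
    d = gc.shape[1]                        # on-site dimension M
    # Theta^{ij}_{alpha gamma} = sum_{beta,m,n} V^{ij}_{mn} Gc Gd lam_mid
    theta = np.einsum('ijmn,amb,b,bng->iajg', v, gc, lam_mid, gd)
    # attach the outer lambdas before the Schmidt (SVD) step
    theta = np.einsum('a,iajg,g->aijg', lam_left, theta, lam_right)
    chi_l, chi_r = lam_left.size, lam_right.size
    u, s, vh = np.linalg.svd(theta.reshape(chi_l * d, d * chi_r),
                             full_matrices=False)
    keep = min(chi_max, int(np.count_nonzero(s > 1e-12)))
    u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
    s /= np.linalg.norm(s)                 # re-normalize the kept spectrum
    new_gc = u.reshape(chi_l, d, keep) / lam_left[:, None, None]
    new_gd = vh.reshape(keep, d, chi_r) / lam_right[None, None, :]
    return new_gc, s, new_gd

# Example: a CNOT acting on the product state |+>|0> yields a maximally
# entangled pair, i.e. two equal Schmidt coefficients 1/sqrt(2).
plus = np.zeros((1, 2, 1)); plus[0, :, 0] = 1 / np.sqrt(2)
zero = np.zeros((1, 2, 1)); zero[0, 0, 0] = 1.0
one_ = np.ones(1)
cnot = np.eye(4)[[0, 1, 3, 2]].reshape(2, 2, 2, 2)
_, lam, _ = apply_two_site_gate(plus, one_, one_, zero, one_, cnot, chi_max=4)
print(np.round(lam, 4))   # [0.7071 0.7071]
```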

The computational cost

The dimension of the largest tensors in D is of the order O(M\chi^2); when constructing \Theta^{ij}_{\alpha\gamma} one makes the summation over \beta, m and n for each \gamma, \alpha, i, j, adding up to a total of O(M^4\chi^3) basic operations. The same holds for the formation of the elements \rho^{[DK]}_{j\gamma,\,j'\gamma'}, or for computing the left-hand eigenvectors \lambda'^{[k]}_{\beta}|\Phi'^{[JC]}_{\beta}\rangle: a maximum of O(M^3\chi^3), respectively O(M^2\chi^3), basic operations. In the case of qubits, M=2, hence the role of M is not very relevant for the order of magnitude of the number of basic operations, but when the on-site dimension is higher than two its contribution is rather decisive.

The numerical simulation

The numerical simulation targets (possibly time-dependent) Hamiltonians of a system of N particles arranged in a line, which are composed of arbitrary OQGs and TQGs:

H_N = \sum\limits_{k=1}^{N} K^{[k]}_1 + \sum\limits_{k=1}^{N-1} K^{[k,k+1]}_2.

It is useful to decompose H_N as a sum of two possibly non-commuting terms, H_N = F + G, where

F \equiv \sum\limits_{\text{odd } k} \left(K^{[k]}_1 + K^{[k,k+1]}_2\right) = \sum\limits_{\text{odd } k} F^{[k]}, \qquad G \equiv \sum\limits_{\text{even } k} \left(K^{[k]}_1 + K^{[k,k+1]}_2\right) = \sum\limits_{\text{even } k} G^{[k]}.

Any two same-parity terms commute:

[F^{[k]}, F^{[k']}] = 0, \qquad [G^{[k]}, G^{[k']}] = 0.

This is done to make possible the Suzuki–Trotter expansion (ST)[5] of the exponential operator, named after Masuo Suzuki and Hale Trotter.

The Suzuki–Trotter expansion

The Suzuki–Trotter expansion of the first order (ST1) represents a general way of writing exponential operators:

e^{A+B} = \lim\limits_{n\to\infty} \left(e^{A/n} e^{B/n}\right)^n

or, equivalently,

e^{\delta(A+B)} = e^{\delta A} e^{\delta B} + O(\delta^2).

The correction term vanishes in the limit \delta \to 0. For simulations of quantum dynamics it is useful to use operators that are unitary, conserving the norm (unlike power-series expansions), and that is where the Suzuki–Trotter expansion comes in. In problems of quantum dynamics the unitarity of the operators in the ST expansion proves quite practical, since the error tends to concentrate in the overall phase, thus allowing us to faithfully compute expectation values and conserved quantities. Because the ST conserves the phase-space volume, it is also called a symplectic integrator.

The trick of the second-order expansion, ST2, is to write the unitary operator e^{-iHT} as:

e^{-iHT} = \left[e^{-iH_N\delta}\right]^{T/\delta} = \left[e^{-i\frac{\delta}{2}F} e^{-i\delta G} e^{-i\frac{\delta}{2}F}\right]^{T/\delta}

where n = T/\delta. The number n is called the Trotter number.
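The quadratic one-step error of the first-order formula can be checked numerically on two random non-commuting Hermitian matrices (a sketch; the helper `expm_h` and the matrices are ours, NumPy is assumed). Halving \delta should cut the one-step error of e^{\delta A} e^{\delta B} against e^{\delta(A+B)} by roughly a factor of four:

```python
import numpy as np

def expm_h(h, t):
    """exp(t*h) for a Hermitian matrix h, via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return (v * np.exp(t * w)) @ v.conj().T

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4)); a = a + a.T    # random Hermitian A
b = rng.normal(size=(4, 4)); b = b + b.T    # random Hermitian B, [A,B] != 0

errors = []
for delta in (0.1, 0.05, 0.025):
    exact = expm_h(a + b, delta)
    trotter = expm_h(a, delta) @ expm_h(b, delta)
    errors.append(np.linalg.norm(exact - trotter))

# consecutive ratios should be close to 4, the O(delta^2) signature
print([round(r1 / r2, 1) for r1, r2 in zip(errors, errors[1:])])
```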

Simulation of the time-evolution

The operators e^{-i\frac{\delta}{2}F} and e^{-i\delta G} are easy to express, as:

e^{-i\frac{\delta}{2}F} = \prod\limits_{\text{odd } k} e^{-i\frac{\delta}{2}F^{[k]}}, \qquad e^{-i\delta G} = \prod\limits_{\text{even } k} e^{-i\delta G^{[k]}}

since any two operators F^{[k]}, F^{[k']} (respectively G^{[k]}, G^{[k']}) commute for k \ne k', and an ST expansion of the first order keeps only the product of the exponentials, the approximation becoming, in this case, exact.

The time evolution over one time-step can be made according to

|\Psi_{\delta}\rangle = e^{-i\frac{\delta}{2}F} e^{-i\delta G} e^{-i\frac{\delta}{2}F} |\Psi\rangle.

For each "time-step" \delta, the gates e^{-i\frac{\delta}{2}F^{[k]}} are applied successively to all odd sites, then e^{-i\delta G^{[k]}} to the even ones, and e^{-i\frac{\delta}{2}F^{[k]}} again to the odd ones; this is basically a sequence of TQGs, and it has been explained above how to update the decomposition D when applying them.

Our goal is to make the time evolution of a state |\Psi_0\rangle for a time T, towards the state |\Psi_T\rangle, using the N-particle Hamiltonian H_N.

It is rather troublesome, if at all possible, to construct the decomposition D for an arbitrary N-particle state, since this would mean one has to compute the Schmidt decomposition at each bond, to arrange the Schmidt eigenvalues in decreasing order and to choose the first \chi_c of them together with the appropriate Schmidt eigenvectors. Mind that this would imply diagonalizing somewhat generous reduced density matrices, which, depending on the system one has to simulate, might be a task beyond our reach and patience. Instead, one can start from a state whose decomposition D is simple to write down, such as a product state, and evolve it (for instance in imaginary time) towards the desired initial state.

Error sources

The errors in the simulation result from the Suzuki–Trotter approximation and from the truncation of the Hilbert space it involves.

Errors coming from the Suzuki–Trotter expansion

In the case of a Trotter approximation of pth order, the error per time-step is of order \delta^{p+1}. Taking into account n = \frac{T}{\delta} steps, the error after the time T is:

\epsilon = \frac{T}{\delta}\delta^{p+1} = T\delta^{p}

The unapproximated state |\tilde{\psi}_{Tr}\rangle is:

|\tilde{\psi}_{Tr}\rangle = \sqrt{1-\epsilon^2}\,|\psi_{Tr}\rangle + \epsilon\,|\psi^{\perp}_{Tr}\rangle

where |\psi_{Tr}\rangle is the state kept after the Trotter expansion and |\psi^{\perp}_{Tr}\rangle accounts for the part that is neglected when doing the expansion.

The total error scales with time T as:

\epsilon(T) = 1 - |\langle\tilde{\psi}_{Tr}|\psi_{Tr}\rangle|^2 = 1 - (1 - \epsilon^2) = \epsilon^2

The Trotter error is independent of the dimension of the chain.

Errors coming from the truncation of the Hilbert space

The errors arising from the truncation of the Hilbert space comprised in the decomposition D are twofold.

First, as we have seen above, the smallest contributions to the Schmidt spectrum are left away, the state being faithfully represented up to:

\epsilon = 1 - \prod\limits_{n=1}^{N-1}(1-\epsilon_n)

where

\epsilon_n = \sum\limits_{\alpha=\chi_c+1}^{\chi} (\lambda^{[n]}_{\alpha})^2

is the sum of all the discarded eigenvalues of the reduced density matrix at the bond n. The state |\psi\rangle is, at a given bond n, described by the Schmidt decomposition:

|\psi\rangle = \sqrt{1-\epsilon_n}\,|\psi_D\rangle + \sqrt{\epsilon_n}\,|\psi^{\perp}_D\rangle

where

|\psi_D\rangle = \frac{1}{\sqrt{1-\epsilon_n}} \sum\limits_{\alpha=1}^{\chi_c} \lambda^{[n]}_{\alpha} |\Phi^{[1..n]}_{\alpha}\rangle |\Phi^{[n+1..N]}_{\alpha}\rangle

is the state kept after the truncation and

|\psi^{\perp}_D\rangle = \frac{1}{\sqrt{\epsilon_n}} \sum\limits_{\alpha=\chi_c+1}^{\chi} \lambda^{[n]}_{\alpha} |\Phi^{[1..n]}_{\alpha}\rangle |\Phi^{[n+1..N]}_{\alpha}\rangle

is the state formed by the eigenfunctions corresponding to the smallest, irrelevant Schmidt coefficients, which are neglected. Now, \langle\psi^{\perp}_D|\psi_D\rangle = 0 because they are spanned by vectors corresponding to orthogonal spaces. Using the same argument as for the Trotter expansion, the error after the truncation is:

\epsilon_n = 1 - |\langle\psi|\psi_D\rangle|^2 = \sum\limits_{\alpha=\chi_c+1}^{\chi} (\lambda^{[n]}_{\alpha})^2

After moving to the next bond, the state is, similarly:

|\psi_D\rangle = \sqrt{1-\epsilon_{n+1}}\,|\psi'_D\rangle + \sqrt{\epsilon_{n+1}}\,|\psi'^{\perp}_D\rangle

The error, after the second truncation, is:

\epsilon = 1 - |\langle\psi|\psi'_D\rangle|^2 = 1 - (1-\epsilon_{n+1})|\langle\psi|\psi_D\rangle|^2 = 1 - (1-\epsilon_{n+1})(1-\epsilon_{n})

and so on, as we move from bond to bond.

The second error source enfolded in the decomposition D is more subtle and requires a little bit of calculation.

As we calculated before, the normalization constant after making the truncation at bond l ([1..l]:[l+1..N]) is:

R = \sqrt{\sum\limits_{\alpha_l=1}^{\chi_c} (\lambda^{[l]}_{\alpha_l})^2} = \sqrt{1-\epsilon_l}

Now let us go to the bond l-1 and calculate the norm of the right-hand Schmidt vectors \||\Phi^{[l-1..N]}_{\alpha_{l-1}}\rangle\|; taking into account the full Schmidt dimension, the norm is:

n_1 = 1 = \sum\limits_{\alpha_l=1}^{\chi_c} (c_{\alpha_{l-1}\alpha_l})^2 (\lambda^{[l]}_{\alpha_l})^2 + \sum\limits_{\alpha_l=\chi_c+1}^{\chi} (c_{\alpha_{l-1}\alpha_l})^2 (\lambda^{[l]}_{\alpha_l})^2 = S_1 + S_2,

where

(c_{\alpha_{l-1}\alpha_l})^2 = \sum\limits_{i_l=1}^{M} (\Gamma^{[l]i_l}_{\alpha_{l-1}\alpha_l})^{*} \Gamma^{[l]i_l}_{\alpha_{l-1}\alpha_l}.

Taking into account the truncated space, the norm is:

n_2 = \sum\limits_{\alpha_l=1}^{\chi_c} (c_{\alpha_{l-1}\alpha_l})^2 (\lambda'^{[l]}_{\alpha_l})^2 = \sum\limits_{\alpha_l=1}^{\chi_c} (c_{\alpha_{l-1}\alpha_l})^2 \frac{(\lambda^{[l]}_{\alpha_l})^2}{1-\epsilon_l} = \frac{S_1}{1-\epsilon_l}

Taking the difference, \epsilon = n_2 - n_1 = n_2 - 1, we get:

\epsilon = \frac{S_1}{1-\epsilon_l} - 1 \leq \frac{1}{1-\epsilon_l} - 1 = \frac{\epsilon_l}{1-\epsilon_l} \to 0 \ \ \text{as} \ \ \epsilon_l \to 0

Hence, when constructing the reduced density matrix, the trace of the matrix is multiplied by the factor:

|\langle\psi|\psi_D\rangle|^2 = 1 - \frac{\epsilon_l}{1-\epsilon_l} = \frac{1-2\epsilon_l}{1-\epsilon_l}

The total truncation error

The total truncation error, considering both sources, is upper bounded by:

\epsilon = 1 - \prod\limits_{n=1}^{N-1}(1-\epsilon_n) \prod\limits_{n=1}^{N-1} \frac{1-2\epsilon_n}{1-\epsilon_n} = 1 - \prod\limits_{n=1}^{N-1}(1-2\epsilon_n)

When using the Trotter expansion, we do not move from bond to bond, but between bonds of the same parity; moreover, for the ST2, we make one sweep of the even bonds and two of the odd ones. Nevertheless, the calculation presented above still holds: the error is evaluated by successively multiplying with the normalization constant each time we build the reduced density matrix and select its relevant eigenvalues.

"Adaptive" Schmidt dimension

One thing that can save a lot of computational time without loss of accuracy is to use a different Schmidt dimension for each bond instead of a fixed one for all bonds, keeping only the necessary number of relevant coefficients. For example, taking the first bond, in the case of qubits the Schmidt dimension is just two, so at the first bond, instead of futilely diagonalizing, say, 10 by 10 or 20 by 20 matrices, we can restrict ourselves to ordinary 2 by 2 ones, making the algorithm generally faster. In practice, one sets a threshold for the eigenvalues of the SD and keeps only those that lie above it.
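A threshold rule of this kind takes only a few lines of code. In the sketch below (the function name is ours, NumPy assumed) the kept dimension is the smallest number of Schmidt coefficients whose discarded weight \sum (\lambda_\alpha)^2 falls below a tolerance; for an exponentially decaying spectrum only a handful survive:

```python
import numpy as np

# Adaptive bond dimension: keep the smallest number of Schmidt
# coefficients whose total discarded weight is below 'tol'.
def adaptive_chi(lam, tol=1e-6):
    """lam: Schmidt coefficients in decreasing order."""
    p = lam**2 / np.sum(lam**2)            # eigenvalues of the reduced DM
    discarded = np.cumsum(p[::-1])[::-1]   # discarded[i] = sum of p[i:]
    ok = np.flatnonzero(discarded <= tol)  # cut points meeting the tolerance
    return int(ok[0]) if ok.size else lam.size

# An exponentially decaying spectrum lambda_alpha ~ e^{-alpha}:
lam = np.exp(-np.arange(20.0))
print(adaptive_chi(lam))   # 7
```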

TEBD also offers the possibility of straightforward parallelization due to the factorization of the exponential time-evolution operator using the Suzuki–Trotter expansion. A parallel-TEBD has the same mathematics as its non-parallelized counterpart, the only difference is in the numerical implementation.

References

  1. Vidal, Guifré (2003). "Efficient Classical Simulation of Slightly Entangled Quantum Computations". Physical Review Letters 91 (14): 147902. arXiv:quant-ph/0301063. doi:10.1103/physrevlett.91.147902. PMID 14611555.
  2. Verstraete, F.; Garcia-Ripoll, J. J.; Cirac, J. I. (2004). "Matrix Product Density Operators: Simulation of finite-T and dissipative systems". Physical Review Letters 93 (20): 207204. arXiv:cond-mat/0406426. doi:10.1103/PhysRevLett.93.207204. PMID 15600964.
  3. Zwolak, M.; Vidal, G. (2004). "Mixed-state dynamics in one-dimensional quantum lattice systems: a time-dependent superoperator renormalization algorithm". Physical Review Letters 93 (20): 207205. arXiv:cond-mat/0406440. doi:10.1103/PhysRevLett.93.207205. PMID 15600965.
  4. Vidal, Guifré (2004). "Efficient Simulation of One-Dimensional Quantum Many-Body Systems". Physical Review Letters 93 (4): 040502. arXiv:quant-ph/0310089. doi:10.1103/physrevlett.93.040502. PMID 15323740.
  5. Hatano, Naomichi; Suzuki, Masuo (2005). "Finding Exponential Product Formulas of Higher Orders". In: Quantum Annealing and Other Optimization Methods. Berlin, Heidelberg: Springer. pp. 37–68. arXiv:math-ph/0506007. doi:10.1007/11526216_2. ISBN 978-3-540-27987-7.