DIIS (direct inversion in the iterative subspace or direct inversion of the iterative subspace), also known as Pulay mixing, is a technique for extrapolating the solution to a set of linear equations by directly minimizing an error residual (e.g. a Newton–Raphson step size) with respect to a linear combination of known sample vectors. DIIS was developed by Peter Pulay in the field of computational quantum chemistry with the intent to accelerate and stabilize the convergence of the Hartree–Fock self-consistent field method.[1] [2] [3]
At a given iteration, the approach constructs a linear combination of approximate error vectors from previous iterations. The coefficients of the linear combination are determined so as to best approximate, in a least squares sense, the null vector. The newly determined coefficients are then used to extrapolate the function variable for the next iteration.
At each iteration, an approximate error vector, $e_i$, corresponding to the variable value, $p_i$, is determined. After sufficient iterations, a linear combination of the $m$ previous error vectors is constructed:
\[
e_{m+1} = \sum_i^{m} c_i e_i.
\]
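As a purely illustrative example of this step, the snippet below stores a short history of error vectors and forms their linear combination with NumPy. The vector length, the number of stored vectors, and the placeholder coefficients are assumptions made only for the sake of the example; how the coefficients are actually determined is derived below.

```python
import numpy as np

# Illustrative stand-ins for the error vectors e_1, ..., e_m kept from
# previous iterations (here m = 3 random vectors of length 5).
rng = np.random.default_rng(0)
error_vectors = [rng.standard_normal(5) for _ in range(3)]

# Given coefficients c_i (their determination is derived below), the
# combined error vector e_{m+1} is the linear combination sum_i c_i e_i.
c = np.array([0.2, 0.3, 0.5])         # placeholder values that sum to one
E = np.column_stack(error_vectors)    # column i holds e_i
e_combined = E @ c                    # e_{m+1}
```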
The DIIS method seeks to minimize the norm of $e_{m+1}$ under the constraint that the coefficients sum to one. The reason why the coefficients must sum to one can be seen by writing the trial vector as the sum of the exact solution $p^f$ and an error vector. In the DIIS approximation, we get:
\begin{align}
p &= \sum_i c_i \left( p^f + e_i \right) \\
  &= p^f \sum_i c_i + \sum_i c_i e_i
\end{align}
If the coefficients sum to one, the first term is the exact solution and the second term is the remaining error; minimizing the norm of this error subject to the constraint is carried out with a Lagrange multiplier $\lambda$, giving the Lagrangian
\begin{align}
L &= \left\| e_{m+1} \right\|^2 - 2\lambda \left( \sum_i c_i - 1 \right) \\
  &= \sum_{ij} c_j B_{ji} c_i - 2\lambda \left( \sum_i c_i - 1 \right), \qquad \text{where } B_{ij} = \langle e_j, e_i \rangle.
\end{align}
Setting the derivatives of $L$ with respect to the coefficients and the multiplier equal to zero leads to a system of $(m+1)$ linear equations to be solved for the $m$ coefficients (and the Lagrange multiplier).
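Written out, and assuming real error vectors so that $B$ is symmetric, these conditions are
\begin{align}
\frac{\partial L}{\partial c_k} &= 2\sum_i B_{ki} c_i - 2\lambda = 0 \quad\Longrightarrow\quad \sum_i B_{ki} c_i - \lambda = 0, \qquad k = 1, \dots, m, \\
\frac{\partial L}{\partial \lambda} &= -2\left( \sum_i c_i - 1 \right) = 0 \quad\Longrightarrow\quad \sum_i c_i = 1.
\end{align}
In matrix form: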
\begin{bmatrix}
B_{11} & B_{12} & B_{13} & \cdots & B_{1m} & -1 \\
B_{21} & B_{22} & B_{23} & \cdots & B_{2m} & -1 \\
B_{31} & B_{32} & B_{33} & \cdots & B_{3m} & -1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
B_{m1} & B_{m2} & B_{m3} & \cdots & B_{mm} & -1 \\
1 & 1 & 1 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_m \\ \lambda \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
Moving the minus sign to $\lambda$ results in an equivalent symmetric problem.
\begin{bmatrix}
B_{11} & B_{12} & B_{13} & \cdots & B_{1m} & 1 \\
B_{21} & B_{22} & B_{23} & \cdots & B_{2m} & 1 \\
B_{31} & B_{32} & B_{33} & \cdots & B_{3m} & 1 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
B_{m1} & B_{m2} & B_{m3} & \cdots & B_{mm} & 1 \\
1 & 1 & 1 & \cdots & 1 & 0
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_m \\ -\lambda \end{bmatrix}
=
\begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
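In a numerical implementation this bordered system is typically assembled and solved directly. The sketch below shows one minimal way to do so with NumPy; the function name solve_diis_coefficients is an illustrative assumption, real error vectors are assumed (so that $B$ is symmetric), and no safeguard against an ill-conditioned $B$ is included.

```python
import numpy as np

def solve_diis_coefficients(error_vectors):
    """Solve the symmetric bordered system for the DIIS coefficients c_1..c_m.

    error_vectors : list of m one-dimensional NumPy arrays e_1, ..., e_m.
    Returns the length-m coefficient vector; the entry holding -lambda
    in the solution is discarded.
    """
    m = len(error_vectors)
    E = np.column_stack(error_vectors)   # column i holds e_i
    B = E.T @ E                          # B_ij = <e_j, e_i> for real vectors
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = B
    A[:m, m] = 1.0                       # bordering column of ones
    A[m, :m] = 1.0                       # bordering row enforcing sum_i c_i = 1
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    solution = np.linalg.solve(A, rhs)
    return solution[:m]
```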
The coefficients are then used to extrapolate the variable for the next iteration:
\[
p_{m+1} = \sum_{i=1}^{m} c_i p_i.
\]
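A complete DIIS update, combining the coefficient solve with this extrapolation, can then be written compactly. The following self-contained sketch applies it to a small linear fixed-point problem $p = Mp + b$ as a stand-in for the self-consistent field update; the function name diis_step, the use of the residual $g(p) - p$ as the approximate error vector, the history length, and the toy numbers are all illustrative assumptions rather than part of the method's definition.

```python
import numpy as np

def diis_step(trial_vectors, error_vectors):
    """Return the DIIS extrapolation p_{m+1} = sum_i c_i p_i."""
    m = len(error_vectors)
    E = np.column_stack(error_vectors)
    A = np.zeros((m + 1, m + 1))
    A[:m, :m] = E.T @ E              # B_ij = <e_j, e_i> (real vectors assumed)
    A[:m, m] = 1.0                   # constraint border ...
    A[m, :m] = 1.0                   # ... enforcing sum_i c_i = 1
    rhs = np.zeros(m + 1)
    rhs[m] = 1.0
    c = np.linalg.solve(A, rhs)[:m]  # drop the Lagrange multiplier
    P = np.column_stack(trial_vectors)
    return P @ c                     # p_{m+1} = sum_i c_i p_i

# Toy usage: accelerate the fixed-point iteration p <- M p + b.
M = np.array([[0.7, 0.2],
              [0.1, 0.5]])
b = np.array([1.0, 2.0])

def g(p):
    return M @ p + b                 # contraction with fixed point (I - M)^{-1} b

p = np.zeros(2)
trials, errors = [], []
for _ in range(50):
    p_new = g(p)
    e = p_new - p                    # residual used as the approximate error vector
    trials.append(p_new)
    errors.append(e)
    trials, errors = trials[-4:], errors[-4:]   # keep a short history
    if np.linalg.norm(e) < 1e-10:
        p = p_new
        break
    p = diis_step(trials, errors)

print(p)                                        # DIIS result
print(np.linalg.solve(np.eye(2) - M, b))        # exact fixed point, for comparison
```

In this linear example the extrapolation reproduces the exact fixed point after only a few iterations, whereas the plain iteration would need far more steps; in self-consistent field calculations the same mechanism is what accelerates and stabilizes convergence.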