In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations
A x = b.
Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A^*.
To run the algorithm, choose an initial guess x_0, two other vectors x_0^* and b^*, and a preconditioner M, and compute
r_0 \leftarrow b - A x_0
r_0^* \leftarrow b^* - x_0^* A^*
p_0 \leftarrow M^{-1} r_0
p_0^* \leftarrow r_0^* M^{-1}
Then, for k = 0, 1, \ldots, iterate
\alpha_k \leftarrow \frac{r_k^* M^{-1} r_k}{p_k^* A p_k}
x_{k+1} \leftarrow x_k + \alpha_k \cdot p_k
x_{k+1}^* \leftarrow x_k^* + \overline{\alpha_k} \cdot p_k^*
r_{k+1} \leftarrow r_k - \alpha_k \cdot A p_k
r_{k+1}^* \leftarrow r_k^* - \overline{\alpha_k} \cdot p_k^* A^*
\beta_k \leftarrow \frac{r_{k+1}^* M^{-1} r_{k+1}}{r_k^* M^{-1} r_k}
p_{k+1} \leftarrow M^{-1} r_{k+1} + \beta_k \cdot p_k
p_{k+1}^* \leftarrow r_{k+1}^* M^{-1} + \overline{\beta_k} \cdot p_k^*
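For illustration, the recurrences above translate almost line by line into NumPy. The following is a minimal sketch under stated assumptions, not a production routine: the function name bicg, the fixed iteration count, and the explicit inversion of the dense preconditioner M are choices made here for brevity (a practical code would instead solve with M and monitor the residual for convergence or breakdown).

```python
import numpy as np

def bicg(A, b, b_star, x0, x0_star, M, num_iter=50):
    """Sketch of the preconditioned BiCG recurrences written out above.

    A and M are dense n-by-n arrays, b/b_star the two right-hand sides,
    x0/x0_star the two initial guesses.  Returns the approximations x_k, x_k^*.
    """
    Minv = np.linalg.inv(M)                  # for brevity only; real codes solve with M
    x = x0.astype(complex)
    x_star = x0_star.astype(complex)
    r = b - A @ x                            # r_0 = b - A x_0
    r_star = b_star - x_star @ A.conj().T    # r_0^* = b^* - x_0^* A^*
    p = Minv @ r                             # p_0 = M^{-1} r_0
    p_star = r_star @ Minv                   # p_0^* = r_0^* M^{-1}
    for _ in range(num_iter):
        alpha = (r_star @ Minv @ r) / (p_star @ A @ p)   # can break down if the denominator vanishes
        x = x + alpha * p
        x_star = x_star + np.conj(alpha) * p_star
        r_next = r - alpha * (A @ p)
        r_star_next = r_star - np.conj(alpha) * (p_star @ A.conj().T)
        beta = (r_star_next @ Minv @ r_next) / (r_star @ Minv @ r)
        p = Minv @ r_next + beta * p
        p_star = r_star_next @ Minv + np.conj(beta) * p_star
        r, r_star = r_next, r_star_next
    return x, x_star
```

With M taken to be the identity matrix this sketch reduces to the unpreconditioned recurrences given below.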
In the above formulation, the computed r_k and r_k^* satisfy
r_k = b - A x_k,
r_k^* = b^* - x_k^* A^*,
and thus are the respective residuals corresponding to x_k and x_k^*, viewed as approximate solutions to the systems
A x = b,
x^* A^* = b^*;
here x^* denotes the conjugate transpose of x, and \overline{\alpha_k} denotes the complex conjugate of \alpha_k.
The unpreconditioned version of the algorithm is obtained by taking M to be the identity: choose an initial guess x_0 and compute
r_0 \leftarrow b - A x_0
\hat{r}_0 \leftarrow \hat{b} - \hat{x}_0 A^*
p_0 \leftarrow r_0
\hat{p}_0 \leftarrow \hat{r}_0
and then, for k = 0, 1, \ldots, iterate
\alpha_k \leftarrow \frac{\hat{r}_k r_k}{\hat{p}_k A p_k}
x_{k+1} \leftarrow x_k + \alpha_k \cdot p_k
\hat{x}_{k+1} \leftarrow \hat{x}_k + \alpha_k \cdot \hat{p}_k
r_{k+1} \leftarrow r_k - \alpha_k \cdot A p_k
\hat{r}_{k+1} \leftarrow \hat{r}_k - \alpha_k \cdot \hat{p}_k A^*
\beta_k \leftarrow \frac{\hat{r}_{k+1} r_{k+1}}{\hat{r}_k r_k}
p_{k+1} \leftarrow r_{k+1} + \beta_k \cdot p_k
\hat{p}_{k+1} \leftarrow \hat{r}_{k+1} + \beta_k \cdot \hat{p}_k
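As a quick sanity check of these recurrences, the self-contained sketch below (the test matrix, the choice \hat{b} = b with \hat{x}_0 = x_0 = 0, and the tolerance are arbitrary illustrative choices) runs them on a small nonsymmetric real system and verifies that the recursively updated r_k coincides with b - A x_k.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # small nonsymmetric test matrix
b = rng.standard_normal(n)

x = np.zeros(n)
x_hat = np.zeros(n)
r = b - A @ x
r_hat = b - x_hat @ A.T        # taking b_hat = b; A.T plays the role of A^* for real data
p = r.copy()
p_hat = r_hat.copy()

for k in range(2 * n):
    alpha = (r_hat @ r) / (p_hat @ (A @ p))
    x = x + alpha * p
    x_hat = x_hat + alpha * p_hat
    r_next = r - alpha * (A @ p)
    r_hat_next = r_hat - alpha * (p_hat @ A.T)
    beta = (r_hat_next @ r_next) / (r_hat @ r)
    p = r_next + beta * p
    p_hat = r_hat_next + beta * p_hat
    r, r_hat = r_next, r_hat_next
    if np.linalg.norm(r) < 1e-10:
        break

print("updated r_k equals b - A x_k:", np.allclose(r, b - A @ x))
print("final true residual norm:", np.linalg.norm(b - A @ x))
```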
The biconjugate gradient method is numerically unstable (compare to the biconjugate gradient stabilized method), but very important from a theoretical point of view. Define the iteration steps by
x_k := x_j + P_k A^{-1} \left( b - A x_j \right),
x_k^* := x_j^* + \left( b^* - x_j^* A \right) P_k A^{-1},
where j < k, using the related projection
P_k := u_k \left( v_k^* A u_k \right)^{-1} v_k^* A,
with
u_k = \left[ u_0, u_1, \ldots, u_{k-1} \right],
v_k = \left[ v_0, v_1, \ldots, v_{k-1} \right].
These related projections may be iterated themselves as
P_{k+1} = P_k + \left( 1 - P_k \right) u_k \otimes \frac{v_k^* A \left( 1 - P_k \right)}{v_k^* A \left( 1 - P_k \right) u_k}.
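This recursion can be checked numerically. The sketch below is illustrative only: it assumes randomly chosen A, u_i and v_i (gathered as columns of the arrays U and V), builds P_k directly from its definition, and compares it with the iterated form.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4                          # ambient dimension, number of directions
A = rng.standard_normal((n, n))
U = rng.standard_normal((n, m))      # columns u_0, ..., u_{m-1}
V = rng.standard_normal((n, m))      # columns v_0, ..., v_{m-1}
I = np.eye(n)

def P_direct(k):
    """P_k := u_k (v_k^* A u_k)^{-1} v_k^* A built from the definition."""
    if k == 0:
        return np.zeros((n, n))
    Uk, Vk = U[:, :k], V[:, :k]
    return Uk @ np.linalg.inv(Vk.T @ A @ Uk) @ Vk.T @ A

# Iterate P_{k+1} = P_k + (1 - P_k) u_k (x) v_k^* A (1 - P_k) / (v_k^* A (1 - P_k) u_k).
P = np.zeros((n, n))
for k in range(m):
    u, v = U[:, k], V[:, k]
    row = v @ A @ (I - P)            # v_k^* A (1 - P_k)
    P = P + np.outer((I - P) @ u, row) / (row @ u)
    print(k + 1, np.allclose(P, P_direct(k + 1)))   # expected: True at every step
```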
A relation to Quasi-Newton methods is given by
P_k = A_k^{-1} A,
x_{k+1} = x_k - A_{k+1}^{-1} \left( A x_k - b \right),
where
A_{k+1}^{-1} = A_k^{-1} + \left( 1 - A_k^{-1} A \right) u_k \otimes \frac{v_k^* \left( 1 - A A_k^{-1} \right)}{v_k^* A \left( 1 - A_k^{-1} A \right) u_k}.
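The same kind of data can be used to check this relation numerically. In the sketch below (again assuming random A, u_i, v_i, and starting from A_0^{-1} = 0), the rank-one updates are applied and the result is compared with P_m = A_m^{-1} A computed from the projection's definition.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 4
A = rng.standard_normal((n, n))
U = rng.standard_normal((n, m))      # columns u_0, ..., u_{m-1}
V = rng.standard_normal((n, m))      # columns v_0, ..., v_{m-1}
I = np.eye(n)

# Rank-one updates of the approximate inverse, starting from A_0^{-1} = 0.
Ainv = np.zeros((n, n))
for k in range(m):
    u, v = U[:, k], V[:, k]
    col = (I - Ainv @ A) @ u         # (1 - A_k^{-1} A) u_k
    row = v @ (I - A @ Ainv)         # v_k^* (1 - A A_k^{-1})
    Ainv = Ainv + np.outer(col, row) / (v @ A @ col)

# A_m^{-1} A should reproduce the projection P_m = u_m (v_m^* A u_m)^{-1} v_m^* A.
P_m = U @ np.linalg.inv(V.T @ A @ U) @ V.T @ A
print(np.allclose(Ainv @ A, P_m))    # expected: True
```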
The new directions
p_k = \left( 1 - P_k \right) u_k,
p_k^* = v_k^* A \left( 1 - P_k \right) A^{-1}
are then orthogonal to the residuals:
v_i^* r_k = p_i^* r_k = 0,
r_k^* u_j = r_k^* p_j = 0,
which themselves satisfy
r_k = A \left( 1 - P_k \right) A^{-1} r_j,
r_k^* = r_j^* \left( 1 - P_k \right),
where i, j < k.
The biconjugate gradient method now makes a special choice and uses the setting
u_k = M^{-1} r_k,
v_k^* = r_k^* M^{-1}.
With this particular choice, explicit evaluations of P_k and A^{-1} are avoided, and the algorithm takes the form stated above.
If A = A^* is self-adjoint, x_0^* = x_0 and b^* = b, then r_k = r_k^*, p_k = p_k^* and x_k = x_k^*, and the conjugate gradient method produces the same sequence of iterates at half the computational cost.
The sequences produced by the algorithm are biorthogonal:
p_i^* A p_j = r_i^* M^{-1} r_j = 0 for i \neq j.
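This biorthogonality can be observed directly on the iterates of the unpreconditioned algorithm (M = 1, so u_k = r_k and v_k^* = \hat{r}_k). The sketch below uses an arbitrary small real test problem and prints the largest off-diagonal entries of the two Gram matrices; in exact arithmetic they vanish, and in finite precision they sit at roundoff level before gradually growing, which is the instability mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, steps = 12, 6
A = np.eye(n) + 0.2 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# A few unpreconditioned BiCG steps (b_hat = b, x_0 = x_hat_0 = 0), keeping all iterates.
x, x_hat = np.zeros(n), np.zeros(n)
r, r_hat = b - A @ x, b - x_hat @ A.T
p, p_hat = r.copy(), r_hat.copy()
P, P_hat, R, R_hat = [p], [p_hat], [r], [r_hat]
for _ in range(steps):
    alpha = (r_hat @ r) / (p_hat @ (A @ p))
    x, x_hat = x + alpha * p, x_hat + alpha * p_hat
    r, r_hat = r - alpha * (A @ p), r_hat - alpha * (p_hat @ A.T)
    beta = (r_hat @ r) / (R_hat[-1] @ R[-1])
    p, p_hat = r + beta * p, r_hat + beta * p_hat
    P.append(p); P_hat.append(p_hat); R.append(r); R_hat.append(r_hat)

# p_i^* A p_j and r_i^* r_j (here M = 1) should vanish for i != j.
G_p = np.array([[ph @ A @ pj for pj in P] for ph in P_hat])
G_r = np.array([[rh @ rj for rj in R] for rh in R_hat])
print("max |p_i^* A p_j|, i != j:", np.max(np.abs(G_p - np.diag(np.diag(G_p)))))
print("max |r_i^* r_j|,   i != j:", np.max(np.abs(G_r - np.diag(np.diag(G_r)))))
```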
If P_{j'} is a polynomial with \deg\left(P_{j'}\right) + j < k, then r_k^* P_{j'}\left(M^{-1} A\right) u_j = 0; the algorithm thus produces projections onto the Krylov subspace.
If P_{i'} is a polynomial with i + \deg\left(P_{i'}\right) < k, then v_i^* P_{i'}\left(A M^{-1}\right) r_k = 0.