The conjugate residual method is an iterative numerical method used for solving systems of linear equations. It is a Krylov subspace method very similar to the much more popular conjugate gradient method, with similar construction and convergence properties.
This method is used to solve linear equations of the form
\[ \mathbf{A}\mathbf{x} = \mathbf{b} \]
where A is an invertible and Hermitian matrix, and b is nonzero.
The conjugate residual method differs from the closely related conjugate gradient method in that it involves more numerical operations and requires more storage, but the system matrix is only required to be Hermitian, not Hermitian positive-definite.
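Concretely, the two methods differ only in the inner products used to form their scalars; the conjugate gradient method minimizes the \(\mathbf{A}\)-norm of the error, while the conjugate residual method minimizes the Euclidean norm of the residual:

\begin{align}
\text{CG:} \quad & \alpha_k = \frac{\mathbf{r}_k^\mathrm{T} \mathbf{r}_k}{\mathbf{p}_k^\mathrm{T} \mathbf{A} \mathbf{p}_k}, &
\beta_k &= \frac{\mathbf{r}_{k+1}^\mathrm{T} \mathbf{r}_{k+1}}{\mathbf{r}_k^\mathrm{T} \mathbf{r}_k} \\
\text{CR:} \quad & \alpha_k = \frac{\mathbf{r}_k^\mathrm{T} \mathbf{A} \mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathrm{T} (\mathbf{A}\mathbf{p}_k)}, &
\beta_k &= \frac{\mathbf{r}_{k+1}^\mathrm{T} \mathbf{A} \mathbf{r}_{k+1}}{\mathbf{r}_k^\mathrm{T} \mathbf{A} \mathbf{r}_k}
\end{align}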
Given an (arbitrary) initial estimate of the solution \(\mathbf{x}_0\), the method is outlined below:
\begin{align}
& \mathbf{x}_0 := \text{some initial guess} \\
& \mathbf{r}_0 := \mathbf{b} - \mathbf{A}\mathbf{x}_0 \\
& \mathbf{p}_0 := \mathbf{r}_0 \\
& \text{Iterate, with } k \text{ starting at } 0: \\
& \quad \alpha_k := \frac{\mathbf{r}_k^\mathrm{T} \mathbf{A} \mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathrm{T} (\mathbf{A}\mathbf{p}_k)} \\
& \quad \mathbf{x}_{k+1} := \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
& \quad \mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{A}\mathbf{p}_k \\
& \quad \beta_k := \frac{\mathbf{r}_{k+1}^\mathrm{T} \mathbf{A} \mathbf{r}_{k+1}}{\mathbf{r}_k^\mathrm{T} \mathbf{A} \mathbf{r}_k} \\
& \quad \mathbf{p}_{k+1} := \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
& \quad \mathbf{A}\mathbf{p}_{k+1} := \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k \\
& \quad k := k + 1
\end{align}
The iteration may be stopped once \(\mathbf{x}_k\) has converged, i.e., once \(\|\mathbf{r}_k\|\) is deemed sufficiently small. The only difference between this and the conjugate gradient method is the calculation of \(\alpha_k\) and \(\beta_k\), plus the optional incremental update of \(\mathbf{A}\mathbf{p}_k\) at the end.
Note: the above algorithm can be transformed so as to make only one symmetric matrix–vector multiplication in each iteration.
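The iteration above can be sketched as follows. This is a minimal illustration, not a library routine; the function name, signature, and tolerance defaults are assumptions. The key point is that each pass computes only one matrix–vector product, \(\mathbf{A}\mathbf{r}_{k+1}\), and updates \(\mathbf{A}\mathbf{p}_k\) incrementally.

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Conjugate residual sketch for A x = b with A symmetric (Hermitian).

    Illustrative only: names and defaults are assumptions, not a fixed API.
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()            # A p_0 = A r_0 because p_0 = r_0
    rAr = r @ Ar              # r_k^T A r_k
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        alpha = rAr / (Ap @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        Ar = A @ r            # the single matrix-vector product per iteration
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap   # incremental update: no second product needed
    return x

# Small symmetric positive-definite example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_residual(A, b)
```

For a symmetric \(n \times n\) system, exact arithmetic terminates in at most \(n\) iterations, so the 2×2 example converges almost immediately.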
By making a few substitutions and variable changes, a preconditioned conjugate residual method may be derived in the same way as done for the conjugate gradient method:
\begin{align}
& \mathbf{x}_0 := \text{some initial guess} \\
& \mathbf{r}_0 := \mathbf{M}^{-1}(\mathbf{b} - \mathbf{A}\mathbf{x}_0) \\
& \mathbf{p}_0 := \mathbf{r}_0 \\
& \text{Iterate, with } k \text{ starting at } 0: \\
& \quad \alpha_k := \frac{\mathbf{r}_k^\mathrm{T} \mathbf{A} \mathbf{r}_k}{(\mathbf{A}\mathbf{p}_k)^\mathrm{T} \mathbf{M}^{-1} (\mathbf{A}\mathbf{p}_k)} \\
& \quad \mathbf{x}_{k+1} := \mathbf{x}_k + \alpha_k \mathbf{p}_k \\
& \quad \mathbf{r}_{k+1} := \mathbf{r}_k - \alpha_k \mathbf{M}^{-1} \mathbf{A}\mathbf{p}_k \\
& \quad \beta_k := \frac{\mathbf{r}_{k+1}^\mathrm{T} \mathbf{A} \mathbf{r}_{k+1}}{\mathbf{r}_k^\mathrm{T} \mathbf{A} \mathbf{r}_k} \\
& \quad \mathbf{p}_{k+1} := \mathbf{r}_{k+1} + \beta_k \mathbf{p}_k \\
& \quad \mathbf{A}\mathbf{p}_{k+1} := \mathbf{A}\mathbf{r}_{k+1} + \beta_k \mathbf{A}\mathbf{p}_k \\
& \quad k := k + 1
\end{align}
The preconditioner \(\mathbf{M}^{-1}\) must be symmetric positive definite. Note that the residual vector here is different from the residual vector without preconditioning.
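The preconditioned iteration can be sketched the same way. Again a hedged illustration, not a fixed API: `M_inv` is assumed to be a callable applying the symmetric positive-definite preconditioner \(\mathbf{M}^{-1}\) to a vector, and the Jacobi (diagonal) preconditioner in the example is just one simple choice.

```python
import numpy as np

def preconditioned_cr(A, b, M_inv, x0=None, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate residual sketch.

    M_inv: callable applying the SPD preconditioner M^{-1} to a vector.
    Names and defaults are illustrative assumptions.
    """
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = M_inv(b - A @ x)       # preconditioned residual, not b - A x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        MAp = M_inv(Ap)        # one preconditioner application per iteration
        alpha = rAr / (Ap @ MAp)
        x = x + alpha * p
        r = r - alpha * MAp
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap
    return x

# Jacobi (diagonal) preconditioner: M = diag(A), so M^{-1} v = v / diag(A)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)
x = preconditioned_cr(A, b, lambda v: v / d)
```

The convergence test here is on the preconditioned residual; depending on the application, one may prefer to monitor the true residual \(\mathbf{b} - \mathbf{A}\mathbf{x}_k\) instead.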