Modified Richardson iteration is an iterative method for solving a system of linear equations. Richardson iteration was proposed by Lewis Fry Richardson in his 1910 paper. It is similar to the Jacobi and Gauss–Seidel methods.
We seek the solution to a set of linear equations, expressed in matrix terms as
Ax=b.
The Richardson iteration is
x^{(k+1)} = x^{(k)} + \omega\left(b - Ax^{(k)}\right),

where \omega is a scalar parameter that has to be chosen such that the sequence of iterates x^{(k)} converges.
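Purely as an illustration, the update can be written in a few lines of NumPy (the helper name `richardson`, the residual-based stopping rule, and the iteration cap are illustrative choices, not part of the method itself):

```python
import numpy as np

def richardson(A, b, omega, x0=None, tol=1e-10, max_iter=10_000):
    """Richardson iteration: x_{k+1} = x_k + omega * (b - A @ x_k)."""
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = b - A @ x                    # residual of the current iterate
        if np.linalg.norm(r) < tol:      # stop once the residual is small
            break
        x = x + omega * r                # the Richardson update
    return x
```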
It is easy to see that the method has the correct fixed points, because if it converges, then x^{(k+1)} ≈ x^{(k)} and hence x^{(k)} must approximately solve Ax = b.
Subtracting the exact solution x and introducing the notation e^{(k)} = x^{(k)} - x for the error, we get the equality for the errors

e^{(k+1)} = e^{(k)} - \omega A e^{(k)} = (I - \omega A) e^{(k)}.
Thus,

\|e^{(k+1)}\| = \|(I - \omega A) e^{(k)}\| \leq \|I - \omega A\| \|e^{(k)}\|,

for any vector norm and the corresponding induced matrix norm. Thus, if \|I - \omega A\| < 1, the error shrinks at every step and the method converges.
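As a quick numerical check of this condition (the 2×2 matrix and the value of \omega are arbitrary examples):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])         # a symmetric positive definite example
omega = 0.2
M = np.eye(2) - omega * A          # the iteration matrix I - omega*A
print(np.linalg.norm(M, 2))        # induced 2-norm ~ 0.524 < 1, so the error contracts
```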
Suppose that A is diagonalizable and that (\lambda_j)_j are the eigenvalues of A. The error converges to 0 if |1 - \omega\lambda_j| < 1 for all eigenvalues \lambda_j. If, e.g., all eigenvalues are positive, this can be guaranteed by choosing \omega such that

0 < \omega < \omega_{max}, \qquad \omega_{max} := 2/\lambda_{max}(A).

The optimal choice, minimizing all |1 - \omega\lambda_j|, is

\omega_{opt} := 2/\left(\lambda_{min}(A) + \lambda_{max}(A)\right),

which yields the spectral radius
\min_{\omega\in(0,\omega_{max})} \rho(I - \omega A) = \rho(I - \omega_{opt} A) = 1 - \frac{2}{\kappa(A) + 1},

where \kappa(A) is the spectral condition number of A.
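Reusing the same example matrix from above, the formula is easy to verify numerically (a sketch; the matrix is an arbitrary symmetric positive definite example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam = np.linalg.eigvalsh(A)                  # eigenvalues in ascending order
omega_opt = 2.0 / (lam[0] + lam[-1])         # 2 / (lambda_min + lambda_max)
rho = np.max(np.abs(1.0 - omega_opt * lam))  # spectral radius of I - omega_opt*A
kappa = lam[-1] / lam[0]                     # condition number lambda_max / lambda_min
print(rho, 1.0 - 2.0 / (kappa + 1.0))        # both print ~0.319
```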
If there are both positive and negative eigenvalues, the method will diverge for any \omega whenever the initial error e^{(0)} has nonzero components along the corresponding eigenvectors.
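A small experiment shows this failure mode (the indefinite diagonal matrix is an arbitrary example):

```python
import numpy as np

A = np.diag([1.0, -1.0])           # one positive and one negative eigenvalue
b = np.array([1.0, 1.0])
x = np.zeros(2)
for _ in range(50):
    x = x + 0.5 * (b - A @ x)      # with omega > 0, the error component along the
                                   # negative eigenvalue is amplified at every step
print(np.linalg.norm(x))           # grows without bound (here to ~6e8)
```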
Richardson iteration can also be derived from gradient descent. Consider minimizing the function

F(x) = \frac{1}{2} \left\|\tilde{A}x - \tilde{b}\right\|_2^2.

Since this is a convex function, a sufficient condition for optimality is that the gradient is zero, \nabla F(x) = 0, which gives rise to the equation

\tilde{A}^T \tilde{A} x = \tilde{A}^T \tilde{b}.
Define A = \tilde{A}^T \tilde{A} and b = \tilde{A}^T \tilde{b}. Because of the form of A, it is a positive semi-definite matrix, so it has no negative eigenvalues.
A step of gradient descent is

x^{(k+1)} = x^{(k)} - t \nabla F(x^{(k)}) = x^{(k)} - t\left(Ax^{(k)} - b\right),

which is equivalent to the Richardson iteration with t = \omega.
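A short sketch ties the two together numerically (\tilde{A} and \tilde{b} are random illustrative data, and the step size t is an arbitrary safe choice below 2/\lambda_{max}(A)):

```python
import numpy as np

rng = np.random.default_rng(0)
At = rng.standard_normal((20, 5))        # A-tilde: a tall least-squares matrix
bt = rng.standard_normal(20)             # b-tilde

A = At.T @ At                            # normal-equations matrix, positive semi-definite
b = At.T @ bt
t = 1.0 / np.linalg.norm(A, 2)           # step size t = omega, below 2/lambda_max(A)

x = np.zeros(5)
for _ in range(5000):
    x = x - t * (A @ x - b)              # gradient step == Richardson step with omega = t

print(np.allclose(x, np.linalg.lstsq(At, bt, rcond=None)[0]))  # matches the least-squares solution
```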