Iterative refinement is an iterative method proposed by James H. Wilkinson to improve the accuracy of numerical solutions to systems of linear equations.[1][2]
When solving a linear system $Ax = b$, the computed solution $\hat{x}$ may, due to the accumulation of rounding errors, deviate from the exact solution $x^\star$. Starting with $x_1 = \hat{x}$, iterative refinement computes a sequence $\{x_1, x_2, x_3, \ldots\}$ which converges to $x^\star$ when certain assumptions are met.
For $m = 1, 2, 3, \ldots$, the $m$-th iteration of iterative refinement consists of three steps:

(i) Compute the residual error $r_m = b - A x_m$
(ii) Solve the system for the correction, $A d_m = r_m$
(iii) Add the correction to obtain the revised next solution $x_{m+1} = x_m + d_m$
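The three steps can be sketched as follows (a minimal illustration using NumPy; the `num_iters` parameter and the use of `np.linalg.solve` for the solves are illustrative choices, not part of the algorithm's statement):

```python
import numpy as np

def iterative_refinement(A, b, num_iters=3):
    """Sketch of iterative refinement: an initial solve for x_1 = x-hat,
    followed by num_iters passes of the three refinement steps."""
    x = np.linalg.solve(A, b)        # initial computed solution x_1
    for _ in range(num_iters):
        r = b - A @ x                # (i)   residual r_m = b - A x_m
        d = np.linalg.solve(A, r)    # (ii)  correction: solve A d_m = r_m
        x = x + d                    # (iii) next iterate x_{m+1} = x_m + d_m
    return x
```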
The crucial reasoning behind the refinement algorithm is that although the solution for the correction $d_m$ in step (ii) may indeed be troubled by similar rounding errors as the first solution $\hat{x}$, the calculation of the residual $r_m$ in step (i) is, in comparison, numerically nearly exact: one may not know the true error, but one knows quite accurately how the approximate solution fails to satisfy the target equation. If the residual is small in some sense, then the correction must also be small, and should at the very least steer the current solution $x_m$ closer to the desired one, $x^\star$.
The iterations will stop on their own when the residual $r_m$ is zero, or close enough to zero that the corresponding correction $d_m$ is too small to change the solution $x_m$ which produced it; alternatively, the algorithm stops when $r_m$ is too small to convince the linear algebraist monitoring the progress that it is worth continuing with any further refinements.
Note that the matrix equation solved in step (ii) uses the same matrix $A$ as the original system $Ax = b$. Consequently, an LU decomposition of $A$ computed for the initial solution can be reused in every iteration, making each refinement step relatively cheap.
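A minimal sketch of this reuse, assuming for illustration a Doolittle LU factorization without pivoting (adequate for the diagonally dominant test matrix below; production code would use a pivoted factorization from a library):

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU factorization without pivoting (an illustrative
    assumption; only safe for matrices such as diagonally dominant A).
    Returns unit-lower-triangular L and upper-triangular U with A = L @ U."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    """Forward and back substitution reusing precomputed factors."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                          # solve L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):              # solve U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Factor A once; reuse the factors for the initial solve and for
# the solve with the same matrix A in every step (ii).
A = np.array([[10.0, 2.0, 1.0],
              [2.0, 12.0, 3.0],
              [1.0, 3.0, 9.0]])
b = np.array([13.0, 17.0, 13.0])   # exact solution is (1, 1, 1)
L, U = lu_factor(A)
x = lu_solve(L, U, b)
for _ in range(3):
    r = b - A @ x                  # (i)  residual
    x = x + lu_solve(L, U, r)      # (ii)+(iii) via the cached factors
```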
As a rule of thumb, iterative refinement for Gaussian elimination produces a solution correct to working precision if double the working precision is used in the computation of the residual $r_m$, e.g. by using quad or double extended precision IEEE 754 floating point, and if $A$ is not too ill-conditioned (the number of iterations and the rate of convergence are determined by the condition number $\kappa(A)$).[3]
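This rule of thumb can be demonstrated with a sketch in which the working precision is single (float32) and only the residual is accumulated in double precision; the test matrix, its size, and the iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative well-conditioned (diagonally dominant) test problem,
# with a known exact solution so the true error can be measured.
n = 50
A64 = n * np.eye(n) + rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b64 = A64 @ x_true

# Working precision is single: the solves and the stored iterate
# are float32 throughout.
A32 = A64.astype(np.float32)
b32 = b64.astype(np.float32)
x = np.linalg.solve(A32, b32)

# Only the residual is evaluated in double the working precision
# (unit round-off eps_2 << eps_1), then rounded back for the solve.
for _ in range(3):
    r64 = b64 - A64 @ x.astype(np.float64)
    d = np.linalg.solve(A32, r64.astype(np.float32))
    x = x + d

# The refined iterate is accurate to about working (float32) precision.
rel_err = np.linalg.norm(x.astype(np.float64) - x_true) / np.linalg.norm(x_true)
```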
More formally, assuming that each step (ii) can be solved reasonably accurately, i.e., in mathematical terms, for every $m$ we have

$$A(I + F_m)\, d_m = r_m$$

where $\|F_m\|_\infty < 1$, the relative error in the $m$-th iterate of iterative refinement satisfies

$$\frac{\|x_m - x^\star\|_\infty}{\|x^\star\|_\infty} \le \left(\sigma \kappa(A) \varepsilon_1\right)^m + \mu_1 \varepsilon_1 + n \kappa(A) \mu_2 \varepsilon_2,$$

where
- $\|\cdot\|_\infty$ denotes the $\infty$-norm of a vector,
- $\kappa(A)$ is the $\infty$-condition number of $A$,
- $n$ is the order of $A$,
- $\varepsilon_1$ and $\varepsilon_2$ are unit round-offs of floating-point arithmetic operations,
- $\sigma$, $\mu_1$ and $\mu_2$ are constants depending on $A$, $\varepsilon_1$ and $\varepsilon_2$,

if $A$ is "not too badly conditioned", which in this context means

$$\sigma \kappa(A) \varepsilon_1 \ll 1$$

and implies that $\mu_1$ and $\mu_2$ are of order unity.
The distinction between $\varepsilon_1$ and $\varepsilon_2$ is intended to allow mixed-precision evaluation of $r_m$, where intermediate results are computed with unit round-off $\varepsilon_2$ before the final result is rounded (or truncated) with unit round-off $\varepsilon_1$. All other computations are assumed to be carried out with unit round-off $\varepsilon_1$.