Aitken's delta-squared process explained

In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method, used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926.[1] Its early form was known to Seki Kōwa (end of the 17th century), who applied it to the rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.

Definition

Given a sequence X = (x_n)_{n \in \N}, one associates with this sequence the new sequence

(A X)_n = \frac{x_n x_{n+2} - x_{n+1}^2}{x_{n+2} - 2 x_{n+1} + x_n},

which can, with improved numerical stability, also be written as

(A X)_n = x_n - \frac{(\Delta x_n)^2}{\Delta^2 x_n},

or equivalently as

(A X)_n = x_{n+2} - \frac{(\Delta x_{n+1})^2}{\Delta^2 x_n},

where

\Delta x_n = x_{n+1} - x_n, \quad \Delta x_{n+1} = x_{n+2} - x_{n+1},

and

\Delta^2 x_n = x_n - 2 x_{n+1} + x_{n+2} = \Delta x_{n+1} - \Delta x_n

for n = 0, 1, 2, \ldots
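The numerically stable form of the definition translates directly into code. A minimal sketch in Python (the helper name aitken is ours, assuming the input is a list of floats):

```python
def aitken(x):
    """Apply Aitken's delta-squared transformation to a sequence.

    Uses the numerically stable form (AX)_n = x_n - (Delta x_n)^2 / Delta^2 x_n.
    The result has two fewer terms than the input.
    """
    ax = []
    for n in range(len(x) - 2):
        dx = x[n + 1] - x[n]                             # Delta x_n
        d2x = (x[n + 2] - x[n + 1]) - (x[n + 1] - x[n])  # Delta^2 x_n as a difference of differences
        if d2x == 0:
            raise ZeroDivisionError("Delta^2 x_n = 0: (AX)_n is ill-defined")
        ax.append(x[n] - dx * dx / d2x)
    return ax

# For a purely geometric error, x_n = l + a^n, the transform is exact:
print(aitken([1.0, 0.5, 0.25, 0.125]))  # → [0.0, 0.0]
```

Computing Δ²x_n as (x_{n+2} − x_{n+1}) − (x_{n+1} − x_n), rather than x_n − 2x_{n+1} + x_{n+2}, follows the stability advice discussed below.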

Obviously, A X is ill-defined if \Delta^2 x contains a zero element, or equivalently, if the sequence of first differences has a repeating term.

From a theoretical point of view, if that occurs only for a finite number of indices, one could easily agree to consider the sequence A X restricted to indices n > n_0 with a sufficiently large n_0. From a practical point of view, one does in general rather consider only the first few terms of the sequence, which usually provide the needed precision. Moreover, when numerically computing the sequence, one has to take care to stop the computation when rounding errors in the denominator become too large, where the \Delta^2 operation may cancel too many significant digits. (It would be better for numerical calculation to use \Delta x_{n+1} - \Delta x_n = (x_{n+2} - x_{n+1}) - (x_{n+1} - x_n) rather than computing x_n - 2 x_{n+1} + x_{n+2} directly.)

Properties

Aitken's delta-squared process is a method of acceleration of convergence, and a particular case of a nonlinear sequence transformation.

Convergence of a sequence \{x_n\}_{n=1}^\infty to a limit \ell is called "linear" if there is some number \mu \in (0, 1) for which

\lim_{n \to \infty} \frac{|x_{n+1} - \ell|}{|x_n - \ell|} = \mu\ .

This means that the distance between the sequence and its limit shrinks by nearly the same proportion on every step, and that rate of reduction becomes closer to being constant with every step. (This is also called "geometric convergence"; this form of convergence is common for power series.)

Aitken's method will accelerate the sequence x_n if

\lim_{n \to \infty} \frac{(A x)_n - \ell}{x_n - \ell} = 0\ .

A is not a linear operator, but a constant term drops out, viz. A[x - \ell] = A x - \ell if \ell is a constant. This is clear from the expression of A x in terms of the finite difference operator \Delta.

Although the new process does not in general converge quadratically, it can be shown that for a fixed point process, that is, for an iterated function sequence x_{n+1} = f(x_n) for some function f converging to a fixed point, the convergence is quadratic. In this case, the technique is known as Steffensen's method.

Empirically, the A-operation eliminates the "most important error term". One can check this by considering a sequence of the form x_n = \ell + a^n + b^n, where 0 < b < a < 1: the sequence A x will then go to the limit \ell like b^n goes to zero.
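This can be checked numerically. The sketch below uses illustrative values \ell = 1, a = 0.5, b = 0.1 (our own choices, not from the text):

```python
l, a, b = 1.0, 0.5, 0.1  # illustrative choices with 0 < b < a < 1
x = [l + a**n + b**n for n in range(8)]

def aitken_term(x, n):
    """(A x)_n in the numerically stable form."""
    dx = x[n + 1] - x[n]
    d2x = (x[n + 2] - x[n + 1]) - (x[n + 1] - x[n])
    return x[n] - dx * dx / d2x

# The original error is dominated by a^n; the transformed error is of order b^n.
for n in range(4):
    print(n, abs(x[n] - l), abs(aitken_term(x, n) - l))
```

For example, at n = 4 the original error is about 0.063 (dominated by a^4), while the transformed error is a few times 10^-5, on the order of b^4.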

Geometrically, the graph of an exponential function f(t) that satisfies f(n) = x_n, f(n+1) = x_{n+1} and f(n+2) = x_{n+2} has a horizontal asymptote at

\frac{x_n x_{n+2} - x_{n+1}^2}{x_n - 2 x_{n+1} + x_{n+2}}

(if x_n - 2 x_{n+1} + x_{n+2} \neq 0).

One can also show that if x goes to its limit \ell at a rate strictly greater than 1, A x does not have a better rate of convergence. (In practice, one rarely has e.g. quadratic convergence, which would mean over 30 resp. 100 correct decimal places after 5 resp. 7 iterations (starting with 1 correct digit); usually no acceleration is needed in that case.)

In practice, A x converges much faster to the limit than x does, as demonstrated by the example calculations below. Usually, it is much cheaper to calculate A x (involving only calculation of differences, one multiplication and one division) than to calculate many more terms of the sequence x. Care must be taken, however, to avoid introducing errors due to insufficient precision when calculating the differences in the numerator and denominator of the expression.

Example calculations

Example 1: The value of \sqrt{2} \approx 1.4142136 can be approximated by assuming an initial value for a_0 and iterating the following:

a_{n+1} = \frac{a_n + \frac{2}{a_n}}{2}.

Starting with a_0 = 1:

n   x (iterated value)   Ax
0   1                    1.4285714
1   1.5                  1.4141414
2   1.4166667            1.4142136
3   1.4142157            --
4   1.4142136            --

It is worth noting here that Aitken's method does not save two iteration steps; computation of the first three Ax values required the first five x values. Also, the second Ax value is decidedly inferior to the 4th x value, mostly due to the fact that Aitken's process assumes linear, rather than quadratic, convergence.
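The table above can be reproduced with a few lines of Python (a sketch; the variable names are ours):

```python
# Babylonian iteration for sqrt(2), starting from a0 = 1
a = [1.0]
for _ in range(4):
    a.append((a[-1] + 2.0 / a[-1]) / 2.0)

# Aitken transform: each (Ax)_n consumes the triple x_n, x_{n+1}, x_{n+2},
# so five x values yield only three Ax values
ax = [a[n] - (a[n + 1] - a[n]) ** 2 / ((a[n + 2] - a[n + 1]) - (a[n + 1] - a[n]))
      for n in range(3)]

print([round(v, 7) for v in a])   # → [1.0, 1.5, 1.4166667, 1.4142157, 1.4142136]
print([round(v, 7) for v in ax])  # → [1.4285714, 1.4141414, 1.4142136]
```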

Example 2: The value of \frac{\pi}{4} may be calculated as an infinite sum:

\frac{\pi}{4} = \sum_{n=0}^\infty \frac{(-1)^n}{2n+1} \approx 0.785398

n   term              x (partial sum)   Ax
0   1                 1                 0.79166667
1   -0.33333333       0.66666667        0.78333333
2   0.2               0.86666667        0.78630952
3   -0.14285714       0.72380952        0.78492063
4   0.11111111        0.83492063        0.78567821
5   -9.0909091×10^-2  0.74401154        0.78522034
6   7.6923077×10^-2   0.82093462        0.78551795
7   -6.6666667×10^-2  0.75426795        --
8   5.8823529×10^-2   0.81309148        --

In this example, Aitken's method is applied to a sublinearly converging series, accelerating convergence considerably. It is still sublinear, but much faster than the original convergence: the first Ax value, whose computation required the first three x values, is closer to the limit than the eighth x value.
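This acceleration of the Leibniz series can likewise be checked directly (a sketch; variable names are ours):

```python
# Partial sums of the Leibniz series for pi/4
x, s = [], 0.0
for n in range(9):
    s += (-1) ** n / (2 * n + 1)
    x.append(s)

# Aitken transform of each available triple of partial sums
ax = [x[n] - (x[n + 1] - x[n]) ** 2 / ((x[n + 2] - x[n + 1]) - (x[n + 1] - x[n]))
      for n in range(7)]

# The first Ax value already beats the ninth partial sum
print(round(x[8], 8), round(ax[0], 8), round(ax[6], 8))
# → 0.81309148 0.79166667 0.78551795
```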

Example pseudocode for Aitken extrapolation

The following is an example of using Aitken extrapolation to help find the limit of the sequence x_{n+1} = f(x_n) when given some initial x_0, where the limit of this sequence is assumed to be a fixed point of f (say \alpha = f(\alpha)). For instance, if the sequence is given by

x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)

with starting point x_0 = 1, then the function will be

f(x) := \frac{1}{2}\left(x + \frac{2}{x}\right),

which has \alpha := \sqrt{2} as a fixed point (see Methods of computing square roots); it is this fixed point whose value will be approximated.

This pseudocode also computes the Aitken approximation to f'(\alpha). The Aitken extrapolates will be denoted by aitkenX. During the computation of the extrapolate, it is important to check if the denominator becomes too small, which could happen if we already have a large amount of accuracy; without this check, a large amount of error could be introduced by the division. This small number will be denoted by epsilon. Because the binary representation of the fixed point could be infinite (or at least too large to fit in the available memory), the calculation will stop once the approximation is within tolerance of the true value.

%These choices depend on the problem being solved
x0 = 1                      %The initial value
f(x) = (1/2)*(x + 2/x)      %The function that finds the next element in the sequence
tolerance = 10^-10          %10 digit accuracy is desired
epsilon = 10^-16            %Do not divide by a number smaller than this

maxIterations = 20          %Do not allow the iterations to continue indefinitely
haveWeFoundSolution = false %Were we able to find the solution to within the desired tolerance? not yet

for i = 1 : maxIterations
    x1 = f(x0)
    x2 = f(x1)

    if (x1 ~= x0)
        lambda = absoluteValue((x2 - x1)/(x1 - x0)) %OPTIONAL: Computes an approximation of |f'(fixedPoint)|, which is denoted by lambda
    end

    denominator = (x2 - x1) - (x1 - x0)

    if (absoluteValue(denominator) < epsilon) %To avoid greatly increasing error, do not divide by too small of a number
        print('WARNING: denominator is too small')
        break                                 %Leave the loop
    end

    aitkenX = x2 - ((x2 - x1)^2)/denominator

    if (absoluteValue(aitkenX - x2) < tolerance) %If the value is within tolerance
        print("The fixed point is ", aitkenX)    %Display the result of the Aitken extrapolation
        haveWeFoundSolution = true
        break                                    %Done, so leave the loop
    end

    x0 = aitkenX %Update x0 to start again
end

if (haveWeFoundSolution == false) %If the solution was not found to within the desired tolerance
    print("Warning: Not within tolerance after ", maxIterations, " iterations")
end
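For comparison, here is a direct Python transcription of the pseudocode above (a sketch; the function name aitken_fixed_point is ours, and the tolerance values mirror the listing):

```python
def aitken_fixed_point(f, x0, tolerance=1e-10, epsilon=1e-16, max_iterations=20):
    """Find a fixed point of f by Aitken-accelerated iteration (Steffensen's method)."""
    for _ in range(max_iterations):
        x1 = f(x0)
        x2 = f(x1)
        denominator = (x2 - x1) - (x1 - x0)
        if abs(denominator) < epsilon:
            break  # dividing by a tiny denominator would amplify rounding error
        aitken_x = x2 - (x2 - x1) ** 2 / denominator
        if abs(aitken_x - x2) < tolerance:
            return aitken_x  # within tolerance: done
        x0 = aitken_x  # restart the iteration from the extrapolated value
    return x0  # best approximation found, possibly not within tolerance

# Approximate sqrt(2) from the example sequence x_{n+1} = (x_n + 2/x_n)/2
root = aitken_fixed_point(lambda x: 0.5 * (x + 2.0 / x), 1.0)
print(root)  # ≈ 1.4142135623730951
```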


Notes

  1. Aitken, Alexander (1926). "On Bernoulli's numerical solution of algebraic equations". Proceedings of the Royal Society of Edinburgh. 46: 289–305. doi:10.1017/S0370164600022070.
