The Aberth method, or Aberth–Ehrlich method or Ehrlich–Aberth method, named after Oliver Aberth[1] and Louis W. Ehrlich,[2] is a root-finding algorithm developed in 1967 for simultaneous approximation of all the roots of a univariate polynomial.
This method converges cubically, an improvement over the Durand–Kerner method, another algorithm for approximating all roots at once, which converges quadratically. (However, both algorithms converge linearly at multiple zeros.)
This method is used in MPSolve, which is the reference software for approximating all roots of a polynomial to arbitrary precision.
Let

p(x) = p_n x^n + p_{n-1} x^{n-1} + \cdots + p_1 x + p_0

be a univariate polynomial of degree n with real or complex coefficients. Then there exist complex numbers z_1^*, z_2^*, \ldots, z_n^*, the roots of p(x), that give the factorization

p(x) = p_n \cdot (x - z_1^*) \cdot (x - z_2^*) \cdots (x - z_n^*).

Although those numbers are unknown, upper and lower bounds for their absolute values are computable from the coefficients of the polynomial. Now one can pick n distinct complex numbers as first approximations of the roots of p(x).
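As an illustration of such a choice, the following minimal Python sketch places n distinct starting points on a circle whose radius comes from the Cauchy bound 1 + max|p_i/p_n| on the root moduli; the circle radius and the angular offset are heuristic choices for this example, not the specific strategy analysed in Bini (1996).

import cmath

def initial_approximations(coeffs):
    """Starting points for the roots of
    p(x) = coeffs[0]*x^n + coeffs[1]*x^(n-1) + ... + coeffs[n], coeffs[0] != 0.

    All roots satisfy |z| <= 1 + max|coeffs[i]/coeffs[0]| (Cauchy bound),
    so n points are spread on a circle of that radius; the 0.25 angular
    offset is an arbitrary choice that keeps the points off the real axis.
    """
    n = len(coeffs) - 1
    radius = 1.0 + max(abs(c / coeffs[0]) for c in coeffs[1:])
    return [radius * cmath.exp(2j * cmath.pi * (k + 0.25) / n)
            for k in range(n)]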
Let z_1, \ldots, z_n \in \mathbb{C} be the current approximations of the zeros of p(x). Then offset numbers w_1, \ldots, w_n \in \mathbb{C} are computed as

w_k = \frac{\frac{p(z_k)}{p'(z_k)}}{1 - \frac{p(z_k)}{p'(z_k)} \cdot \sum_{j=1;\, j \ne k}^n \frac{1}{z_k - z_j}},

where p'(z_k) is the derivative of p evaluated at the point z_k.

The next set of approximations of roots of p(x) is then z_1 - w_1, \ldots, z_n - w_n.
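The sketch below is a minimal Python implementation of this update under some stated assumptions: coefficients are given in descending order of degree, the initial_approximations helper sketched earlier (or any set of distinct starting points) supplies z, and the tolerance and iteration cap are arbitrary example values.

def aberth(coeffs, z=None, tol=1e-12, max_iter=100):
    """Approximate all roots of p(x) = coeffs[0]*x^n + ... + coeffs[n]
    by repeatedly subtracting the Aberth offsets w_k from the z_k."""
    n = len(coeffs) - 1

    def p(x):        # Horner evaluation of p
        result = 0
        for c in coeffs:
            result = result * x + c
        return result

    def dp(x):       # Horner evaluation of p'
        result = 0
        for i, c in enumerate(coeffs[:-1]):
            result = result * x + (n - i) * c
        return result

    if z is None:
        z = initial_approximations(coeffs)   # assumed helper, sketched above
    for _ in range(max_iter):
        offsets = []
        for k, zk in enumerate(z):
            newton = p(zk) / dp(zk)          # p(z_k) / p'(z_k)
            repulsion = sum(1 / (zk - zj) for j, zj in enumerate(z) if j != k)
            offsets.append(newton / (1 - newton * repulsion))
        z = [zk - wk for zk, wk in zip(z, offsets)]   # simultaneous update
        if max(abs(w) for w in offsets) < tol:
            break
    return z

For example, aberth([1, -6, 11, -6]) should return three values close to 1, 2 and 3, the roots of x^3 - 6x^2 + 11x - 6.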
Conceptually, this method uses an electrostatic analogy, modeling the approximated zeros as movable negative point charges, which converge toward the true zeros, represented by fixed positive point charges. A direct application of Newton's method to each approximated zero will often cause multiple starting points to incorrectly converge to the same root. The Aberth method avoids this by also modeling the repulsive effect the movable charges have on each other. In this way, when a movable charge has converged on a zero, the two charges cancel out, so that other movable charges are no longer attracted to that location, encouraging them to converge to other "unoccupied" zeros. (Stieltjes also modeled the positions of zeros of polynomials as solutions to electrostatic problems.)
The formula of the Aberth method contains elements of both Newton's method and the Durand–Kerner method. Details of an efficient implementation, especially on the choice of good initial approximations, can be found in Bini (1996).[3]
The updates of the roots may be executed as a simultaneous Jacobi-like iteration, where all new approximations are first computed from the old approximations, or as a sequential Gauss–Seidel-like iteration that uses each new approximation from the moment it is computed.
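As an illustration of the sequential variant, the inner loop of a sketch like the one above can overwrite each z_k as soon as its offset is known (the names p, dp and z are assumed to be defined as in that sketch):

# Gauss-Seidel-like sweep: later offsets in the same pass already see the
# updated approximations, instead of waiting for the whole pass to finish.
for k in range(len(z)):
    newton = p(z[k]) / dp(z[k])
    repulsion = sum(1 / (z[k] - z[j]) for j in range(len(z)) if j != k)
    z[k] = z[k] - newton / (1 - newton * repulsion)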
A very similar method is the Newton–Maehly method. It computes the zeros one after another, but instead of an explicit deflation it divides by the already acquired linear factors on the fly. The Aberth method can be viewed as applying the Newton–Maehly step for the last root while treating the current approximations of the other roots as if they had already been found.[4]
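Concretely, one standard way to write the Newton–Maehly step is: once zeros z_1, \ldots, z_m have been accepted, the next zero is sought by applying Newton's method to p(x)/\prod_{j=1}^m (x - z_j), which gives

x' = x - \frac{1}{\dfrac{p'(x)}{p(x)} - \displaystyle\sum_{j=1}^{m} \dfrac{1}{x - z_j}},

so the accepted linear factors are divided out implicitly rather than by explicit deflation.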
The iteration formula is the univariate Newton iteration for the function
F(x) = \frac{p(x)}{\prod_{j=1;\, j \ne k}^n (x - z_j)}.
If the values z_1, \ldots, z_n are already close to the roots of p(x), then the rational function F(x) is almost linear near the root close to z_k, while its poles at z_1, \ldots, z_{k-1}, z_{k+1}, \ldots, z_n direct the Newton iteration away from the roots of p(x) that are close to them; the root close to z_k therefore has a large basin of attraction.
The Newton step \tfrac{F(x)}{F'(x)} is the reciprocal of the logarithmic derivative of F:

\begin{align}
\frac{F'(x)}{F(x)} &= \frac{d}{dx} \ln|F(x)| \\
&= \frac{d}{dx} \Big( \ln|p(x)| - \sum_{j=1;\, j \ne k}^n \ln|x - z_j| \Big) \\
&= \frac{p'(x)}{p(x)} - \sum_{j=1;\, j \ne k}^n \frac{1}{x - z_j}.
\end{align}
Thus, the new approximation is computed as

z_k' = z_k - \frac{F(z_k)}{F'(z_k)} = z_k - \frac{1}{\dfrac{p'(z_k)}{p(z_k)} - \displaystyle\sum_{j=1;\, j \ne k}^n \dfrac{1}{z_k - z_j}}.
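This agrees with the offset formula above: multiplying the numerator and denominator of the last fraction by p(z_k)/p'(z_k) gives

\frac{1}{\dfrac{p'(z_k)}{p(z_k)} - \displaystyle\sum_{j=1;\, j \ne k}^n \dfrac{1}{z_k - z_j}} = \frac{\dfrac{p(z_k)}{p'(z_k)}}{1 - \dfrac{p(z_k)}{p'(z_k)} \cdot \displaystyle\sum_{j=1;\, j \ne k}^n \dfrac{1}{z_k - z_j}} = w_k,

so that z_k' = z_k - w_k.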