In mathematics, Graeffe's method or the Dandelin–Lobachevsky–Graeffe method is an algorithm for finding all of the roots of a polynomial. It was developed independently by Germinal Pierre Dandelin in 1826 and Lobachevsky in 1834. In 1837 Karl Heinrich Gräffe also discovered the principal idea of the method.[1] The method separates the roots of a polynomial by squaring them repeatedly. This squaring of the roots is done implicitly, that is, by working only on the coefficients of the polynomial. Finally, Viète's formulas are used in order to approximate the roots.
Let p(x) be a polynomial of degree n,
p(x) = (x - x_1)\cdots(x - x_n).
Then
p(-x) = (-1)^n (x + x_1)\cdots(x + x_n).
Let q(x) be the polynomial which has the squares x_1^2, \ldots, x_n^2 as its roots,
q(x) = \left(x - x_1^2\right)\cdots\left(x - x_n^2\right).
Then we can write:
\begin{align}
q(x^2) &= \left(x^2 - x_1^2\right)\cdots\left(x^2 - x_n^2\right)\\
&= (x - x_1)(x + x_1)\cdots(x - x_n)(x + x_n)\\
&= \left\{(x - x_1)\cdots(x - x_n)\right\}\times\left\{(x + x_1)\cdots(x + x_n)\right\}\\
&= p(x)\times\left\{(-1)^n(-x - x_1)\cdots(-x - x_n)\right\}\\
&= p(x)\times\left\{(-1)^n p(-x)\right\}\\
&= (-1)^n p(x)\,p(-x).
\end{align}
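As a small worked check (not from the source): for p(x) = (x-1)(x-2) = x^2 - 3x + 2 one gets
(-1)^2 p(x)\,p(-x) = (x^2 - 3x + 2)(x^2 + 3x + 2) = x^4 - 5x^2 + 4 = q(x^2),
so q(x) = x^2 - 5x + 4 = (x-1)(x-4), whose roots 1 and 4 are indeed the squares of the roots 1 and 2.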
The polynomial q(x) can now be computed by algebraic operations on the coefficients of p(x) alone. Let
\begin{align}
p(x) &= x^n + a_1 x^{n-1} + \cdots + a_{n-1}x + a_n\\
q(x) &= x^n + b_1 x^{n-1} + \cdots + b_{n-1}x + b_n
\end{align}
then the coefficients are related by
b_k = (-1)^k a_k^2 + 2\sum_{j=0}^{k-1} (-1)^j a_j a_{2k-j}, \qquad a_0 = b_0 = 1,
where a_m = 0 for m > n.
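A minimal sketch (not from the source) of one Graeffe step computed directly from this coefficient recurrence, assuming a monic polynomial stored as the descending coefficient list [1, a_1, ..., a_n]; the function name graeffe_step is illustrative.

```python
def graeffe_step(a):
    """One root-squaring step: returns b with roots equal to the squares of the roots of a."""
    n = len(a) - 1
    # Convention a_m = 0 for m > n, needed when 2k - j exceeds n.
    coef = lambda m: a[m] if m <= n else 0.0
    b = [1.0]
    for k in range(1, n + 1):
        s = (-1) ** k * a[k] ** 2
        s += 2 * sum((-1) ** j * a[j] * coef(2 * k - j) for j in range(k))
        b.append(s)
    return b

# Example: p(x) = (x - 1)(x - 2) = x^2 - 3x + 2 has roots 1 and 2;
# one step gives x^2 - 5x + 4 with roots 1 and 4.
print(graeffe_step([1.0, -3.0, 2.0]))  # [1.0, -5.0, 4.0]
```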
Graeffe observed that if one separates p(x) into its odd and even parts,
p(x) = p_e\left(x^2\right) + x\,p_o\left(x^2\right),
then one obtains a simplified algebraic expression for q(x):
q(x) = (-1)^n\left(p_e(x)^2 - x\,p_o(x)^2\right).
This expression involves the squaring of two polynomials of only half the degree, and is therefore used in most implementations of the method.
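A sketch of the same step through this even/odd split (again with illustrative names; here coefficients are stored in ascending order, so that the even and odd parts are just the even- and odd-indexed entries):

```python
def poly_mul(u, v):
    """Product of two polynomials given as ascending coefficient lists."""
    out = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] += ui * vj
    return out

def graeffe_step_split(p):
    """One root-squaring step via q(x) = (-1)^n (p_e(x)^2 - x p_o(x)^2), p ascending."""
    n = len(p) - 1
    pe2 = poly_mul(p[0::2], p[0::2])      # p_e(x)^2 from even-indexed coefficients
    po2 = poly_mul(p[1::2], p[1::2])      # p_o(x)^2 from odd-indexed coefficients
    xpo2 = [0.0] + po2                    # x * p_o(x)^2
    m = max(len(pe2), len(xpo2))
    pe2 += [0.0] * (m - len(pe2))
    xpo2 += [0.0] * (m - len(xpo2))
    return [(-1) ** n * (e - o) for e, o in zip(pe2, xpo2)][: n + 1]

# Example: p(x) = x^2 - 3x + 2, stored ascending as [2, -3, 1], gives x^2 - 5x + 4.
print(graeffe_step_split([2.0, -3.0, 1.0]))  # [4.0, -5.0, 1.0]
```

Only two half-degree polynomials are squared here, which is the saving the text refers to.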
Iterating this procedure several times separates the roots with respect to their magnitudes. Repeating it k times gives a polynomial of degree n,
q^k(y) = y^n + a_1^{(k)} y^{n-1} + \cdots + a_{n-1}^{(k)}\,y + a_n^{(k)},
with roots
y_1 = x_1^{2^k},\quad y_2 = x_2^{2^k},\quad \ldots,\quad y_n = x_n^{2^k}.
If the magnitudes of the roots of the original polynomial were separated by some factor \rho > 1, that is, |x_m| \ge \rho\,|x_{m+1}|, then the roots of the k-th iterate are separated by the rapidly growing factor
\rho^{2^k} \ge 1 + 2^k(\rho - 1).
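As an illustrative calculation: a modest separation of \rho = 1.1 grows after k = 5 squaring steps to \rho^{2^5} = 1.1^{32} \approx 21.1, already far above the lower bound 1 + 32\cdot 0.1 = 4.2.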
Next the Vieta relations are used:
\begin{align}
a_1^{(k)} &= -(y_1 + y_2 + \cdots + y_n)\\
a_2^{(k)} &= y_1 y_2 + y_1 y_3 + \cdots + y_{n-1} y_n\\
&\;\;\vdots\\
a_n^{(k)} &= (-1)^n (y_1 y_2 \cdots y_n).
\end{align}
If the roots x_1, \ldots, x_n are sufficiently separated, say by a factor \rho > 1 with |x_m| \ge \rho\,|x_{m+1}|, then the iterated powers y_1, y_2, \ldots, y_n are separated by the factor \rho^{2^k}, which quickly becomes very large.
The coefficients of the iterated polynomial can then be approximated by their leading term,
a_1^{(k)} \approx -y_1,
a_2^{(k)} \approx y_1 y_2,
and so on, implying
y_1 \approx -a_1^{(k)},\quad y_2 \approx -a_2^{(k)}/a_1^{(k)},\quad \ldots,\quad y_n \approx -a_n^{(k)}/a_{n-1}^{(k)}.
Finally, since |x_m| = |y_m|^{2^{-k}}, logarithms are used in order to find the absolute values of the roots of the original polynomial, \log|x_m| = 2^{-k}\log|y_m|. These magnitudes alone are already useful for generating meaningful starting points for other root-finding methods.
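A sketch of this classical magnitude estimation, reusing the graeffe_step helper from the earlier sketch (the function name root_magnitudes and the choice k = 6 are illustrative):

```python
import math

def root_magnitudes(a, k=6):
    """Estimate |x_1| >= ... >= |x_n| for a monic polynomial with descending coefficients a."""
    for _ in range(k):
        a = graeffe_step(a)
    mags = []
    for m in range(1, len(a)):
        # |y_m| ~ |a_m^(k) / a_(m-1)^(k)|  and  |x_m| = |y_m|^(2^-k).
        mags.append(math.exp(math.log(abs(a[m] / a[m - 1])) / 2 ** k))
    return mags

# Example: (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6.
print(root_magnitudes([1.0, -6.0, 11.0, -6.0]))  # approximately [3.0, 2.0, 1.0]
```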
To also obtain the angle of these roots, a multitude of methods has been proposed, the simplest one being to successively compute the square root of a (possibly complex) root of q^m(y), with m ranging from k down to 1, and to test which of the two sign variants is a root of q^{m-1}(x). Before continuing to the roots of q^{m-2}(x), it may be necessary to numerically improve the accuracy of the root approximations for q^{m-1}(x), for instance by Newton's method.
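A sketch of this simple sign-recovery loop, reusing graeffe_step from above and storing the iterates along the way; the names recover_root and polyval are illustrative, and the intermediate refinement step mentioned above is omitted.

```python
import cmath

def polyval(a, x):
    """Evaluate a polynomial with descending coefficients a at x by Horner's rule."""
    v = 0.0
    for c in a:
        v = v * x + c
    return v

def recover_root(p, y, k):
    """Walk a root y of the k-th Graeffe iterate of p back to a root of p itself."""
    iterates = [p]
    for _ in range(k):
        iterates.append(graeffe_step(iterates[-1]))
    for m in range(k, 0, -1):
        r = cmath.sqrt(y)
        # Keep the sign variant that is closer to a root of the previous iterate.
        y = r if abs(polyval(iterates[m - 1], r)) <= abs(polyval(iterates[m - 1], -r)) else -r
    return y

# Example: p(x) = x^2 - 3x + 2; after k = 3 steps the iterate has the root 2^8 = 256.
print(recover_root([1.0, -3.0, 2.0], 256.0, 3))  # close to 2
```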
Graeffe's method works best for polynomials with simple real roots, though it can be adapted for polynomials with complex roots and coefficients, and roots with higher multiplicity. For instance, it has been observed[2] that for a root x_{\ell+1} = x_{\ell+2} = \cdots = x_{\ell+d} with multiplicity d, the fractions
\left|\frac{a_{\ell+i}^{(k)}}{a_{\ell}^{(k)}}\right|
tend to
\binom{d}{i}\,\left|x_{\ell+1}\right|^{i\cdot 2^k}
for i = 0, 1, \ldots, d.
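As a quick check of this pattern (not from the source): for p(x) = (x-2)^2\left(x-\tfrac{1}{2}\right) one has \ell = 0 and d = 2, and after k stages |a_1^{(k)}/a_0^{(k)}| \approx 2\cdot 2^{2^k} and |a_2^{(k)}/a_0^{(k)}| \approx 2^{2\cdot 2^k}, matching \binom{2}{1} and \binom{2}{2} times the corresponding powers of |x_1| = 2.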
From a numerical point of view, this method is problematic since the coefficients of the iterated polynomials very quickly span many orders of magnitude, which implies serious numerical errors. A second, but minor, concern is that many different polynomials lead to the same Graeffe iterates.
The tangential Graeffe iteration replaces the numbers by truncated power series of degree 1, also known as dual numbers. Symbolically, this is achieved by introducing an "algebraic infinitesimal" \varepsilon with the defining property \varepsilon^2 = 0. Then the polynomial p(x + \varepsilon) = p(x) + \varepsilon\,p'(x) has the roots x_m - \varepsilon, with powers
(x_m - \varepsilon)^{2^k} = x_m^{2^k} - \varepsilon\,2^k\,x_m^{2^k - 1} = y_m + \varepsilon\,\dot y_m.
Thus the value of x_m is easily obtained as the fraction x_m = -2^k\,\tfrac{y_m}{\dot y_m}.
This kind of computation with infinitesimals is easy to implement, analogously to the computation with complex numbers. If one assumes complex coordinates or an initial shift by some randomly chosen complex number, then all roots of the polynomial will be distinct and consequently recoverable with the iteration.
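A self-contained sketch of this idea under the same monic, descending-coefficient convention as before: the coefficients of p(x + \varepsilon) are dual numbers, the root-squaring recurrence is run on them, and a simple, well separated root is then read off as x_m = -2^k y_m/\dot y_m. The class Dual and the function names are illustrative, not from the source.

```python
class Dual:
    """Numbers a + b*eps with eps^2 = 0 (truncated degree-1 power series)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def graeffe_step_dual(a):
    """The coefficient recurrence from above, evaluated over dual numbers."""
    n = len(a) - 1
    coef = lambda m: a[m] if m <= n else Dual(0.0)
    b = [Dual(1.0)]
    for k in range(1, n + 1):
        s = (-1) ** k * (a[k] * a[k])
        s = s + 2 * sum(((-1) ** j * (a[j] * coef(2 * k - j)) for j in range(k)), Dual(0.0))
        b.append(s)
    return b

def tangential_roots(p, k=5):
    """Recover the signed roots of a monic real polynomial with well separated simple roots."""
    n = len(p) - 1
    # Coefficients of p(x + eps) = p(x) + eps*p'(x), descending powers of x.
    a = [Dual(p[0])] + [Dual(p[i], (n - i + 1) * p[i - 1]) for i in range(1, n + 1)]
    for _ in range(k):
        a = graeffe_step_dual(a)
    roots = []
    for m in range(1, n + 1):
        num, den = a[m], a[m - 1]              # y_m + eps*ydot_m ~ -a_m^(k)/a_(m-1)^(k)
        y = -num.a / den.a
        ydot = -(num.b * den.a - num.a * den.b) / den.a ** 2
        roots.append(-2 ** k * y / ydot)       # x_m = -2^k * y_m / ydot_m
    return roots

# Example: (x - 1)(x + 2)(x - 3) = x^3 - 2x^2 - 5x + 6.
print(tangential_roots([1.0, -2.0, -5.0, 6.0]))  # approximately [3.0, -2.0, 1.0]
```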
Every polynomial can be scaled in domain and range such that in the resulting polynomial the first and the last coefficient have size one. If the size of the inner coefficients is bounded by M, then the size of the inner coefficients after one stage of the Graeffe iteration is bounded by nM^2, and after k stages by n^{2^k-1}M^{2^k}.
To overcome the limit posed by the growth of the powers, Malajovich–Zubelli propose to represent coefficients and intermediate results in the kth stage of the algorithm by a scaled polar form
c = \alpha\,e^{-2^k r},
where \alpha = \tfrac{c}{|c|} is a complex number of unit length and r = -2^{-k}\log|c| is a real number. Splitting off the factor 2^k in the exponent keeps r of moderate size even though the coefficients themselves grow doubly exponentially in k.
Multiplication of two numbers of this type is straightforward, whereas addition is performed following the factorization c_3 = c_1 + c_2 = |c_1|\cdot\left(\alpha_1 + \alpha_2\,\tfrac{|c_2|}{|c_1|}\right), where c_1 is chosen as the number of larger absolute value, that is, r_1 < r_2. Thus
\alpha_3 = \tfrac{s}{|s|}
and
r_3 = r_1 - 2^{-k}\log|s|
with
s = \alpha_1 + \alpha_2\,e^{-2^k(r_2 - r_1)}.
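A sketch of this renormalized arithmetic, with illustrative helper names and the pair (\alpha, r) stored as a Python tuple:

```python
import math

def to_polar(c, k):
    """Represent a nonzero complex number c as (alpha, r) with c = alpha*exp(-2^k * r)."""
    return c / abs(c), -math.log(abs(c)) / 2 ** k

def from_polar(x, k):
    alpha, r = x
    return alpha * math.exp(-2 ** k * r)

def mul(x, y):
    # |c1*c2| = exp(-2^k*(r1 + r2)), and the unit parts multiply.
    return x[0] * y[0], x[1] + y[1]

def add(x, y, k):
    # Make x the summand of larger absolute value, i.e. r1 <= r2.
    if x[1] > y[1]:
        x, y = y, x
    (a1, r1), (a2, r2) = x, y
    s = a1 + a2 * math.exp(-2 ** k * (r2 - r1))
    return s / abs(s), r1 - math.log(abs(s)) / 2 ** k

# Quick consistency check at stage k = 4.
k = 4
x, y = to_polar(3 + 4j, k), to_polar(-1 + 2j, k)
print(from_polar(mul(x, y), k))      # close to (3+4j)*(-1+2j) = -11+2j
print(from_polar(add(x, y, k), k))   # close to (3+4j)+(-1+2j) =   2+6j
```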
The coefficients a_0, a_1, \ldots, a_n of the final stage k of the Graeffe iteration are represented by pairs (\alpha_m, r_m), m = 0, \ldots, n. By identifying the corners of the convex envelope of the point set \{(m, r_m) : m = 0, \ldots, n\} one can determine the multiplicities of the roots of the polynomial.
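A sketch of one way to read magnitudes and multiplicities off that point set, under the approximation r_m \approx -(\log|x_1| + \cdots + \log|x_m|) that holds after enough stages; the hull construction and function names are illustrative:

```python
import math

def lower_hull(points):
    """Lower convex envelope of points given in increasing order of their first coordinate."""
    hull = []
    for p in points:
        while len(hull) >= 2 and (
            (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-2][0])
            >= (p[1] - hull[-2][1]) * (hull[-1][0] - hull[-2][0])
        ):
            hull.pop()                      # drop points on or above the chord
        hull.append(p)
    return hull

def magnitudes_from_r(r):
    """r = [r_0, ..., r_n]; returns one (magnitude, multiplicity) pair per hull segment."""
    hull = lower_hull(list(enumerate(r)))
    out = []
    for (m0, r0), (m1, r1) in zip(hull, hull[1:]):
        slope = (r1 - r0) / (m1 - m0)       # slope ~ -log of the cluster magnitude
        out.append((math.exp(-slope), m1 - m0))
    return out

# Example: roots of magnitudes 3, 3, 0.5 give r_m ~ -(sum of the m largest log-magnitudes).
r = [0.0, -math.log(3), -2 * math.log(3), -2 * math.log(3) - math.log(0.5)]
print(magnitudes_from_r(r))  # approximately [(3.0, 2), (0.5, 1)]
```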