In numerical analysis, the Weierstrass method or Durand–Kerner method, discovered by Karl Weierstrass in 1891 and rediscovered independently by Durand in 1960 and Kerner in 1966, is a root-finding algorithm for solving polynomial equations.[1] In other words, the method can be used to solve numerically the equation
f(x) = 0,
where f is a given polynomial, which can be taken to be scaled so that the leading coefficient is 1.
This explanation considers equations of degree four. It is easily generalized to other degrees.
Let the polynomial f be defined by
f(x) = x^4 + ax^3 + bx^2 + cx + d
for all x.
The known numbers a, b, c, d are the coefficients.
Let the (potentially complex) numbers P, Q, R, S be the roots of this polynomial f.
Then
f(x)=(x-P)(x-Q)(x-R)(x-S)
for all x. One can isolate the value P from this equation:
P = x - \frac{f(x)}{(x-Q)(x-R)(x-S)}.

So if used as a fixed-point iteration

x_1 := x_0 - \frac{f(x_0)}{(x_0-Q)(x_0-R)(x_0-S)},

it is strongly stable: since f(x_0) = (x_0-P)(x_0-Q)(x_0-R)(x_0-S), every initial point x_0 \ne Q, R, S delivers after one iteration the root P = x_1.
Furthermore, if one replaces the zeros Q, R and S by approximations q ≈ Q, r ≈ R, s ≈ S, such that q, r, s are not equal to P, then P is still a fixed point of the perturbed fixed-point iteration

x_{k+1} := x_k - \frac{f(x_k)}{(x_k-q)(x_k-r)(x_k-s)},

since

P - \frac{f(P)}{(P-q)(P-r)(P-s)} = P - 0 = P.

Note that the denominator is still different from zero. This fixed-point iteration is a contraction mapping for x around P.
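As an illustrative sketch (not part of the original derivation), the following Python snippet checks this contraction numerically for the quartic with roots 1, 2, 3, 4; the perturbed values q, r, s and the starting point are arbitrary choices.

```python
# Perturbed fixed-point iteration for the root P = 1 of
# f(x) = (x - 1)(x - 2)(x - 3)(x - 4), using inexact values
# q, r, s in place of the other roots 2, 3, 4.

def f(x):
    return (x - 1) * (x - 2) * (x - 3) * (x - 4)

q, r, s = 2.1, 2.9, 4.05   # rough approximations of 2, 3, 4
x = 1.3                    # initial guess near P = 1

for _ in range(30):
    x = x - f(x) / ((x - q) * (x - r) * (x - s))

print(abs(x - 1.0))        # tiny: the iteration has contracted onto P
```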
The clue to the method now is to combine the fixed-point iteration for P with similar iterations for Q, R, S into a simultaneous iteration for all roots.
Initialize p, q, r, s:
p_0 := (0.4 + 0.9i)^0,
q_0 := (0.4 + 0.9i)^1,
r_0 := (0.4 + 0.9i)^2,
s_0 := (0.4 + 0.9i)^3.
There is nothing special about choosing 0.4 + 0.9i except that it is neither a real number nor a root of unity.
Make the substitutions for n = 1, 2, 3, ...:
p_n = p_{n-1} - \frac{f(p_{n-1})}{(p_{n-1}-q_{n-1})(p_{n-1}-r_{n-1})(p_{n-1}-s_{n-1})},

q_n = q_{n-1} - \frac{f(q_{n-1})}{(q_{n-1}-p_n)(q_{n-1}-r_{n-1})(q_{n-1}-s_{n-1})},

r_n = r_{n-1} - \frac{f(r_{n-1})}{(r_{n-1}-p_n)(r_{n-1}-q_n)(r_{n-1}-s_{n-1})},

s_n = s_{n-1} - \frac{f(s_{n-1})}{(s_{n-1}-p_n)(s_{n-1}-q_n)(s_{n-1}-r_n)}.
Re-iterate until the numbers p, q, r, s essentially stop changing relative to the desired precision. They then have the values P, Q, R, S in some order and in the chosen precision. So the problem is solved.
Note that complex number arithmetic must be used,and that the roots are found simultaneously rather than one at a time.
This iteration procedure, like the Gauss–Seidel method for linear equations,computes one number at a time based on the already computed numbers.A variant of this procedure, like the Jacobi method,computes a vector of root approximations at a time.Both variants are effective root-finding algorithms.
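The procedure above can be sketched in Python (a minimal illustration of mine, not a reference implementation); the function below uses the Gauss–Seidel-style updates, for a monic polynomial of any degree given by its remaining coefficients:

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=500):
    """Simultaneously approximate all roots of the monic polynomial
    x^n + coeffs[0]*x^(n-1) + ... + coeffs[-1].

    coeffs -- list of the n trailing coefficients (real or complex).
    Returns a list of n complex root approximations.
    """
    n = len(coeffs)

    def f(x):
        # Horner evaluation of the monic polynomial.
        y = 1.0 + 0.0j
        for c in coeffs:
            y = y * x + c
        return y

    # Initial guesses: powers of 0.4 + 0.9i, which is neither real
    # nor a root of unity.
    z = [(0.4 + 0.9j) ** k for k in range(n)]

    for _ in range(max_iter):
        change = 0.0
        for k in range(n):
            den = 1.0 + 0.0j
            for j in range(n):
                if j != k:
                    den *= z[k] - z[j]   # uses already-updated values
            step = f(z[k]) / den
            z[k] -= step
            change = max(change, abs(step))
        if change < tol:
            break
    return z
```

For example, `durand_kerner([-10, 35, -50, 24])` approximates the roots 1, 2, 3, 4 of x^4 - 10x^3 + 35x^2 - 50x + 24, in some order.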
One could also choose the initial values for p, q, r, s by some other procedure, even randomly, but in a way that they lie inside some not-too-large circle that also contains the roots of f(x), e.g. the circle around the origin with radius 1 + max(|a|, |b|, |c|, |d|), and that they are not too close to each other, which may increasingly become a concern as the degree of the polynomial increases.
If the coefficients are real and the polynomial has odd degree, then it must have at least one real root. To find this, use a real value of p0 as the initial guess and make q0 and r0, etc., complex conjugate pairs. Then the iteration will preserve these properties; that is, pn will always be real, and qn and rn, etc., will always be conjugate. In this way, the pn will converge to a real root P. Alternatively, make all of the initial guesses real; they will remain so.
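A small Python check of this symmetry (my own illustration): it uses the Jacobi-style variant, in which all updates are computed from the previous iterate, so the conjugate symmetry is preserved exactly; the cubic x^3 - 3x^2 + 3x - 5 has one real root and a conjugate pair.

```python
def f(x):
    return x**3 - 3 * x**2 + 3 * x - 5

# One real initial guess plus a complex conjugate pair.
p = complex(1.0, 0.0)
q = complex(0.4, 0.9)
r = q.conjugate()

for _ in range(100):
    # Jacobi-style step: all three updates use the previous values.
    p_new = p - f(p) / ((p - q) * (p - r))
    q_new = q - f(q) / ((q - p) * (q - r))
    r_new = r - f(r) / ((r - p) * (r - q))
    p, q, r = p_new, q_new, r_new

# p stays real, q and r stay conjugate, and p converges
# to the real root of f.
```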
This example is from the reference 1992. The equation solved is x^3 - 3x^2 + 3x - 5 = 0 (a monic cubic whose roots appear in the table below). The first 4 iterations move p, q, r seemingly chaotically, but then the roots are located to 1 decimal. After iteration number 5 we have 4 correct decimals, and iteration number 6 confirms that the computed roots are fixed. This general behaviour is characteristic of the method. Also notice that, in this example, the roots are used as soon as they are computed in each iteration: the update of each column uses the freshly computed values from the preceding columns.
it.-no. | p | q | r
---|---|---|---
0 | +1.0000 + 0.0000i | +0.4000 + 0.9000i | -0.6500 + 0.7200i
1 | +1.3608 + 2.0222i | -0.3658 + 2.4838i | -2.3858 - 0.0284i
2 | +2.6597 + 2.7137i | +0.5977 + 0.8225i | -0.6320 - 1.6716i
3 | +2.2704 + 0.3880i | +0.1312 + 1.3128i | +0.2821 - 1.5015i
4 | +2.5428 - 0.0153i | +0.2044 + 1.3716i | +0.2056 - 1.3721i
5 | +2.5874 + 0.0000i | +0.2063 + 1.3747i | +0.2063 - 1.3747i
6 | +2.5874 + 0.0000i | +0.2063 + 1.3747i | +0.2063 - 1.3747i
For every n-tuple of complex numbers, there is exactly one monic polynomial of degree n that has them as its zeros (keeping multiplicities). This polynomial is given by multiplying all the corresponding linear factors, that is
g_{\vec z}(X) = (X - z_1) \cdots (X - z_n).

This polynomial has coefficients that depend on the prescribed zeros,

g_{\vec z}(X) = X^n + g_{n-1}(\vec z) X^{n-1} + \cdots + g_0(\vec z).

Those coefficients are, up to a sign, the elementary symmetric polynomials \alpha_1(\vec z), \dots, \alpha_n(\vec z).
To find all the roots of a given polynomial f(X) = X^n + c_{n-1}X^{n-1} + \cdots + c_1 X + c_0, one therefore has to find a vector \vec z satisfying the system

\begin{matrix} c_0 &=& g_0(\vec z) &=& (-1)^n \alpha_n(\vec z) &=& (-1)^n z_1 \cdots z_n \\ c_1 &=& g_1(\vec z) &=& (-1)^{n-1} \alpha_{n-1}(\vec z) \\ &\vdots& \\ c_{n-1} &=& g_{n-1}(\vec z) &=& -\alpha_1(\vec z) &=& -(z_1 + z_2 + \cdots + z_n). \end{matrix}
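These identities can be spot-checked numerically; the following Python fragment (my own illustration) does so for the roots z = (1, 2, 3), where g(X) = X^3 - 6X^2 + 11X - 6:

```python
from itertools import combinations
from math import prod

z = [1, 2, 3]
n = len(z)

# Coefficients of g(X) = (X - 1)(X - 2)(X - 3), highest power first,
# built by multiplying out the linear factors one at a time.
coeffs = [1]
for root in z:
    coeffs = [a - root * b for a, b in zip(coeffs + [0], [0] + coeffs)]

# Elementary symmetric polynomials alpha_1, ..., alpha_n of z.
alpha = [sum(prod(c) for c in combinations(z, k)) for k in range(1, n + 1)]

# coeffs[k] is the coefficient of X^(n-k), i.e. c_{n-k} = (-1)^k alpha_k.
checks = [coeffs[k] == (-1) ** k * alpha[k - 1] for k in range(1, n + 1)]
```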
The Durand–Kerner method is obtained as the multidimensional Newton's method applied to this system. It is algebraically more comfortable to treat those identities of coefficients as the identity of the corresponding polynomials,
g_{\vec z}(X) = f(X).

In Newton's method one looks, given some initial vector \vec z, for an increment vector \vec w such that

g_{\vec z + \vec w}(X) = f(X)

is satisfied up to second- and higher-order terms in the increment. For this one solves the identity

f(X) - g_{\vec z}(X) = -\sum_{k=1}^n w_k \prod_{j \ne k} (X - z_j).

If the numbers z_1, \dots, z_n are pairwise different, then the polynomials in the terms of the right-hand side form a basis of the n-dimensional space \mathbb{C}[X]_{n-1} of polynomials with maximal degree n - 1, so a solution \vec w to the increment equation exists in this case. Its coordinates are obtained by evaluating the increment equation

-\sum_{k=1}^n w_k \prod_{j \ne k} (X - z_j) = f(X) - \prod_{j=1}^n (X - z_j)

at the points X = z_k, which results in

-w_k \prod_{j \ne k} (z_k - z_j) = -w_k\, g_{\vec z}'(z_k) = f(z_k), \qquad \text{that is,} \qquad w_k = -\frac{f(z_k)}{\prod_{j \ne k} (z_k - z_j)}.
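The derivation can be verified numerically. The sketch below (my own illustration, with arbitrarily chosen approximations z of the roots 1, 2, 3, 4) computes the Weierstrass updates and confirms the increment equation at an arbitrary sample point:

```python
from math import prod

def f(x):
    # Monic test polynomial with roots 1, 2, 3, 4.
    return (x - 1) * (x - 2) * (x - 3) * (x - 4)

z = [1.1, 1.9, 3.2, 3.9]    # current approximations of the roots
n = len(z)

# Weierstrass updates w_k = -f(z_k) / prod_{j != k} (z_k - z_j).
w = [-f(z[k]) / prod(z[k] - z[j] for j in range(n) if j != k)
     for k in range(n)]

# Both sides of  f(X) - g_z(X) = -sum_k w_k prod_{j != k} (X - z_j)
# are polynomials of degree <= n - 1 agreeing at the n points z_k,
# hence they agree everywhere; check at an arbitrary point.
X = 0.37
lhs = f(X) - prod(X - zk for zk in z)
rhs = -sum(w[k] * prod(X - z[j] for j in range(n) if j != k)
           for k in range(n))
```

Note that z[k] + w[k] is exactly one Durand–Kerner (Jacobi-variant) step from z[k].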
In the quotient ring (algebra) of residue classes modulo ƒ(X), the multiplication by X defines an endomorphism that has the zeros of ƒ(X) as eigenvalues with the corresponding multiplicities. Choosing a basis, the multiplication operator is represented by its coefficient matrix A, the companion matrix of ƒ(X) for this basis.
Since every polynomial can be reduced modulo ƒ(X) to a polynomial of degree n - 1 or lower, the space of residue classes can be identified with the space of polynomials of degree bounded by n - 1. A problem-specific basis can be taken from Lagrange interpolation as the set of n polynomials

b_k(X) = \prod_{1 \le j \le n,\; j \ne k} (X - z_j), \qquad k = 1, \dots, n,

where z_1, \dots, z_n \in \mathbb{C} are pairwise different. Note that the kernel polynomials of the Lagrange interpolation are L_k(X) = \frac{b_k(X)}{b_k(z_k)}.

For the multiplication operator applied to the basis polynomials one obtains from the Lagrange interpolation

X \cdot b_k(X) \bmod f(X) = X \cdot b_k(X) - f(X) = \sum_{j=1}^n \left( z_j \cdot b_k(z_j) - f(z_j) \right) \cdot \frac{b_j(X)}{b_j(z_j)} = z_k \cdot b_k(X) + \sum_{j=1}^n w_j \cdot b_j(X),

where w_j = -\frac{f(z_j)}{b_j(z_j)} are again the Weierstrass updates.

The companion matrix of ƒ(X) is therefore

A = \operatorname{diag}(z_1, \dots, z_n) + \begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} \cdot (w_1, \dots, w_n).
From the transposed matrix case of the Gershgorin circle theorem it follows that all eigenvalues of A, that is, all roots of ƒ(X), are contained in the union of the disks
D(a_{k,k}, r_k), \qquad r_k = \sum_{j \ne k} |a_{j,k}|.

Here one has a_{k,k} = z_k + w_k and r_k = (n-1)\,|w_k|, so every root of ƒ(X) lies in one of the disks with center z_k + w_k and radius (n-1)|w_k|. If the approximations z_1, \dots, z_n \in \mathbb{C} are well separated and the updates w_k are small, these disks are disjoint and each one isolates a single root.
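With numpy one can illustrate both facts (my own sketch; the approximations are arbitrary): the matrix A built from z and the Weierstrass updates w has exactly the roots of f as eigenvalues, and each root lies in one of the column Gershgorin disks D(z_k + w_k, (n-1)|w_k|).

```python
import numpy as np

roots = np.array([1.0, 2.0, 3.0, 4.0])   # true zeros of the monic f
z = np.array([1.1, 1.9, 3.2, 3.9])       # approximations of the roots
n = len(z)

def f(x):
    return np.prod(x - roots)

# Weierstrass updates w_k = -f(z_k) / prod_{j != k} (z_k - z_j).
w = np.array([-f(z[k]) / np.prod([z[k] - z[j] for j in range(n) if j != k])
              for k in range(n)])

# Companion matrix A = diag(z) + (1, ..., 1)^T (w_1, ..., w_n).
A = np.diag(z) + np.outer(np.ones(n), w)

eigenvalues = np.sort(np.linalg.eigvals(A).real)

# Column Gershgorin disks: centers z_k + w_k, radii (n - 1) |w_k|.
centers = z + w
radii = (n - 1) * np.abs(w)
```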
Every conjugate matrix T A T^{-1} of A is likewise a companion matrix of ƒ(X), and applying the Gershgorin theorem to it may yield different disks. Choosing T as a diagonal matrix leaves the structure of A invariant; the root close to z_k is, independently of T, contained in any isolated circle with center z_k. Choosing the optimal diagonal matrix T for every index results in better estimates (see ref. Petkovic et al. 1995).
The connection between the Taylor series expansion and Newton's method suggests that the distance from z_k + w_k to the corresponding root is of order O(|w_k|^2), provided the root is well isolated from the others and the approximation z_k is sufficiently close to it. In the case of the Durand–Kerner method, convergence is therefore quadratic once the vector \vec z = (z_1, \dots, z_n) is close to some permutation of the vector of roots of f.
For the conclusion of linear convergence there is a more specific result (see ref. Petkovic et al. 1995). If the initial vector
\vec z = (z_1, \dots, z_n) and its vector of Weierstrass updates \vec w = (w_1, \dots, w_n) satisfy the inequality

\max_{1 \le k \le n} |w_k| \le \frac{1}{5n} \min_{j \ne k} |z_k - z_j|,

then this inequality also holds for all iterates, all the inclusion disks

D\left(z_k + w_k,\; (n-1)\,|w_k|\right)

are disjoint, and linear convergence with a contraction factor of 1/2 holds. Moreover, the inclusion disks can in this case be chosen as

D\left(z_k + w_k, \tfrac14 |w_k|\right), \qquad k = 1, \dots, n,

each containing exactly one zero of f.
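This condition is easy to test numerically; the sketch below (my own illustration, for f with roots 1, 2, 3, 4 and an arbitrarily chosen nearby initial vector) checks both the hypothesis and the final inclusion disks:

```python
from itertools import combinations
from math import prod

roots = [1.0, 2.0, 3.0, 4.0]
z = [1.001, 1.998, 3.0015, 3.999]   # initial vector close to the roots
n = len(z)

def f(x):
    return prod(x - r for r in roots)

# Weierstrass updates of z.
w = [-f(z[k]) / prod(z[k] - z[j] for j in range(n) if j != k)
     for k in range(n)]

# Hypothesis: max_k |w_k| <= (1 / (5n)) * min_{j != k} |z_k - z_j|.
separation = min(abs(a - b) for a, b in combinations(z, 2))
hypothesis = max(abs(wk) for wk in w) <= separation / (5 * n)

# Conclusion: each disk D(z_k + w_k, |w_k| / 4) contains exactly one zero.
counts = [sum(abs(r - (z[k] + w[k])) <= abs(w[k]) / 4 for r in roots)
          for k in range(n)]
```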
The Weierstrass / Durand–Kerner method is not generally convergent: in other words, it is not true that for every polynomial the set of initial vectors that eventually converges to roots is open and dense. In fact, there are open sets of polynomials that have open sets of initial vectors converging to periodic cycles other than roots (see Reinke et al.).