The fundamental theorem of algebra, also called d'Alembert's theorem[1] or the d'Alembert–Gauss theorem,[2] states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with its imaginary part equal to zero.
Equivalently (by definition), the theorem states that the field of complex numbers is algebraically closed.
The theorem is also stated as follows: every non-zero, single-variable, degree n polynomial with complex coefficients has, counted with multiplicity, exactly n complex roots. The equivalence of the two statements can be proven through the use of successive polynomial division.
Despite its name, it is not fundamental for modern algebra; it was named when algebra was synonymous with the theory of equations.
Peter Roth, in his book Arithmetica Philosophica (published in 1608, at Nürnberg, by Johann Lantzenberger),[3] wrote that a polynomial equation of degree n (with real coefficients) may have n solutions. Albert Girard, in his book L'invention nouvelle en l'Algèbre (published in 1629), asserted that a polynomial equation of degree n has n solutions, but he did not state that they had to be real numbers. Furthermore, he added that his assertion holds "unless the equation is incomplete", by which he meant that no coefficient is equal to 0. However, when he explains in detail what he means, it is clear that he actually believes that his assertion is always true; for instance, he shows that the equation

x^4 = 4x - 3,

although incomplete, has four solutions (counting multiplicities): 1 (twice),

-1 + i\sqrt{2}, \quad\text{and}\quad -1 - i\sqrt{2}.
As will be mentioned again below, it follows from the fundamental theorem of algebra that every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are either 1 or 2. However, in 1702 Leibniz erroneously said that no polynomial of the type x^4 + a^4 (with a real and distinct from 0) can be written in such a way. Later, Nikolaus Bernoulli made the same assertion concerning the polynomial x^4 - 4x^3 + 2x^2 + 4x + 4, but he got a letter from Euler in 1742[4] in which it was shown that this polynomial is equal to

\left(x^2-(2+\alpha)x+1+\sqrt{7}+\alpha\right)\left(x^2-(2-\alpha)x+1+\sqrt{7}-\alpha\right),

where

\alpha=\sqrt{4+2\sqrt{7}}.

Also, Euler pointed out that

x^4+a^4=\left(x^2+a\sqrt{2}\,x+a^2\right)\left(x^2-a\sqrt{2}\,x+a^2\right).
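Both identities can be checked mechanically. The following sketch does so symbolically with the sympy library (an assumption of this illustration, not part of the historical sources):

```python
# Symbolic check of the two factorizations quoted above (sympy assumed available).
import sympy as sp

x = sp.symbols('x')
alpha = sp.sqrt(4 + 2*sp.sqrt(7))

# Euler's factorization of Nicolaus Bernoulli's quartic
bernoulli_factored = (x**2 - (2 + alpha)*x + 1 + sp.sqrt(7) + alpha) \
                   * (x**2 - (2 - alpha)*x + 1 + sp.sqrt(7) - alpha)
print(sp.simplify(sp.expand(bernoulli_factored)))   # x**4 - 4*x**3 + 2*x**2 + 4*x + 4

# Euler's factorization of x^4 + a^4
a = sp.symbols('a', positive=True)
euler_factored = (x**2 + a*sp.sqrt(2)*x + a**2) * (x**2 - a*sp.sqrt(2)*x + a**2)
print(sp.expand(euler_factored))                    # a**4 + x**4
```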
A first attempt at proving the theorem was made by d'Alembert in 1746, but his proof was incomplete. Among other problems, it implicitly assumed a theorem (now known as Puiseux's theorem) which would not be proved until more than a century later, and then only by using the fundamental theorem of algebra itself. Other attempts were made by Euler (1749), de Foncenex (1759), Lagrange (1772), and Laplace (1795). These last four attempts implicitly assumed Girard's assertion; to be more precise, the existence of solutions was assumed and all that remained to be proved was that their form was a + bi for some real numbers a and b. In modern terms, Euler, de Foncenex, Lagrange, and Laplace were assuming the existence of a splitting field of the polynomial p(z).
At the end of the 18th century, two new proofs were published which did not assume the existence of roots, but neither of which was complete. One of them, due to James Wood and mainly algebraic, was published in 1798 and it was totally ignored. Wood's proof had an algebraic gap.[5] The other one was published by Gauss in 1799 and it was mainly geometric, but it had a topological gap, only filled by Alexander Ostrowski in 1920, as discussed in Smale (1981).[6]
The first rigorous proof was published by Argand, an amateur mathematician, in 1806 (and revisited in 1813); it was also here that, for the first time, the fundamental theorem of algebra was stated for polynomials with complex coefficients, rather than just real coefficients. Gauss produced two other proofs in 1816 and another incomplete version of his original proof in 1849.
The first textbook containing a proof of the theorem was Cauchy's Cours d'analyse de l'École Royale Polytechnique (1821). It contained Argand's proof, although Argand is not credited for it.
None of the proofs mentioned so far is constructive. It was Weierstrass who raised for the first time, in the middle of the 19th century, the problem of finding a constructive proof of the fundamental theorem of algebra. He presented his solution, which amounts in modern terms to a combination of the Durand–Kerner method with the homotopy continuation principle, in 1891. Another proof of this kind was obtained by Hellmuth Kneser in 1940 and simplified by his son Martin Kneser in 1981.
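The Durand–Kerner iteration mentioned above is easy to sketch numerically. The example below is only an illustration: the cubic, the customary starting points and the fixed number of sweeps are arbitrary choices, not part of Weierstrass's or Kneser's arguments.

```python
# Minimal Durand-Kerner sketch: approximate all roots of a monic polynomial at once.
import numpy as np

coeffs = [1, -3, 3, -5]                    # p(z) = z^3 - 3z^2 + 3z - 5 (monic, arbitrary example)
n = len(coeffs) - 1

def p(z):
    return np.polyval(coeffs, z)

z = (0.4 + 0.9j) ** np.arange(n)           # customary distinct, non-real starting points

for _ in range(100):                       # a fixed number of sweeps is enough here
    for i in range(n):
        denom = np.prod([z[i] - z[j] for j in range(n) if j != i])
        z[i] -= p(z[i]) / denom

print(np.sort_complex(z))                  # should agree with the roots below
print(np.sort_complex(np.roots(coeffs)))   # cross-check with numpy's root finder
```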
Without using countable choice, it is not possible to constructively prove the fundamental theorem of algebra for complex numbers based on the Dedekind real numbers (which are not constructively equivalent to the Cauchy real numbers without countable choice).[7] However, Fred Richman proved a reformulated version of the theorem that does work.[8]
There are several equivalent formulations of the theorem:

- Every univariate polynomial of positive degree with real coefficients has at least one complex root.
- Every univariate polynomial of positive degree with complex coefficients has at least one complex root.

This implies immediately the previous assertion, as real numbers are also complex numbers. The converse results from the fact that one gets a polynomial with real coefficients by taking the product of a polynomial and its complex conjugate (obtained by replacing each coefficient with its complex conjugate). A root of this product is either a root of the given polynomial, or of its conjugate; in the latter case, the conjugate of this root is a root of the given polynomial.

- Every univariate polynomial of positive degree n with complex coefficients can be factorized as c(x - r_1) \cdots (x - r_n), where c, r_1, \ldots, r_n are complex numbers. The complex numbers r_1, \ldots, r_n are the roots of the polynomial; if a root appears in several factors, it is a multiple root, and the number of its occurrences is its multiplicity.

The proof that this statement results from the previous ones is done by recursion on n: when a root r_1 has been found, the polynomial division by x - r_1 provides a polynomial of degree n - 1 whose roots are the other roots of the given polynomial.

The next two statements are equivalent to the previous ones, although they do not involve any non-real complex number. They can be proved from the previous factorization by remarking that, if r is a non-real root of a polynomial with real coefficients, its complex conjugate \overline{r} is also a root, and (x - r)(x - \overline{r}) is a polynomial of degree two with real coefficients.

- Every univariate polynomial of positive degree with real coefficients can be factorized as c(x - r_1)\cdots(x - r_k)\,p_1 \cdots p_h, where c, r_1, \ldots, r_k are real numbers and each p_i is a monic quadratic polynomial with real coefficients and no real root.
- Every non-constant polynomial with real coefficients can be written as a product of polynomials with real coefficients whose degrees are 1 or 2.
All proofs below involve some mathematical analysis, or at least the topological concept of continuity of real or complex functions. Some also use differentiable or even analytic functions. This requirement has led to the remark that the Fundamental Theorem of Algebra is neither fundamental, nor a theorem of algebra.[9]
Some proofs of the theorem only prove that any non-constant polynomial with real coefficients has some complex root. This lemma is enough to establish the general case because, given a non-constant polynomial p(z) with complex coefficients, the polynomial

q = p\overline{p}

has only real coefficients, and, if z is a zero of q, then either z or its conjugate is a root of p. Here, \overline{p} is the polynomial obtained by replacing each coefficient of p with its complex conjugate; the roots of \overline{p} are exactly the complex conjugates of the roots of p.
Many non-algebraic proofs of the theorem use the fact (sometimes called the "growth lemma") that a polynomial function p(z) of degree n whose dominant coefficient is 1 behaves like zn when |z| is large enough. More precisely, there is some positive real number R such that
\tfrac{1}{2}|z^n| < |p(z)| < \tfrac{3}{2}|z^n|
when |z| > R.
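As a numerical spot check (an arbitrary example polynomial and radius, not part of the lemma's proof), one can test the two inequalities on a large circle:

```python
# Spot check of the growth lemma for one monic quartic on the circle |z| = 100.
import numpy as np

coeffs = [1, -3, 2j, 5, -1]        # p(z) = z^4 - 3z^3 + 2i z^2 + 5z - 1, leading coefficient 1
n = len(coeffs) - 1
R = 100.0                          # a radius that is "large enough" for this particular p

z = R * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 1000))
absp = np.abs(np.polyval(coeffs, z))

print(np.all(0.5 * R**n < absp), np.all(absp < 1.5 * R**n))   # True True
```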
Even without using complex numbers, it is possible to show that a real polynomial p(x) of degree n > 2 with p(0) ≠ 0 can always be divided by some quadratic polynomial with real coefficients.[10] In other words, for some real numbers a and b, the coefficients of the linear remainder on dividing p(x) by x^2 − ax − b simultaneously become zero.
p(x) = (x^2 - ax - b)\,q(x) + x\,R_{p(x)}(a,b) + S_{p(x)}(a,b),
where q(x) is a polynomial of degree n − 2. The coefficients R_{p(x)}(a, b) and S_{p(x)}(a, b) are independent of x and completely determined by the coefficients of p(x); as functions of a and b, they are bivariate polynomials. In the flavor of Gauss's first (incomplete) proof of this theorem from 1799, the key is to show that for any sufficiently large negative value of b, all the roots of both R_{p(x)}(a, b) and S_{p(x)}(a, b) in the variable a are real and alternate with each other (the interlacing property). Utilizing a Sturm-like chain that contains R_{p(x)}(a, b) and S_{p(x)}(a, b) as consecutive terms, interlacing in the variable a can be shown for all consecutive pairs in the chain whenever b has a sufficiently large negative value. As S_{p(x)}(a, b = 0) = p(0) has no roots in a, the interlacing of R_{p(x)}(a, b) and S_{p(x)}(a, b) in the variable a fails at b = 0. Topological arguments can be applied to the interlacing property to show that the loci of the roots of R_{p(x)}(a, b) and S_{p(x)}(a, b) must intersect for some real a and b < 0.
Find a closed disk D of radius r centered at the origin such that |p(z)| > |p(0)| whenever |z| ≥ r. The minimum of |p(z)| on D, which must exist since D is compact, is therefore achieved at some point z0 in the interior of D, but not at any point of its boundary. The maximum modulus principle applied to 1/p(z) implies that p(z0) = 0. In other words, z0 is a zero of p(z).
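A crude numerical illustration of this argument (a grid search over the disk for an arbitrarily chosen cubic; it is not a proof):

```python
# The minimum of |p(z)| over a large closed disk is attained at (a grid point near) a zero of p.
import numpy as np

coeffs = [1, -1, 2, -2]            # p(z) = z^3 - z^2 + 2z - 2 = (z - 1)(z^2 + 2)
r = 5.0                            # chosen so that |p(z)| > |p(0)| whenever |z| >= r

xs = np.linspace(-r, r, 801)
X, Y = np.meshgrid(xs, xs)
Z = (X + 1j * Y).ravel()
Z = Z[np.abs(Z) <= r]              # points of the closed disk D

vals = np.abs(np.polyval(coeffs, Z))
print(Z[np.argmin(vals)], vals.min())   # close to the root z = 1, with |p| close to 0
```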
A variation of this proof does not require the maximum modulus principle (in fact, a similar argument also gives a proof of the maximum modulus principle for holomorphic functions). Continuing from before the principle was invoked, if a := p(z0) ≠ 0, then, expanding p(z) in powers of z − z0, we can write
p(z) = a + c_k (z - z_0)^k + c_{k+1} (z - z_0)^{k+1} + \cdots + c_n (z - z_0)^n.
Here, the cj are simply the coefficients of the polynomial z → p(z + z0) after expansion, and k is the index of the first non-zero coefficient following the constant term. For z sufficiently close to z0 this function has behavior asymptotically similar to the simpler polynomial
q(z) = a + c_k (z - z_0)^k,

in the sense that

\left|\frac{p(z) - q(z)}{(z - z_0)^{k+1}}\right| \le M
for some positive constant M in some neighborhood of z0. Therefore, if we define
\theta_0 = (\arg(a) + \pi - \arg(c_k))/k

and z = z_0 + r e^{i\theta_0} for a positive real number r, then for r sufficiently small

\begin{align} |p(z)| &\le |q(z)| + r^{k+1} \left|\frac{p(z) - q(z)}{r^{k+1}}\right| \\[4pt] &\le \left|a + (-1) c_k r^k e^{i(\arg(a) - \arg(c_k))}\right| + M r^{k+1} \\[4pt] &= |a| - |c_k| r^k + M r^{k+1}. \end{align}
When r is sufficiently close to 0 this upper bound for |p(z)| is strictly smaller than |a|, contradicting the definition of z0. Geometrically, we have found an explicit direction θ0 such that if one approaches z0 from that direction one can obtain values p(z) smaller in absolute value than |p(z0)|.
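The descent direction θ0 can be exhibited numerically. In the sketch below the cubic and the base point z0 are arbitrary choices made only for illustration; the code computes the Taylor coefficients at z0, forms θ0 as above, and checks that small steps in that direction decrease |p|:

```python
# Stepping from z0 in the direction theta0 decreases |p|, as the argument above predicts.
import math
import numpy as np

p = np.poly1d([1, 0, 2, 1 + 1j])           # p(z) = z^3 + 2z + (1 + i), an arbitrary example
z0 = 0.5 - 0.25j                            # an arbitrary base point; here p(z0) != 0

# Taylor coefficients c_j of p(z0 + h) = sum_j c_j h^j
c = [p(z0)] + [p.deriv(j)(z0) / math.factorial(j) for j in range(1, p.order + 1)]

a = c[0]
k = next(j for j in range(1, len(c)) if abs(c[j]) > 1e-12)   # first nonzero coefficient after a
theta0 = (np.angle(a) + np.pi - np.angle(c[k])) / k

for r in (1e-1, 1e-2, 1e-3):
    z = z0 + r * np.exp(1j * theta0)
    print(r, abs(p(z)) < abs(p(z0)))        # True once r is small enough
```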
Another analytic proof can be obtained along this line of thought observing that, since |p(z)| > |p(0)| outside D, the minimum of |p(z)| on the whole complex plane is achieved at z0. If |p(z0)| > 0, then 1/p is a bounded holomorphic function in the entire complex plane since, for each complex number z, |1/p(z)| ≤ |1/p(z0)|. Applying Liouville's theorem, which states that a bounded entire function must be constant, this would imply that 1/p is constant and therefore that p is constant. This gives a contradiction, and hence p(z0) = 0.[11]
Yet another analytic proof uses the argument principle. Let R be a positive real number large enough so that every root of p(z) has absolute value smaller than R; such a number must exist because every non-constant polynomial function of degree n has at most n zeros. For each r > R, consider the number
\frac{1}{2\pi i} \int_{c(r)} \frac{p'(z)}{p(z)}\, dz,
where c(r) is the circle centered at 0 with radius r oriented counterclockwise; then the argument principle says that this number is the number N of zeros of p(z) in the open ball centered at 0 with radius r, which, since r > R, is the total number of zeros of p(z). On the other hand, the integral of n/z along c(r) divided by 2πi is equal to n. But the difference between the two numbers is
\frac{1}{2\pi i} \int_{c(r)} \left(\frac{p'(z)}{p(z)} - \frac{n}{z}\right) dz = \frac{1}{2\pi i} \int_{c(r)} \frac{z p'(z) - n p(z)}{z\, p(z)}\, dz.
The numerator of the rational expression being integrated has degree at most n − 1 and the degree of the denominator is n + 1. Therefore, the number above tends to 0 as r → +∞. But the number is also equal to N − n and so N = n.
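A direct numerical evaluation of this contour integral (an arbitrary quartic and a simple Riemann-sum quadrature, for illustration only) indeed returns the degree:

```python
# (1/(2*pi*i)) * integral of p'(z)/p(z) over a large circle equals deg p = 4.
import numpy as np

coeffs = [1, -2, 3, -4j, 5]                  # p(z) = z^4 - 2z^3 + 3z^2 - 4i z + 5
dcoeffs = np.polyder(coeffs)                 # p'(z)

r = 100.0                                    # larger than the modulus of every root of p
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
z = r * np.exp(1j * theta)

integrand = np.polyval(dcoeffs, z) / np.polyval(coeffs, z) * 1j * z   # p'/p dz with dz = i z dtheta
print(np.mean(integrand) / 1j)               # approximately 4 + 0i
```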
Another complex-analytic proof can be given by combining linear algebra with Cauchy's integral theorem. To establish that every complex polynomial of degree n > 0 has a zero, it suffices to show that every complex square matrix of size n > 0 has a (complex) eigenvalue.[12] The proof of the latter statement is by contradiction.
Let A be a complex square matrix of size n > 0 and let In be the unit matrix of the same size. Assume A has no eigenvalues. Consider the resolvent function
R(z) = (z I_n - A)^{-1},
which is a meromorphic function on the complex plane with values in the vector space of matrices. The eigenvalues of A are precisely the poles of R(z). Since, by assumption, A has no eigenvalues, the function R(z) is an entire function and Cauchy's integral theorem implies that
\int_{c(r)} R(z)\, dz = 0.
On the other hand, R(z) expanded as a geometric series gives:
R(z) = z^{-1}\left(I_n - z^{-1} A\right)^{-1} = z^{-1} \sum_{k=0}^{\infty} \frac{1}{z^k} A^k.
This formula is valid outside the closed disc of radius \|A\| (the operator norm of A). Let r > \|A\|. Then
\int_{c(r)} R(z)\, dz = \sum_{k=0}^{\infty} \int_{c(r)} \frac{dz}{z^{k+1}}\, A^k = 2\pi i\, I_n
(in which only the summand k = 0 has a nonzero integral). This is a contradiction, and so A has an eigenvalue.
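The equivalence used at the start of this proof also runs in the opposite direction: the companion matrix of a monic polynomial has that polynomial as its characteristic polynomial, so polynomial roots are matrix eigenvalues. This is, in fact, how numpy.roots computes roots; a small sketch with an arbitrarily chosen cubic:

```python
# Roots of a monic polynomial as eigenvalues of its companion matrix.
import numpy as np

coeffs = np.array([1.0, -6.0, 11.0, -6.0])   # p(z) = z^3 - 6z^2 + 11z - 6 = (z-1)(z-2)(z-3)
n = len(coeffs) - 1

C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)                   # ones on the subdiagonal
C[:, -1] = -coeffs[:0:-1]                    # last column: -a_0, -a_1, ..., -a_{n-1}

print(np.sort_complex(np.linalg.eigvals(C)))  # approximately [1, 2, 3]
print(np.sort_complex(np.roots(coeffs)))      # numpy's root finder gives the same values
```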
Finally, Rouché's theorem gives perhaps the shortest proof of the theorem: on a circle |z| = R with R sufficiently large, |p(z) − a_n z^n| < |a_n z^n| (the left-hand side has degree at most n − 1), so p(z) and a_n z^n have the same number of zeros, counted with multiplicities, inside the circle, namely n.
Suppose the minimum of |p(z)| on the whole complex plane is achieved at z_0; it was seen in the proof using Liouville's theorem that such a minimum must exist. We can write p(z) as a polynomial in z − z_0: there is some natural number k and there are some complex numbers c_k, c_{k+1}, ..., c_n such that c_k ≠ 0 and:
p(z) = p(z_0) + c_k (z - z_0)^k + c_{k+1} (z - z_0)^{k+1} + \cdots + c_n (z - z_0)^n.
If p(z_0) is nonzero, let a be a kth root of −p(z_0)/c_k. Then, for positive t, p(z_0 + ta) = p(z_0) + c_k t^k a^k + \cdots = (1 - t^k)\,p(z_0) + O(t^{k+1}), so for t positive and sufficiently small |p(z_0 + ta)| < |p(z_0)|, which is impossible, since |p(z_0)| is the minimum of |p| on the whole complex plane.
For another topological proof by contradiction, suppose that the polynomial p(z) has no roots, and consequently is never equal to 0. Think of the polynomial as a map from the complex plane into the complex plane. It maps any circle |z| = R into a closed loop, a curve P(R). We will consider what happens to the winding number of P(R) at the extremes when R is very large and when R = 0. When R is a sufficiently large number, then the leading term zn of p(z) dominates all other terms combined; in other words,
\left|z^n\right| > \left|a_{n-1} z^{n-1} + \cdots + a_0\right|.
When z traverses the circle Re^{i\theta} (0 \le \theta \le 2\pi) once counterclockwise, z^n = R^n e^{in\theta} winds n times counterclockwise around the origin (its argument runs from 0 to 2\pi n), and by the inequality above the loop P(R) does the same, so its winding number about 0 is n. At the other extreme, when R = 0 the curve P(0) is the single point p(0), which is nonzero because p has no roots; its winding number about 0 is therefore 0. Since P(R) deforms continuously as R varies and, by assumption, never passes through the origin, its winding number cannot change from n to 0. This contradiction shows that p must have a root.
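This winding-number dichotomy is easy to observe numerically. The sketch below (an arbitrary cubic and a crude argument-summing estimator, for illustration only) computes the winding number of P(R) for a large and a small radius:

```python
# Winding number of the image curve P(R) around 0: n for large R, 0 for small R.
import numpy as np

coeffs = [1, 2, -1, 3]                        # p(z) = z^3 + 2z^2 - z + 3, so n = 3 and p(0) != 0

def winding_number(radius, samples=100000):
    theta = np.linspace(0.0, 2.0 * np.pi, samples)
    w = np.polyval(coeffs, radius * np.exp(1j * theta))      # the closed curve P(R)
    return np.sum(np.angle(w[1:] / w[:-1])) / (2.0 * np.pi)  # total change of argument / 2*pi

print(round(winding_number(100.0)))   # 3  (large R: dominated by z^n)
print(round(winding_number(0.01)))    # 0  (small R: the curve stays near p(0) = 3)
```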
These proofs of the Fundamental Theorem of Algebra must make use of the following two facts about real numbers that are not algebraic but require only a small amount of analysis (more precisely, the intermediate value theorem in both cases):

- every polynomial with an odd degree and real coefficients has some real root;
- every non-negative real number has a square root.
The second fact, together with the quadratic formula, implies the theorem for real quadratic polynomials. In other words, algebraic proofs of the fundamental theorem actually show that if R is any real-closed field, then its extension C = R(\sqrt{-1}) is algebraically closed.
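For instance (a routine check, spelled out here for completeness): if x^2 + bx + c has real coefficients and b^2 - 4c < 0, the second fact supplies a real square root of 4c - b^2, and the quadratic formula then exhibits the two complex roots

x = \frac{-b \pm i\sqrt{4c - b^2}}{2}.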
As mentioned above, it suffices to check the statement "every non-constant polynomial p(z) with real coefficients has a complex root". This statement can be proved by induction on the greatest non-negative integer k such that 2^k divides the degree n of p(z). Let a be the coefficient of z^n in p(z) and let F be a splitting field of p(z) over C; in other words, the field F contains C and there are elements z_1, z_2, ..., z_n in F such that
p(z) = a(z - z_1)(z - z_2) \cdots (z - z_n).
If k = 0, then n is odd, and therefore p(z) has a real root. Now, suppose that n = 2^k m (with m odd and k > 0) and that the theorem is already proved when the degree of the polynomial has the form 2^{k-1} m′ with m′ odd. For a real number t, define:
q_t(z) = \prod_{1 \le i < j \le n} \left(z - z_i - z_j - t z_i z_j\right).
Then the coefficients of q_t(z) are symmetric polynomials in the z_i with real coefficients. Therefore, they can be expressed as polynomials with real coefficients in the elementary symmetric polynomials, that is, in −a_1, a_2, ..., (−1)^n a_n. So q_t(z) has in fact real coefficients. Furthermore, the degree of q_t(z) is n(n − 1)/2 = 2^{k−1} m(n − 1), and m(n − 1) is an odd number. So, using the induction hypothesis, q_t has at least one complex root; in other words, z_i + z_j + t z_i z_j is complex for two distinct elements i and j from {1, ..., n}. Since there are more real numbers than pairs (i, j), one can find distinct real numbers t and s such that z_i + z_j + t z_i z_j and z_i + z_j + s z_i z_j are complex (for the same i and j). So, both z_i + z_j and z_i z_j are complex numbers. It is easy to check that every complex number has a complex square root, thus every complex polynomial of degree 2 has a complex root by the quadratic formula. It follows that z_i and z_j are complex numbers, since they are roots of the quadratic polynomial z^2 − (z_i + z_j)z + z_i z_j.
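The degree count used in the induction can be spelled out (a routine verification, included here for convenience): q_t has one factor for each of the \binom{n}{2} pairs \{i, j\}, so

\deg q_t = \binom{n}{2} = \frac{n(n-1)}{2} = \frac{2^k m (n-1)}{2} = 2^{k-1}\, m(n-1),

and m(n − 1) is odd because m is odd and n − 1 = 2^k m − 1 is odd.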
Joseph Shipman showed in 2007 that the assumption that odd degree polynomials have roots is stronger than necessary; any field in which polynomials of prime degree have roots is algebraically closed (so "odd" can be replaced by "odd prime" and this holds for fields of all characteristics).[13] For axiomatization of algebraically closed fields, this is the best possible, as there are counterexamples if a single prime is excluded. However, these counterexamples rely on −1 having a square root. If we take a field where −1 has no square root, and every polynomial of degree n ∈ I has a root, where I is any fixed infinite set of odd numbers, then every polynomial f(x) of odd degree has a root (since (x^2 + 1)^k f(x) has a root, where k is chosen so that deg(f) + 2k ∈ I). Mohsen Aliabadi generalized Shipman's result in 2013, providing an independent proof that a sufficient condition for an arbitrary field (of any characteristic) to be algebraically closed is that it has a root for every polynomial of prime degree.[14]
Another algebraic proof of the fundamental theorem can be given using Galois theory. It suffices to show that C has no proper finite field extension.[15] Let K/C be a finite extension. Since the normal closure of K over R still has a finite degree over C (or R), we may assume without loss of generality that K is a normal extension of R (hence it is a Galois extension, as every algebraic extension of a field of characteristic 0 is separable). Let G be the Galois group of this extension, and let H be a Sylow 2-subgroup of G, so that the order of H is a power of 2, and the index of H in G is odd. By the fundamental theorem of Galois theory, there exists a subextension L of K/R such that Gal(K/L) = H. As [L:R] = [G:H] is odd, and there are no nonlinear irreducible real polynomials of odd degree, we must have L = R, thus [K:R] and [K:C] are powers of 2. Assuming by way of contradiction that [K:C] > 1, we conclude that the 2-group Gal(K/C) contains a subgroup of index 2, so there exists a subextension M of K/C of degree 2. However, C has no extension of degree 2, because every quadratic complex polynomial has a complex root, as mentioned above. This shows that [K:C] = 1, and therefore K = C, which completes the proof.
There exists still another way to approach the fundamental theorem of algebra, due to J. M. Almira and A. Romero: by Riemannian geometric arguments. The main idea here is to prove that the existence of a non-constant polynomial p(z) without zeros implies the existence of a flat Riemannian metric over the sphere S2. This leads to a contradiction since the sphere is not flat.
A Riemannian surface (M, g) is said to be flat if its Gaussian curvature, which we denote by K_g, is identically zero. Now, the Gauss–Bonnet theorem, when applied to the sphere S^2, asserts that
\int_{S^2} K_g \, dA = 4\pi,
which proves that the sphere is not flat.
Let us now assume that n > 0 and
p(z) = a_0 + a_1 z + \cdots + a_n z^n \ne 0
for each complex number z. Let us define
p^*(z) = z^n p\left(\tfrac{1}{z}\right) = a_0 z^n + a_1 z^{n-1} + \cdots + a_n.
Obviously, p*(z) ≠ 0 for all z in C. Consider the polynomial f(z) = p(z)p*(z). Then f(z) ≠ 0 for each z in C. Furthermore,
f\left(\tfrac{1}{w}\right) = p\left(\tfrac{1}{w}\right) p^*\left(\tfrac{1}{w}\right) = w^{-2n}\, p^*(w)\, p(w) = w^{-2n} f(w).
We can use this functional equation to prove that g, given by
g = \frac{1}{|f(w)|^{2/n}}\, |dw|^2
for w in C, and
g = \frac{1}{\left|f\left(\tfrac{1}{w}\right)\right|^{2/n} |w|^4}\, |dw|^2

for w ∈ S^2 \setminus \{0\}, is a well defined Riemannian metric over the sphere S^2 (which we identify with the extended complex plane C ∪ {∞}).
Now, a simple computation shows that
\forall w \in C: \quad K_g = \frac{1}{n}\, |f(w)|^{2/n}\, \Delta \log|f(w)| = \frac{1}{n}\, |f(w)|^{2/n}\, \Delta\, \mathrm{Re}(\log f(w)) = 0,
since the real part of an analytic function is harmonic. This proves that K_g = 0.
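The "simple computation" invoked here rests on the standard formula for the Gaussian curvature of a conformal metric; a brief sketch, included for convenience: writing g = e^{2u}|dw|^2 with u = -\tfrac{1}{n}\log|f(w)|, one has

K_g = -e^{-2u}\,\Delta u = |f(w)|^{2/n}\,\tfrac{1}{n}\,\Delta\log|f(w)|,

which is the expression displayed above.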
Since the fundamental theorem of algebra can be seen as the statement that the field of complex numbers is algebraically closed, it follows that any theorem concerning algebraically closed fields applies to the field of complex numbers. Here are a few more consequences of the theorem, which are either about the field of real numbers or the relationship between the field of real numbers and the field of complex numbers:

- The field of complex numbers is the algebraic closure of the field of real numbers.
- Every polynomial in one variable x with real coefficients is the product of a constant, polynomials of the form x + a with a real, and polynomials of the form x^2 + ax + b with a and b real and a^2 − 4b < 0 (which is the same thing as saying that the polynomial x^2 + ax + b has no real roots).
- Every algebraic extension of the field of real numbers is isomorphic either to the field of real numbers or to the field of complex numbers.
See main article: Properties of polynomial roots. While the fundamental theorem of algebra states a general existence result, it is of some interest, both from the theoretical and from the practical point of view, to have information on the location of the zeros of a given polynomial. The simplest result in this direction is a bound on the modulus: all zeros ζ of a monic polynomial

z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0

satisfy the inequality |ζ| ≤ R_∞, where

R_\infty := 1 + \max\{|a_0|, \ldots, |a_{n-1}|\}.
As stated, this is not yet an existence result but rather an example of what is called an a priori bound: it says that if there are solutions then they lie inside the closed disk of center the origin and radius R∞. However, once coupled with the fundamental theorem of algebra it says that the disk contains in fact at least one solution. More generally, a bound can be given directly in terms of any p-norm of the n-vector of coefficients
a := (a_0, a_1, \ldots, a_{n-1}), which is |ζ| ≤ R_p, where R_p is precisely the q-norm of the 2-vector (1, \|a\|_p), q being the conjugate exponent of p (that is, \tfrac{1}{p} + \tfrac{1}{q} = 1), for any 1 ≤ p ≤ ∞. Thus, the modulus of any solution is also bounded by
R_1 := \max\left\{1, \sum_{0\le k<n} |a_k|\right\},

R_p := \left[1 + \left(\sum_{0\le k<n} |a_k|^p\right)^{q/p}\right]^{1/q},
for 1 < p < ∞, and in particular
R_2 := \sqrt{\sum_{0\le k\le n} |a_k|^2}
(where we define a_n to mean 1, which is reasonable since 1 is indeed the n-th coefficient of our polynomial). The case of a generic polynomial of degree n,
P(z) := a_n z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0,
is of course reduced to the case of a monic polynomial by dividing all coefficients by a_n ≠ 0. Also, in case that 0 is not a root, i.e. a_0 ≠ 0, bounds from below on the roots ζ follow immediately as bounds from above on \tfrac{1}{\zeta}, that is, on the roots of

a_0 z^n + a_1 z^{n-1} + \cdots + a_{n-1} z + a_n.
Finally, the distance |\zeta - \zeta_0| from the roots ζ to any point \zeta_0 can be estimated from below and from above, seeing \zeta - \zeta_0 as zeros of the polynomial P(z + \zeta_0), whose coefficients are the Taylor expansion of P(z) at z = \zeta_0.
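Before turning to the proof of these bounds, a quick numerical illustration (an arbitrary monic quartic; numpy's root finder is used only for comparison):

```python
# Compare the a priori bounds R_inf, R_1, R_2 with the actual moduli of the zeros.
import numpy as np

a = np.array([2.0, -3.0, 0.5, 1.0])          # a_0, a_1, ..., a_{n-1} of a monic polynomial
n = len(a)

R_inf = 1.0 + np.max(np.abs(a))
R_1 = max(1.0, np.sum(np.abs(a)))
R_2 = np.sqrt(1.0 + np.sum(np.abs(a)**2))

roots = np.roots(np.concatenate(([1.0], a[::-1])))   # numpy wants coefficients highest-first
print("max |zeta|      =", np.max(np.abs(roots)))
print("R_inf, R_1, R_2 =", R_inf, R_1, R_2)          # each bound is >= max |zeta|
```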
Let ζ be a root of the polynomial
z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + a_0;
in order to prove the inequality |ζ| ≤ Rp we can assume, of course, |ζ| > 1. Writing the equation as
-\zeta^n = a_{n-1} \zeta^{n-1} + \cdots + a_1 \zeta + a_0,
and using Hölder's inequality we find
|\zeta|^n \le \|a\|_p \left\|\left(\zeta^{n-1}, \ldots, \zeta, 1\right)\right\|_q.
Now, if p = 1, this is
|\zeta|^n \le \|a\|_1 \max\left\{|\zeta|^{n-1}, \ldots, |\zeta|, 1\right\} = \|a\|_1 |\zeta|^{n-1},
thus
|\zeta| \le \max\{1, \|a\|_1\}.
In the case 1 < p ≤ ∞, taking into account the summation formula for a geometric progression, we have
|\zeta|^n \le \|a\|_p \left(|\zeta|^{q(n-1)} + \cdots + |\zeta|^q + 1\right)^{1/q} = \|a\|_p \left(\frac{|\zeta|^{qn} - 1}{|\zeta|^q - 1}\right)^{1/q} \le \|a\|_p \left(\frac{|\zeta|^{qn}}{|\zeta|^q - 1}\right)^{1/q},
thus
|\zeta|^{nq} \le \|a\|_p^q\, \frac{|\zeta|^{qn}}{|\zeta|^q - 1}
and simplifying,
|\zeta|^q \le 1 + \|a\|_p^q.
Therefore
|\zeta| \le \left\|\left(1, \|a\|_p\right)\right\|_q = R_p
holds, for all 1 ≤ p ≤ ∞.