Rouché's theorem, named after Eugène Rouché, states that for any two complex-valued functions f and g holomorphic inside some region K with closed contour ∂K, if |g(z)| < |f(z)| on ∂K, then f and f + g have the same number of zeros inside K, where each zero is counted as many times as its multiplicity. The theorem assumes that the contour ∂K is simple, that is, without self-intersections.
The theorem is usually used to simplify the problem of locating zeros, as follows. Given an analytic function, we write it as the sum of two parts, one of which is simpler and grows faster than (thus dominates) the other part. We can then locate the zeros by looking at only the dominating part. For example, the polynomial z^5 + 3z^3 + 7 has exactly 5 zeros in the disk |z| < 2, since |3z^3 + 7| ≤ 3·2^3 + 7 = 31 < 32 = |z^5| for every |z| = 2, and z^5, the dominating part, has five zeros (counted with multiplicity) in the disk.
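As a numerical sanity check (not part of the proof), one can compute the five roots directly and confirm that they all lie in the disk; the sketch below assumes NumPy is available:

```python
import numpy as np

# Coefficients of z^5 + 0z^4 + 3z^3 + 0z^2 + 0z + 7, highest degree first,
# as numpy.roots expects them.
coeffs = [1, 0, 3, 0, 0, 7]
roots = np.roots(coeffs)

# Count the roots inside the disk |z| < 2.
inside = [r for r in roots if abs(r) < 2]
print(len(inside))  # 5: all roots lie in |z| < 2, as Rouché's theorem predicts
```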
It is possible to provide an informal explanation of Rouché's theorem.
Let C be a closed, simple curve (i.e., not self-intersecting). Let h(z) = f(z) + g(z). If f and g are both holomorphic on the interior of C, then h must also be holomorphic on the interior of C. Then, with the conditions imposed above, Rouché's theorem in its original (and not symmetric) form says that if |f(z)| > |h(z) - f(z)| for every z on C, then f and h have the same number of zeros in the interior of C.
Notice that the condition |f(z)| > |h(z) - f(z)| means that for any z, the distance from f(z) to the origin is larger than the length of h(z) - f(z): for each point f(z) on the curve traced out by f, the segment joining it to the origin is longer than the segment joining f(z) to h(z). Informally we can say that the curve traced by f(z) is always closer to the curve traced by h(z) than it is to the origin.
The previous paragraph shows that h(z) must wind around the origin exactly as many times as f(z). The index of both curves around zero is therefore the same, so by the argument principle, f and h must have the same number of zeros inside C.
One popular, informal way to summarize this argument is as follows: If a person were to walk a dog on a leash around and around a tree, such that the distance between the person and the tree is always greater than the length of the leash, then the person and the dog go around the tree the same number of times.
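The dog-walking picture can be made concrete numerically. The sketch below (assuming NumPy) samples the circle |z| = 2 and computes the winding numbers around the origin of the curves traced by the dominating part f(z) = z^5 (the person) and by h(z) = z^5 + 3z^3 + 7 (the dog):

```python
import numpy as np

def winding_number(vals):
    """Winding number around 0 of a densely sampled closed curve,
    computed by summing argument increments, each wrapped to [-pi, pi)."""
    args = np.angle(vals)
    dargs = np.diff(np.concatenate([args, args[:1]]))   # close the curve
    dargs = (dargs + np.pi) % (2 * np.pi) - np.pi       # wrap increments
    return round(dargs.sum() / (2 * np.pi))

t = np.linspace(0, 2 * np.pi, 4001)
z = 2 * np.exp(1j * t[:-1])     # the circle |z| = 2, traversed once

f = z**5                        # the "person": the dominating part
h = z**5 + 3 * z**3 + 7         # the "dog": the full polynomial

print(winding_number(f), winding_number(h))  # 5 5
```

Both winding numbers come out to 5, matching the five zeros inside the disk.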
Consider the polynomial z^2 + 2az + b^2 with a > b > 0. By the quadratic formula it has two zeros at -a ± √(a^2 - b^2). Rouché's theorem can be used to locate them more precisely: since |z^2 + b^2| ≤ 2b^2 < 2a|z| = |2az| for every |z| = b, and 2az has exactly one zero inside the disk, Rouché's theorem says that the polynomial has exactly one zero inside the disk |z| < b. Since -a - √(a^2 - b^2) clearly lies outside the disk, we conclude that the zero inside it is -a + √(a^2 - b^2).
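This can be checked numerically for any admissible pair a > b > 0; the sketch below (assuming NumPy) uses a = 2, b = 1:

```python
import numpy as np

a, b = 2.0, 1.0
roots = np.roots([1, 2 * a, b**2])   # zeros of z^2 + 2az + b^2

# Exactly one zero lies in the disk |z| < b ...
inside = [r for r in roots if abs(r) < b]
print(len(inside))  # 1

# ... and it is the root -a + sqrt(a^2 - b^2).
print(np.isclose(inside[0], -a + np.sqrt(a**2 - b**2)))  # True
```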
In general, a polynomial f(z) = a_n z^n + … + a_0 has exactly k zeros (counted with multiplicity) in the disk B(0, r) whenever |a_k| r^k > Σ_{j ≠ k} |a_j| r^j for some r > 0 and some k in {0, 1, …, n}, since the single term a_k z^k then dominates the rest of the polynomial on |z| = r.
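The coefficient test above is straightforward to implement. The helper below is an illustrative sketch (plain Python, no external libraries; the function name is ours, not standard):

```python
def dominated_zero_count(coeffs, r):
    """coeffs[j] is the coefficient a_j of z^j. If |a_k| r^k exceeds the
    sum of |a_j| r^j over all j != k for some k, return that k: by
    Rouché's theorem the polynomial then has exactly k zeros in |z| < r.
    Return None if no single term dominates at this radius."""
    terms = [abs(a) * r**j for j, a in enumerate(coeffs)]
    total = sum(terms)
    for k, t in enumerate(terms):
        if t > total - t:
            return k
    return None

# f(z) = z^5 + 3z^3 + 7: the leading term dominates at r = 2,
# the constant term dominates at r = 1.
coeffs = [7, 0, 0, 3, 0, 1]
print(dominated_zero_count(coeffs, 2))  # 5 zeros in |z| < 2
print(dominated_zero_count(coeffs, 1))  # 0 zeros in |z| < 1
```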
This sort of argument can be useful in locating residues when one applies Cauchy's residue theorem.
Rouché's theorem can also be used to give a short proof of the fundamental theorem of algebra. Let p(z) = a_n z^n + … + a_0 with a_n ≠ 0, and choose R > 0 so large that |a_{n-1} z^{n-1} + … + a_0| ≤ |a_{n-1}| R^{n-1} + … + |a_0| < |a_n| R^n = |a_n z^n| for |z| = R. Since a_n z^n has n zeros (all at the origin, counted with multiplicity) inside the disk |z| < R, it follows from Rouché's theorem that p also has n zeros inside the disk for every sufficiently large R > 0.
One advantage of this proof over the others is that it shows not only that a polynomial must have a zero, but also that the number of its zeros is equal to its degree (counting, as usual, multiplicity).
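One concrete way to choose R is the classical Cauchy bound R = 1 + max_{j<n} |a_j| / |a_n|, which makes the leading term dominate the rest on |z| = R, so Rouché's theorem applies there. The sketch below (assuming NumPy; the function name is ours) checks this radius against the earlier example:

```python
import numpy as np

def zero_enclosing_radius(coeffs):
    """Cauchy-style bound: every zero of a_n z^n + ... + a_0 satisfies
    |z| < R with R = 1 + max_{j<n} |a_j| / |a_n|, and on |z| = R the
    leading term dominates, as the proof above requires.
    coeffs[j] is the coefficient a_j of z^j."""
    a = np.asarray(coeffs, dtype=complex)
    return 1 + max(abs(a[:-1])) / abs(a[-1])

coeffs = [7, 0, 0, 3, 0, 1]             # z^5 + 3z^3 + 7 again
R = zero_enclosing_radius(coeffs)       # here R = 1 + 7/1 = 8
roots = np.roots(coeffs[::-1])          # np.roots wants highest degree first
print(all(abs(r) < R for r in roots))   # True: all zeros lie inside |z| < R
```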
Another use of Rouché's theorem is to prove the open mapping theorem for analytic functions. We refer to the article for the proof.
A stronger version of Rouché's theorem was published by Theodor Estermann in 1962.[1] It states: let K ⊂ G be a bounded region with continuous boundary ∂K. Two holomorphic functions f, g ∈ H(G) have the same number of zeros (counted with multiplicity) in K if the strict inequality |f(z) - g(z)| < |f(z)| + |g(z)| holds on the boundary ∂K.
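As an illustration (assuming NumPy), Estermann's hypothesis can be verified numerically for the earlier pair f(z) = z^5 + 3z^3 + 7 and g(z) = z^5 on the circle |z| = 2:

```python
import numpy as np

# Sample the boundary |z| = 2 and test the strict inequality
# |f(z) - g(z)| < |f(z)| + |g(z)| at every sample point.
z = 2 * np.exp(1j * np.linspace(0, 2 * np.pi, 2000))
f = z**5 + 3 * z**3 + 7
g = z**5

print(np.all(abs(f - g) < abs(f) + abs(g)))   # True: the hypothesis holds

# Both functions then have the same number of zeros in |z| < 2.
nf = sum(abs(r) < 2 for r in np.roots([1, 0, 3, 0, 0, 7]))
ng = 5    # z^5 has a zero of multiplicity 5 at the origin
print(nf == ng)                               # True
```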
The original version of Rouché's theorem then follows from this symmetric version applied to the functions f + g and f, together with the trivial inequality |f(z) + g(z)| ≥ 0 (in fact this inequality is strict, since f(z) + g(z) = 0 for some z ∈ ∂K would imply |g(z)| = |f(z)|, contradicting the hypothesis).
The statement can be understood intuitively as follows. By considering -g in place of g (which does not change its zeros), the condition can be rewritten as |f(z) + g(z)| < |f(z)| + |g(z)| for z ∈ ∂K. Since the inequality |f(z) + g(z)| ≤ |f(z)| + |g(z)| always holds by the triangle inequality, this is equivalent to saying that |f(z) + g(z)| ≠ |f(z)| + |g(z)| on ∂K. Equality in the triangle inequality occurs exactly when f(z) and g(z) point in the same direction, so the condition means that for every z ∈ ∂K, arg f(z) ≠ arg g(z).
Intuitively, if the values of f and g never point in the same direction as z runs along ∂K, then f(z) and g(z) must wind around the origin the same number of times.
Let C : [0, 1] → ℂ be a closed curve whose image is the boundary ∂K. The hypothesis implies that f has no zeros on ∂K, so by the argument principle the number of zeros of f in K equals the winding number of the closed curve f ∘ C around the origin, and similarly for g. The hypothesis further ensures that g(z) is never a negative real multiple of f(z) for z ∈ ∂K, so the segment joining f(C(x)) to g(C(x)) never passes through the origin; hence H(t, x) = (1 - t) f(C(x)) + t g(C(x)) is a homotopy between the curves f ∘ C and g ∘ C avoiding the origin. Since the winding number is a homotopy invariant, f ∘ C and g ∘ C have the same winding number around 0, and therefore f and g have the same number of zeros in K.
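The key step, that the straight-line homotopy never passes through the origin, can likewise be checked numerically for a concrete pair (assuming NumPy; f, g, and the circle |z| = 2 are the earlier example, not part of the proof):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
zc = 2 * np.exp(1j * x)          # parametrization C(x) of the circle |z| = 2
fz = zc**5 + 3 * zc**3 + 7       # f along the boundary
gz = zc**5                       # g along the boundary

t = np.linspace(0, 1, 101)[:, None]   # homotopy parameter, as a column
H = (1 - t) * fz + t * gz             # H(t, x): every point of every segment

print(np.min(abs(H)) > 0)             # True: the homotopy avoids the origin
```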
Titchmarsh, Edward Charles (1939). The Theory of Functions (2nd ed.). Oxford University Press. pp. 117–119, 198–203. ISBN 0-19-853349-7.