In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory, combinatorics, as well as further applications outside mathematics, including general relativity.
Let x1, ..., xn be variables, and for k ≥ 1 denote by pk(x1, ..., xn) the k-th power sum:

p_k(x_1,\ldots,x_n)=\sum_{i=1}^n x_i^k = x_1^k+x_2^k+\cdots+x_n^k,
and for k ≥ 0 denote by ek(x1, ..., xn) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so

\begin{align}
e_0(x_1,\ldots,x_n)&=1,\\
e_1(x_1,\ldots,x_n)&=x_1+x_2+\cdots+x_n,\\
e_2(x_1,\ldots,x_n)&=\sum_{1\le i<j\le n}x_i x_j,\\
&\ \ \vdots\\
e_n(x_1,\ldots,x_n)&=x_1 x_2\cdots x_n,\\
e_k(x_1,\ldots,x_n)&=0,\quad\text{for } k>n.
\end{align}
Then Newton's identities can be stated as
k\,e_k(x_1,\ldots,x_n)=\sum_{i=1}^k (-1)^{i-1} e_{k-i}(x_1,\ldots,x_n)\,p_i(x_1,\ldots,x_n),

valid for all n ≥ k ≥ 1.
Also, one has
0=\sum_{i=k-n}^{k} (-1)^{i-1} e_{k-i}(x_1,\ldots,x_n)\,p_i(x_1,\ldots,x_n),

for all k > n ≥ 1.
Concretely, one gets for the first few values of k:
\begin{align}
e_1(x_1,\ldots,x_n)&=p_1(x_1,\ldots,x_n),\\
2e_2(x_1,\ldots,x_n)&=e_1(x_1,\ldots,x_n)\,p_1(x_1,\ldots,x_n)-p_2(x_1,\ldots,x_n),\\
3e_3(x_1,\ldots,x_n)&=e_2(x_1,\ldots,x_n)\,p_1(x_1,\ldots,x_n)-e_1(x_1,\ldots,x_n)\,p_2(x_1,\ldots,x_n)+p_3(x_1,\ldots,x_n).
\end{align}
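As an illustration, these first identities can be checked numerically; the following Python sketch uses three arbitrarily chosen sample values (an assumption made only for this demonstration):

```python
# Numeric check of the first three Newton identities for sample values
# (the values 2, 3, 5 are an arbitrary choice for this illustration).
from itertools import combinations
from math import prod, isclose

xs = [2.0, 3.0, 5.0]

def p(k):
    """k-th power sum p_k = x_1^k + ... + x_n^k."""
    return sum(x ** k for x in xs)

def e(k):
    """k-th elementary symmetric polynomial (sum over k-subsets)."""
    return sum(prod(c) for c in combinations(xs, k))

assert isclose(1 * e(1), p(1))
assert isclose(2 * e(2), e(1) * p(1) - p(2))
assert isclose(3 * e(3), e(2) * p(1) - e(1) * p(2) + p(3))
```

Any other choice of values (or number of variables) would verify the same relations, since they are polynomial identities.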
The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has
\begin{align}
e_1&=p_1,\\
2e_2&=e_1p_1-p_2=p_1^2-p_2,\\
3e_3&=e_2p_1-e_1p_2+p_3=\tfrac12 p_1^3-\tfrac32 p_1p_2+p_3,\\
4e_4&=e_3p_1-e_2p_2+e_1p_3-p_4=\tfrac16 p_1^4-p_1^2p_2+\tfrac43 p_1p_3+\tfrac12 p_2^2-p_4,
\end{align}
and so on; here the left-hand sides never become zero. These equations allow one to express the ei recursively in terms of the pk; to be able to do the inverse, one may rewrite them as
\begin{align}
p_1&=e_1,\\
p_2&=e_1p_1-2e_2=e_1^2-2e_2,\\
p_3&=e_1p_2-e_2p_1+3e_3=e_1^3-3e_1e_2+3e_3,\\
p_4&=e_1p_3-e_2p_2+e_3p_1-4e_4=e_1^4-4e_1^2e_2+4e_1e_3+2e_2^2-4e_4,\\
&\ \ \vdots
\end{align}
In general, we have
p_k(x_1,\ldots,x_n)=(-1)^{k-1}k\,e_k(x_1,\ldots,x_n)+\sum_{i=1}^{k-1}(-1)^{k-1+i}e_{k-i}(x_1,\ldots,x_n)\,p_i(x_1,\ldots,x_n),

valid for all n ≥ k ≥ 1.
Also, one has
p_k(x_1,\ldots,x_n)=\sum_{i=k-n}^{k-1}(-1)^{k-1+i}e_{k-i}(x_1,\ldots,x_n)\,p_i(x_1,\ldots,x_n),
for all k > n ≥ 1.
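Both recursions for pk can be sketched in code; the helper below (its name is ours, not standard) handles the case k > n automatically by taking e_j = 0 for j > n:

```python
# Compute p_1..p_kmax from e_1..e_n via Newton's recursion
# p_k = (-1)^{k-1} k e_k + sum_{i=1}^{k-1} (-1)^{k-1+i} e_{k-i} p_i,
# with e_j = 0 for j > n covering the case k > n. A minimal sketch.
def power_sums_from_elementary(es, kmax):
    n = len(es)

    def e(j):
        return es[j - 1] if 1 <= j <= n else 0

    ps = [None]  # 1-indexed: ps[k] will hold p_k
    for k in range(1, kmax + 1):
        s = (-1) ** (k - 1) * k * e(k)
        s += sum((-1) ** (k - 1 + i) * e(k - i) * ps[i] for i in range(1, k))
        ps.append(s)
    return ps[1:]

# Roots 2, 3, 5 have e_1 = 10, e_2 = 31, e_3 = 30; p_4 exercises k > n.
print(power_sums_from_elementary([10, 31, 30], 4))  # [10, 38, 160, 722]
```

Here 722 is indeed 2^4 + 3^4 + 5^4, confirming the k > n form of the recursion.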
The polynomial with roots xi may be expanded as
\prod_{i=1}^n (x-x_i)=\sum_{k=0}^n (-1)^k e_k x^{n-k},

where the coefficients e_k(x_1,\ldots,x_n) are the elementary symmetric polynomials defined above. Given the power sums of the roots

p_k(x_1,\ldots,x_n)=\sum_{i=1}^n x_i^k,

the coefficients of the polynomial with roots x_1,\ldots,x_n may be expressed recursively in terms of the power sums as

\begin{align}
e_0&=1,\\[4pt]
-e_1&=-p_1,\\[4pt]
e_2&=\tfrac12(e_1p_1-p_2),\\[4pt]
-e_3&=-\tfrac13(e_2p_1-e_1p_2+p_3),\\[4pt]
e_4&=\tfrac14(e_3p_1-e_2p_2+e_1p_3-p_4),\\
&\ \ \vdots
\end{align}
Formulating polynomials in this way is useful in using the method of Delves and Lyness[1] to find the zeros of an analytic function.
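As a sketch of this recursion (the helper name is ours), the coefficients ek can be recovered from the power sums with exact rational arithmetic:

```python
# Recover e_0..e_n from p_1..p_n via e_k = (1/k) sum_{i=1}^k (-1)^{i-1} e_{k-i} p_i;
# a minimal sketch using exact rationals.
from fractions import Fraction

def elementary_from_power_sums(ps):
    es = [Fraction(1)]  # e_0 = 1
    for k in range(1, len(ps) + 1):
        s = sum((-1) ** (i - 1) * es[k - i] * Fraction(ps[i - 1])
                for i in range(1, k + 1))
        es.append(s / k)
    return es

# The power sums of the roots 2, 3, 5 are 10, 38, 160; this recovers
# e_0..e_3 = 1, 10, 31, 30, i.e. the monic polynomial x^3 - 10x^2 + 31x - 30.
assert elementary_from_power_sums([10, 38, 160]) == [1, 10, 31, 30]
```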
When the polynomial above is the characteristic polynomial of a matrix A (in particular when A is the companion matrix of the polynomial), the roots xi are the eigenvalues of the matrix, counted with their algebraic multiplicity. For any positive integer k, the matrix Ak has as eigenvalues the powers xik, and each eigenvalue xi of A contributes its multiplicity to that of the eigenvalue xik of Ak. Then the coefficients of the characteristic polynomial of Ak are given by the elementary symmetric polynomials in those powers xik. In particular, the sum of the xik, which is the k-th power sum pk of the roots of the characteristic polynomial of A, is given by its trace: pk = tr(Ak).

The Newton identities now relate the traces of the powers Ak to the coefficients of the characteristic polynomial of A. Using them in reverse to express the elementary symmetric polynomials in terms of the power sums, they can be used to find the characteristic polynomial by computing only the powers Ak and their traces.

This computation requires computing the traces of matrix powers Ak and solving a triangular system of equations. Rearranging the computations into an efficient form leads to the Faddeev–LeVerrier algorithm (1840); a fast parallel implementation of it is due to L. Csanky (1976). Its disadvantage is that it requires division by integers, so in general the field should have characteristic 0.
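A minimal sketch of the trace-based computation described above, in plain Python with exact rationals (this is the naive form, not the optimized Faddeev–LeVerrier rearrangement; the helper names are ours):

```python
# Characteristic-polynomial coefficients e_k(eigenvalues) from traces of
# matrix powers, via Newton's recursion k e_k = sum_{i=1}^k (-1)^{i-1} e_{k-i} p_i.
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly_elementary(A):
    n = len(A)
    A = [[Fraction(x) for x in row] for row in A]
    Ak, ps = A, []
    for _ in range(n):
        ps.append(sum(Ak[i][i] for i in range(n)))  # p_k = tr(A^k)
        Ak = mat_mul(Ak, A)
    es = [Fraction(1)]
    for k in range(1, n + 1):
        es.append(sum((-1) ** (i - 1) * es[k - i] * ps[i - 1]
                      for i in range(1, k + 1)) / k)
    return es[1:]

# [[2, 1], [1, 2]] has eigenvalues 1 and 3, so e_1 = 4, e_2 = 3 and the
# characteristic polynomial is x^2 - 4x + 3.
assert char_poly_elementary([[2, 1], [1, 2]]) == [4, 3]
```

The division by k in the recursion is exactly the division by integers mentioned above, which is why characteristic 0 is assumed.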
For a given n, the elementary symmetric polynomials ek(x1, ..., xn) for k = 1, ..., n form an algebraic basis for the space of symmetric polynomials in x1, ..., xn: every polynomial expression in the xi that is invariant under all permutations of those variables is given by a polynomial expression in those elementary symmetric polynomials, and this expression is unique up to equivalence of polynomial expressions. This is a general fact known as the fundamental theorem of symmetric polynomials, and Newton's identities provide explicit formulae in the case of power sum symmetric polynomials. Applied to the monic polynomial with all coefficients ak considered as free parameters, this means that every symmetric polynomial expression S(x1, ..., xn) in its roots can be expressed instead as a polynomial expression P(a1, ..., an) in terms of its coefficients only, in other words without requiring knowledge of the roots. This fact also follows from general considerations in Galois theory (one views the ak as elements of a base field with roots in an extension field whose Galois group permutes them according to the full symmetric group, and the field fixed under all elements of the Galois group is the base field).
The Newton identities also permit expressing the elementary symmetric polynomials in terms of the power sum symmetric polynomials, showing that any symmetric polynomial can also be expressed in the power sums. In fact the first n power sums also form an algebraic basis for the space of symmetric polynomials.
There are a number of (families of) identities that, while they should be distinguished from Newton's identities, are very closely related to them.
Denoting by hk the complete homogeneous symmetric polynomial (that is, the sum of all monomials of degree k), the power sum polynomials also satisfy identities similar to Newton's identities, but not involving any minus signs. Expressed as identities in the ring of symmetric functions, they read
k\,h_k=\sum_{i=1}^k h_{k-i}\,p_i,

valid for all n ≥ k ≥ 1. In contrast to Newton's identities, the left-hand sides do not become zero for large k, and the right-hand sides contain ever more nonzero terms. For the first few values of k, one has
\begin{align}
h_1&=p_1,\\
2h_2&=h_1p_1+p_2,\\
3h_3&=h_2p_1+h_1p_2+p_3.
\end{align}
These relations can be justified by an argument analogous to the one given above by comparing coefficients in power series, based in this case on the generating function identity

\sum_{k=0}^\infty h_k(x_1,\ldots,x_n)\,t^k=\prod_{i=1}^n \frac{1}{1-x_i t}.
Proofs of Newton's identities, like those given below, cannot be easily adapted to prove these variants of those identities.
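The variant identities can nevertheless be checked numerically in the same way as before (the sample values are again an arbitrary choice made for this illustration):

```python
# Check k h_k = sum_{i=1}^k h_{k-i} p_i for k = 1, 2, 3 with sample values.
from itertools import combinations_with_replacement
from math import prod, isclose

xs = [2.0, 3.0, 5.0]  # arbitrary sample values

def p(k):
    return sum(x ** k for x in xs)

def h(k):
    """Complete homogeneous symmetric polynomial: sum of all degree-k monomials."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

assert isclose(1 * h(1), h(0) * p(1))
assert isclose(2 * h(2), h(1) * p(1) + h(0) * p(2))
assert isclose(3 * h(3), h(2) * p(1) + h(1) * p(2) + h(0) * p(3))
```

Note that `combinations_with_replacement` enumerates monomials with repeated variables allowed, which is exactly the difference between hk and ek.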
As mentioned, Newton's identities can be used to recursively express elementary symmetric polynomials in terms of power sums. Doing so requires the introduction of integer denominators, so it can be done in the ring ΛQ of symmetric functions with rational coefficients:
\begin{align}
e_1&=p_1,\\
e_2&=\tfrac12 p_1^2-\tfrac12 p_2 &&=\tfrac12(p_1^2-p_2),\\
e_3&=\tfrac16 p_1^3-\tfrac12 p_1p_2+\tfrac13 p_3 &&=\tfrac16(p_1^3-3p_1p_2+2p_3),\\
e_4&=\tfrac1{24}p_1^4-\tfrac14 p_1^2p_2+\tfrac18 p_2^2+\tfrac13 p_1p_3-\tfrac14 p_4 &&=\tfrac1{24}(p_1^4-6p_1^2p_2+3p_2^2+8p_1p_3-6p_4),\\
&\ \ \vdots\\
e_n&=(-1)^n\sum_{\substack{m_1+2m_2+\cdots+nm_n=n\\ m_1\ge0,\ldots,m_n\ge0}}\prod_{i=1}^n \frac{(-p_i)^{m_i}}{m_i!\,i^{m_i}}
\end{align}
and so forth.[2] The general formula can be conveniently expressed as
e_k=\frac{(-1)^k}{k!}B_k(-p_1,-1!\,p_2,-2!\,p_3,\ldots,-(k-1)!\,p_k),

where Bk is the complete exponential Bell polynomial. This expression also leads to the following identity for generating functions:
\sum_{k=0}^\infty e_k t^k=\exp\left(\sum_{k=1}^\infty \frac{(-1)^{k+1}}{k}p_k t^k\right).
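This exponential identity can be verified on truncated power series; the sketch below (helper name ours) exponentiates a series with exact rationals via the differential relation b' = a'b:

```python
# Verify sum_k e_k t^k = exp(sum_{k>=1} (-1)^{k+1} p_k t^k / k) as truncated
# series, for sample variable values 2, 3, 5 (an arbitrary illustration).
from fractions import Fraction
from itertools import combinations
from math import prod

def series_exp(a, N):
    """exp of a truncated power series a (with a[0] == 0), to degree N,
    using b' = a' b, i.e. n b[n] = sum_{j=1}^n j a[j] b[n-j]."""
    b = [Fraction(0)] * (N + 1)
    b[0] = Fraction(1)
    for n in range(1, N + 1):
        b[n] = sum(j * a[j] * b[n - j] for j in range(1, n + 1)) / Fraction(n)
    return b

xs, N = [2, 3, 5], 6
p = [Fraction(0)] + [Fraction(sum(x ** k for x in xs)) for k in range(1, N + 1)]
a = [Fraction(0)] + [(-1) ** (k + 1) * p[k] / k for k in range(1, N + 1)]
e = [sum(prod(c) for c in combinations(xs, k)) for k in range(N + 1)]

assert series_exp(a, N) == e  # e_0..e_6 = 1, 10, 31, 30, 0, 0, 0
```

The vanishing of e4, e5, e6 for three variables shows up automatically in the expanded exponential.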
The analogous relations involving complete homogeneous symmetric polynomials can be similarly developed, giving equations
\begin{align}
h_1&=p_1,\\
h_2&=\tfrac12 p_1^2+\tfrac12 p_2 &&=\tfrac12(p_1^2+p_2),\\
h_3&=\tfrac16 p_1^3+\tfrac12 p_1p_2+\tfrac13 p_3 &&=\tfrac16(p_1^3+3p_1p_2+2p_3),\\
h_4&=\tfrac1{24}p_1^4+\tfrac14 p_1^2p_2+\tfrac18 p_2^2+\tfrac13 p_1p_3+\tfrac14 p_4 &&=\tfrac1{24}(p_1^4+6p_1^2p_2+3p_2^2+8p_1p_3+6p_4),\\
&\ \ \vdots\\
h_k&=\sum_{\substack{m_1+2m_2+\cdots+km_k=k\\ m_1\ge0,\ldots,m_k\ge0}}\prod_{i=1}^k \frac{p_i^{m_i}}{m_i!\,i^{m_i}}
\end{align}
and so forth, in which there are only plus signs. In terms of the complete Bell polynomial,
h_k=\frac{1}{k!}B_k(p_1,1!\,p_2,2!\,p_3,\ldots,(k-1)!\,p_k).
These expressions correspond exactly to the cycle index polynomials of the symmetric groups, if one interprets the power sums pi as indeterminates: the coefficient in the expression for hk of any monomial p_1^{m_1}p_2^{m_2}\cdots p_l^{m_l} is equal to the fraction of all permutations of k elements that have m1 fixed points, m2 cycles of length 2, ..., and ml cycles of length l. Explicitly, this coefficient can be written as 1/N, where

N=\prod_{i=1}^l m_i!\,i^{m_i};

this N is the number of permutations commuting with any given permutation of the given cycle type.
It can be proved by considering the following inductive step:
\begin{align}
m\,f(m;m_1,\ldots,m_n)&=f(m-1;m_1-1,\ldots,m_n)+\cdots+f(m-n;m_1,\ldots,m_n-1)\\
1\,m_1\prod_{i=1}^n \frac{1}{m_i!\,i^{m_i}}+\cdots+n\,m_n\prod_{i=1}^n \frac{1}{m_i!\,i^{m_i}}&=m\prod_{i=1}^n \frac{1}{m_i!\,i^{m_i}}
\end{align}
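The multiple-sum formula for hk can also be sketched directly in code, by enumerating the multiplicity vectors (m1, ..., mk); the helper names below are ours:

```python
# h_k from power sums via the partition sum
# h_k = sum prod_i p_i^{m_i} / (m_i! * i^{m_i}),
# enumerating all (m_1,...,m_k) with m_1 + 2 m_2 + ... + k m_k = k.
from fractions import Fraction
from math import factorial

def multiplicity_vectors(k):
    """Yield all (m_1,...,m_k) with sum of i*m_i equal to k."""
    def rec(i, rem, acc):
        if i > k:
            if rem == 0:
                yield tuple(acc)
            return
        for m in range(rem // i + 1):
            yield from rec(i + 1, rem - i * m, acc + [m])
    yield from rec(1, k, [])

def h_from_power_sums(ps, k):
    total = Fraction(0)
    for ms in multiplicity_vectors(k):
        term = Fraction(1)
        for i, m in enumerate(ms, start=1):
            if m:
                term *= Fraction(ps[i - 1]) ** m / (factorial(m) * i ** m)
        total += term
    return total

# With p_1, p_2 = 10, 38 (the power sums of 2, 3, 5): h_2 = (p_1^2 + p_2)/2 = 69.
assert h_from_power_sums([10, 38], 2) == 69
```

Each multiplicity vector corresponds to one cycle type of a permutation of k elements, matching the cycle-index interpretation above.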
By analogy with the derivation of the generating function of the en, we can also obtain the generating function of the hn in terms of the power sums, as:

\sum_{k=0}^\infty h_k t^k=\exp\left(\sum_{k=1}^\infty \frac{p_k}{k}t^k\right).

This generating function is thus the plethystic exponential of p_1 t=(x_1+\cdots+x_n)t.
One may also use Newton's identities to express power sums in terms of elementary symmetric polynomials, which does not introduce denominators:
\begin{align}
p_1&=e_1,\\
p_2&=e_1^2-2e_2,\\
p_3&=e_1^3-3e_2e_1+3e_3,\\
p_4&=e_1^4-4e_2e_1^2+4e_3e_1+2e_2^2-4e_4,\\
p_5&=e_1^5-5e_2e_1^3+5e_3e_1^2+5e_2^2e_1-5e_4e_1-5e_3e_2+5e_5,\\
p_6&=e_1^6-6e_2e_1^4+6e_3e_1^3+9e_2^2e_1^2-6e_4e_1^2-12e_3e_2e_1+6e_5e_1-2e_2^3+3e_3^2+6e_4e_2-6e_6.
\end{align}
The first four formulas were obtained by Albert Girard in 1629 (thus before Newton).[3]
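Girard's formulas can be spot-checked numerically; for example, the p6 formula for the values 2, 3, 5 (so that e4 = e5 = e6 = 0):

```python
# Spot-check of the p_6 formula for the sample values 2, 3, 5
# (an arbitrary choice; e_4 = e_5 = e_6 = 0 since there are only 3 variables).
from itertools import combinations
from math import prod

xs = [2, 3, 5]
e = [sum(prod(c) for c in combinations(xs, k)) for k in range(7)]
e1, e2, e3, e4, e5, e6 = e[1:7]

p6 = (e1**6 - 6*e2*e1**4 + 6*e3*e1**3 + 9*e2**2*e1**2 - 6*e4*e1**2
      - 12*e3*e2*e1 + 6*e5*e1 - 2*e2**3 + 3*e3**2 + 6*e4*e2 - 6*e6)
assert p6 == sum(x**6 for x in xs)  # both sides equal 16418
```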
The general formula (for all positive integers m) is:
p_m=(-1)^m m\sum_{\substack{r_1+2r_2+\cdots+mr_m=m\\ r_1\ge0,\ldots,r_m\ge0}}\frac{(r_1+r_2+\cdots+r_m-1)!}{r_1!\,r_2!\cdots r_m!}\prod_{i=1}^m (-e_i)^{r_i}.
This can be conveniently stated in terms of ordinary Bell polynomials as
p_m=(-1)^m m\sum_{k=1}^m \frac{1}{k}\hat{B}_{m,k}(-e_1,\ldots,-e_{m-k+1}),
or equivalently as the generating function:
\begin{align}
\sum_{k=1}^\infty (-1)^{k-1}p_k\frac{t^k}{k}&=\ln\left(1+e_1t+e_2t^2+e_3t^3+\cdots\right)\\
&=e_1t-\frac12\left(e_1^2-2e_2\right)t^2+\frac13\left(e_1^3-3e_1e_2+3e_3\right)t^3+\cdots
\end{align}
The multiple summation formula above can be proved by considering the following inductive step:
\begin{align}
f(m;r_1,\ldots,r_n)={}&f(m-1;r_1-1,\ldots,r_n)+\cdots+f(m-n;r_1,\ldots,r_n-1)\\[8pt]
={}&\frac{1}{(r_1-1)!\cdots r_n!}(m-1)(r_1+\cdots+r_n-2)!+\cdots\\
&\cdots+\frac{1}{r_1!\cdots(r_n-1)!}(m-n)(r_1+\cdots+r_n-2)!\\[8pt]
={}&\frac{1}{r_1!\cdots r_n!}\left[r_1(m-1)+\cdots+r_n(m-n)\right]\left[r_1+\cdots+r_n-2\right]!\\[8pt]
={}&\frac{1}{r_1!\cdots r_n!}\left[m(r_1+\cdots+r_n)-m\right]\left[r_1+\cdots+r_n-2\right]!\\[8pt]
={}&\frac{m(r_1+\cdots+r_n-1)!}{r_1!\cdots r_n!}
\end{align}
Finally one may use the variant identities involving complete homogeneous symmetric polynomials similarly to express power sums in terms of them:
\begin{align}
p_1&=+h_1,\\
p_2&=-h_1^2+2h_2,\\
p_3&=+h_1^3-3h_2h_1+3h_3,\\
p_4&=-h_1^4+4h_2h_1^2-4h_3h_1-2h_2^2+4h_4,\\
p_5&=+h_1^5-5h_2h_1^3+5h_2^2h_1+5h_3h_1^2-5h_3h_2-5h_4h_1+5h_5,\\
p_6&=-h_1^6+6h_2h_1^4-9h_2^2h_1^2-6h_3h_1^3+2h_2^3+12h_3h_2h_1+6h_4h_1^2-3h_3^2-6h_4h_2-6h_1h_5+6h_6,
\end{align}
and so on. Apart from the replacement of each ei by the corresponding hi, the only change with respect to the previous family of identities is in the signs of the terms, which in this case depend just on the number of factors present: the sign of the monomial is $-(-1)^{m_1+m_2+m_3+\cdots}$. In particular, the above description of the absolute value of the coefficients applies here as well.
The general formula (for all non-negative integers m) is:
p_m=-\sum_{\substack{r_1+2r_2+\cdots+mr_m=m\\ r_1\ge0,\ldots,r_m\ge0}}\frac{m(r_1+r_2+\cdots+r_m-1)!}{r_1!\,r_2!\cdots r_m!}\prod_{i=1}^m (-h_i)^{r_i}.
One can obtain explicit formulas for the above expressions in the form of determinants, by considering the first n of Newton's identities (or their counterparts for the complete homogeneous polynomials) as linear equations in which the elementary symmetric functions are known and the power sums are unknowns (or vice versa), and applying Cramer's rule to find the solution for the final unknown. For instance, taking Newton's identities in the form
\begin{align}
e_1&=1p_1,\\
2e_2&=e_1p_1-1p_2,\\
3e_3&=e_2p_1-e_1p_2+1p_3,\\
&\ \ \vdots\\
ne_n&=e_{n-1}p_1-e_{n-2}p_2+\cdots+(-1)^n e_1p_{n-1}+(-1)^{n+1}1p_n
\end{align}

we consider p_1,-p_2,p_3,\ldots,(-1)^{n-1}p_n as unknowns and solve for the final one, giving
\begin{align}
p_n={}&(-1)^{n-1}\begin{vmatrix} 1&0&\cdots&&e_1\\ e_1&1&0&\cdots&2e_2\\ e_2&e_1&1&&3e_3\\ \vdots&&\ddots&\ddots&\vdots\\ e_{n-1}&\cdots&e_2&e_1&ne_n \end{vmatrix}\begin{vmatrix} 1&0&\cdots&\\ e_1&1&0&\cdots\\ e_2&e_1&1&\\ \vdots&&\ddots&\ddots\\ e_{n-1}&\cdots&e_2&e_1&1 \end{vmatrix}^{-1}\\[7pt]
={}&(-1)^{n-1}\begin{vmatrix} 1&0&\cdots&&e_1\\ e_1&1&0&\cdots&2e_2\\ e_2&e_1&1&&3e_3\\ \vdots&&\ddots&\ddots&\vdots\\ e_{n-1}&\cdots&e_2&e_1&ne_n \end{vmatrix},
\end{align}

since the second determinant, of a lower unitriangular matrix, equals 1.
Solving for en instead of pn, and similarly for the complete homogeneous polynomials hn, gives the analogous expressions

\begin{align}
e_n=\frac{1}{n!}&\begin{vmatrix} p_1&1&0&\cdots\\ p_2&p_1&2&0&\cdots\\ \vdots&&\ddots&\ddots\\ p_{n-1}&p_{n-2}&\cdots&p_1&n-1\\ p_n&p_{n-1}&\cdots&p_2&p_1 \end{vmatrix},\\[7pt]
p_n=(-1)^{n-1}&\begin{vmatrix} h_1&1&0&\cdots\\ 2h_2&h_1&1&0&\cdots\\ 3h_3&h_2&h_1&1\\ \vdots&&&\ddots&\ddots\\ nh_n&h_{n-1}&\cdots&&h_1 \end{vmatrix},\\[7pt]
h_n=\frac{1}{n!}&\begin{vmatrix} p_1&-1&0&\cdots\\ p_2&p_1&-2&0&\cdots\\ \vdots&&\ddots&\ddots\\ p_{n-1}&p_{n-2}&\cdots&p_1&1-n\\ p_n&p_{n-1}&\cdots&p_2&p_1 \end{vmatrix}.
\end{align}

Note that the use of determinants means that the formula for hn has additional minus signs compared with that for en, while the situation for the expanded forms given earlier is the opposite.
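The determinant formulas are easy to test with a small cofactor-expansion determinant (an illustrative sketch; the helper name is ours):

```python
# Check two determinant formulas for n = 3 with roots 2, 3, 5
# (e_1, e_2, e_3 = 10, 31, 30 and p_1, p_2, p_3 = 10, 38, 160).
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row (fine for tiny n)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

e1, e2, e3 = 10, 31, 30
p1, p2, p3 = 10, 38, 160

# p_n = (-1)^{n-1} times the determinant of the bordered matrix, here n = 3:
M = [[1, 0, e1],
     [e1, 1, 2 * e2],
     [e2, e1, 3 * e3]]
assert (-1) ** (3 - 1) * det(M) == p3

# e_n = (1/n!) times the determinant of the power-sum matrix:
M2 = [[p1, 1, 0],
      [p2, p1, 2],
      [p3, p2, p1]]
assert Fraction(det(M2), 6) == e3
```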
Each of Newton's identities can easily be checked by elementary algebra; however, their validity in general needs a proof. Here are some possible derivations.
One can obtain the k-th Newton identity in k variables by substitution into
\prod_{i=1}^k (t-x_i)=\sum_{i=0}^k (-1)^{k-i}e_{k-i}(x_1,\ldots,x_k)\,t^i

as follows. Substituting xj for t gives

0=\sum_{i=0}^k (-1)^{k-i}e_{k-i}(x_1,\ldots,x_k)\,x_j^i \quad\text{for } 1\le j\le k.
Summing over all j gives
0=(-1)^k k\,e_k(x_1,\ldots,x_k)+\sum_{i=1}^k (-1)^{k-i}e_{k-i}(x_1,\ldots,x_k)\,p_i(x_1,\ldots,x_k),
where the terms for i = 0 were taken out of the sum because p0 is (usually) not defined. This equation immediately gives the k-th Newton identity in k variables. Since this is an identity of symmetric polynomials (homogeneous) of degree k, its validity for any number of variables follows from its validity for k variables. Concretely, the identities in n < k variables can be deduced by setting k − n variables to zero. The k-th Newton identity in n > k variables contains more terms on both sides of the equation than the one in k variables, but its validity will be assured if the coefficients of any monomial match. Because no individual monomial involves more than k of the variables, the monomial will survive the substitution of zero for some set of n − k (other) variables, after which the equality of coefficients is one that arises in the k-th Newton identity in k (suitably chosen) variables.
Another derivation can be obtained by computations in the ring of formal power series R[[t]], where R is Z[x1, ..., xn], the ring of polynomials in n variables x1, ..., xn over the integers.
Starting again from the basic relation
\prod_{i=1}^n (t-x_i)=\sum_{k=0}^n (-1)^k a_k t^{n-k}
and "reversing the polynomials" by substituting 1/t for t and then multiplying both sides by tn to remove negative powers of t, gives
\prod_{i=1}^n (1-x_i t)=\sum_{k=0}^n (-1)^k a_k t^k.
(the above computation should be performed in the field of fractions of R; alternatively, the identity can be obtained simply by evaluating the product on the left side)
Swapping sides and expressing the ai as the elementary symmetric polynomials they stand for gives the identity
\sum_{k=0}^n (-1)^k e_k(x_1,\ldots,x_n)\,t^k=\prod_{i=1}^n (1-x_i t).
One formally differentiates both sides with respect to t, and then (for convenience) multiplies by t, to obtain
\begin{align}
\sum_{k=0}^n (-1)^k k\,e_k(x_1,\ldots,x_n)\,t^k&=t\sum_{i=1}^n \left[(-x_i)\prod\nolimits_{j\ne i}(1-x_jt)\right]\\
&=-\left(\sum_{i=1}^n \frac{x_it}{1-x_it}\right)\prod\nolimits_{j=1}^n (1-x_jt)\\
&=-\left(\sum_{i=1}^n \frac{x_it}{1-x_it}\right)\left[\sum_{\ell=0}^n (-1)^\ell e_\ell(x_1,\ldots,x_n)t^\ell\right]\\
&=-\left[\sum_{j=1}^\infty p_j(x_1,\ldots,x_n)t^j\right]\left[\sum_{\ell=0}^n (-1)^\ell e_\ell(x_1,\ldots,x_n)t^\ell\right],
\end{align}
where the polynomial on the right-hand side was first rewritten as a rational function in order to be able to factor a product out of the summation; then the fraction in the summand was developed as a series in t, using the formula
\frac{X}{1-X}=X+X^2+X^3+X^4+X^5+\cdots,
and finally the coefficient of each t j was collected, giving a power sum. (The series in t is a formal power series, but may alternatively be thought of as a series expansion for t sufficiently close to 0, for those more comfortable with that; in fact one is not interested in the function here, but only in the coefficients of the series.) Comparing coefficients of tk on both sides one obtains
(-1)^k k\,e_k(x_1,\ldots,x_n)=\sum_{j=1}^k (-1)^{k-j-1}p_j(x_1,\ldots,x_n)\,e_{k-j}(x_1,\ldots,x_n),
which gives the k-th Newton identity.
The following derivation, given essentially in (Mead, 1992), is formulated in the ring of symmetric functions for clarity (all identities are independent of the number of variables). Fix some k > 0, and define the symmetric function r(i) for 2 ≤ i ≤ k as the sum of all distinct monomials of degree k obtained by multiplying one variable raised to the power i with k − i distinct other variables (this is the monomial symmetric function mγ where γ is a hook shape (i,1,1,...,1)). In particular r(k) = pk; for r(1) the description would amount to that of ek, but this case was excluded since here monomials no longer have any distinguished variable. All products piek−i can be expressed in terms of the r(j) with the first and last case being somewhat special. One has
\begin{align}
p_ie_{k-i}&=r(i)+r(i+1)&&\text{for } 1<i<k,\\
p_ke_0&=p_k=r(k),\\
p_1e_{k-1}&=ke_k+r(2).
\end{align}
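These three relations can be checked numerically; the sketch below uses four sample variables so that the middle case 1 < i < k is nonempty (the values and helper names are ours):

```python
# Numeric check of p_i e_{k-i} = r(i) + r(i+1) and the boundary cases for
# k = 3 and four sample values (2, 3, 5, 7 are an arbitrary choice).
from itertools import combinations
from math import prod

xs = [2, 3, 5, 7]
k = 3

def p(i):
    return sum(x ** i for x in xs)

def e(i):
    return sum(prod(c) for c in combinations(xs, i))

def r(i):
    """Sum of monomials: one variable raised to the power i, times a product
    of k - i other distinct variables (the hook-shape monomial symmetric function)."""
    return sum(x ** i * sum(prod(c) for c in combinations(xs[:j] + xs[j + 1:], k - i))
               for j, x in enumerate(xs))

assert p(2) * e(k - 2) == r(2) + r(3)      # case 1 < i < k
assert p(k) * e(0) == r(k)                 # case i = k
assert p(1) * e(k - 1) == k * e(k) + r(2)  # case i = 1
```

Summing these relations with alternating signs telescopes the r(j) terms away, which is exactly how this derivation recovers the k-th Newton identity.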
A short combinatorial proof of Newton's identities is given in (Zeilberger, 1984).[4]