In mathematics, specifically in algebraic combinatorics and commutative algebra, the complete homogeneous symmetric polynomials are a specific kind of symmetric polynomial. Every symmetric polynomial can be expressed as a polynomial expression in complete homogeneous symmetric polynomials.
The complete homogeneous symmetric polynomial of degree $k$ in $n$ variables $X_1, \ldots, X_n$, written $h_k$ for $k = 0, 1, 2, \ldots$, is the sum of all monomials of total degree $k$ in the variables. Formally,
h_k(X_1, X_2, \ldots, X_n) = \sum_{1 \le i_1 \le i_2 \le \cdots \le i_k \le n} X_{i_1} X_{i_2} \cdots X_{i_k}.
The formula can also be written as:
h_k(X_1, X_2, \ldots, X_n) = \sum_{l_1 + l_2 + \cdots + l_n = k \atop l_i \ge 0} X_1^{l_1} X_2^{l_2} \cdots X_n^{l_n}.
The first few of these polynomials are
\begin{align}
h_0(X_1, X_2, \ldots, X_n) &= 1, \\
h_1(X_1, X_2, \ldots, X_n) &= \sum_{1 \le j \le n} X_j, \\
h_2(X_1, X_2, \ldots, X_n) &= \sum_{1 \le j \le k \le n} X_j X_k, \\
h_3(X_1, X_2, \ldots, X_n) &= \sum_{1 \le j \le k \le l \le n} X_j X_k X_l.
\end{align}
Thus, for each nonnegative integer $k$, there exists exactly one complete homogeneous symmetric polynomial of degree $k$ in $n$ variables.
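As a quick computational illustration (a minimal Python sketch; the helper name `complete_homogeneous` is chosen here, not taken from the article), $h_k$ can be evaluated at a point by enumerating the weakly increasing index sequences of the definition:

```python
# Illustrative sketch: evaluate h_k numerically by summing over all weakly
# increasing index sequences 1 <= i_1 <= ... <= i_k <= n, as in the definition.
from itertools import combinations_with_replacement
from math import prod

def complete_homogeneous(k, xs):
    """h_k(x_1, ..., x_n): sum of all monomials of total degree k."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# h_2(X_1, X_2) at (X_1, X_2) = (2, 3): 2^2 + 2*3 + 3^2 = 19
print(complete_homogeneous(2, [2, 3]))  # 19
```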
Another way of rewriting the definition is to take the summation over all sequences $i_1, i_2, \ldots, i_k$, without the ordering condition $i_p \le i_{p+1}$:
h_k(X_1, X_2, \ldots, X_n) = \sum_{1 \le i_1, i_2, \ldots, i_k \le n} \frac{m_1! \, m_2! \cdots m_n!}{k!} \, X_{i_1} X_{i_2} \cdots X_{i_k},
here $m_p$ is the multiplicity of $p$ in the sequence $(i_1, \ldots, i_k)$.
For example
h_2(X_1, X_2) = \frac{2!\,0!}{2!} X_1^2 + \frac{1!\,1!}{2!} X_1 X_2 + \frac{1!\,1!}{2!} X_2 X_1 + \frac{0!\,2!}{2!} X_2^2 = X_1^2 + X_1 X_2 + X_2^2.
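The weighted form of the sum can be checked numerically; the following Python sketch (illustrative; the helper names are ad hoc) compares the unordered sum with multiplicity weights against the direct definition:

```python
# Check that summing over *all* index sequences with the weight
# m_1! m_2! ... m_n! / k! reproduces h_k, as in the formula above.
from itertools import product, combinations_with_replacement
from math import factorial, prod
from collections import Counter
from fractions import Fraction

def h_ordered(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def h_unordered(k, xs):
    total = Fraction(0)
    for seq in product(range(len(xs)), repeat=k):      # all sequences, no ordering condition
        mults = Counter(seq)                            # m_p = multiplicity of index p
        weight = Fraction(prod(factorial(m) for m in mults.values()), factorial(k))
        total += weight * prod(xs[i] for i in seq)
    return total

xs = [2, 3, 5]
print(h_ordered(2, xs), h_unordered(2, xs))  # both give h_2(2, 3, 5) = 69
```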
The polynomial ring formed by taking all integral linear combinations of products of the complete homogeneous symmetric polynomials is a commutative ring.
The following lists the basic (as explained below) complete homogeneous symmetric polynomials for the first three positive values of $n$.
For $n = 1$:
h_1(X_1) = X_1.
For $n = 2$:
\begin{align}
h_1(X_1, X_2) &= X_1 + X_2, \\
h_2(X_1, X_2) &= X_1^2 + X_1 X_2 + X_2^2.
\end{align}
For $n = 3$:
\begin{align}
h_1(X_1, X_2, X_3) &= X_1 + X_2 + X_3, \\
h_2(X_1, X_2, X_3) &= X_1^2 + X_2^2 + X_3^2 + X_1 X_2 + X_1 X_3 + X_2 X_3, \\
h_3(X_1, X_2, X_3) &= X_1^3 + X_2^3 + X_3^3 + X_1^2 X_2 + X_1^2 X_3 + X_2^2 X_1 + X_2^2 X_3 + X_3^2 X_1 + X_3^2 X_2 + X_1 X_2 X_3.
\end{align}
The complete homogeneous symmetric polynomials are characterized by the following identity of formal power series in $t$:
\sum_{k=0}^\infty h_k(X_1, \ldots, X_n) \, t^k = \prod_{j=1}^n \sum_{i=0}^\infty (X_j t)^i = \prod_{i=1}^n \frac{1}{1 - X_i t}.
The formula above can be seen as a special case of the MacMahon master theorem. The right-hand side can be interpreted as
\frac{1}{\det(1 - tM)},
where $t \in \mathbb{R}$ and $M = \operatorname{diag}(X_1, \ldots, X_n)$ is the diagonal matrix with the $X_i$ on its diagonal.
Performing some standard computations, we can also write the generating function as
\sum_{k=0}^\infty h_k(X_1, \ldots, X_n) \, t^k = \exp\left( \sum_{j=1}^\infty \frac{(X_1^j + \cdots + X_n^j) \, t^j}{j} \right),
which is the power series expansion of the plethystic exponential of $(X_1 + \cdots + X_n)t$ (note that $p_j := X_1^j + \cdots + X_n^j$ is the $j$-th power sum symmetric polynomial).
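These identities can be verified for small cases with a computer algebra system. The sketch below (an illustration using sympy; the variable names and truncation order are choices made here) compares the coefficients of $t^k$ in $\prod_i 1/(1 - X_i t)$, in $1/\det(1 - tM)$, and in the exponential of the power sums with $h_k$ itself:

```python
import sympy as sp
from itertools import combinations_with_replacement

n, N = 3, 5                                   # number of variables, series truncation order
X = sp.symbols(f'X1:{n + 1}')
t = sp.symbols('t')

def h(k):                                     # h_k from the definition
    return sum(sp.Mul(*c) for c in combinations_with_replacement(X, k))

gen_prod = sp.Mul(*[1 / (1 - x * t) for x in X])            # prod_i 1/(1 - X_i t)
gen_det = 1 / (sp.eye(n) - t * sp.diag(*X)).det()           # 1/det(1 - t M)
p = lambda j: sum(x**j for x in X)                          # power sums p_j
gen_pe = sp.exp(sum(p(j) * t**j / j for j in range(1, N)))  # truncated plethystic exponential

for expr in (gen_prod, gen_det, gen_pe):
    poly = sp.series(expr, t, 0, N).removeO()
    assert all(sp.expand(poly.coeff(t, k) - h(k)) == 0 for k in range(N))
```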
There is a fundamental relation between the elementary symmetric polynomials and the complete homogeneous ones:
\sum_{i=0}^m (-1)^i e_i(X_1, \ldots, X_n) \, h_{m-i}(X_1, \ldots, X_n) = 0,
which is valid for all $m > 0$, and any number of variables $n$. The easiest way to see that it holds is from an identity of formal power series in $t$ for the elementary symmetric polynomials, analogous to the one given above for the complete homogeneous ones, which can also be written in terms of plethystic exponentials as:
\sum_{k=0}^\infty e_k(X_1, \ldots, X_n) \, (-t)^k = \prod_{i=1}^n (1 - X_i t) = \operatorname{PE}[-(X_1 + \cdots + X_n)t]
(this is actually an identity of polynomials in $t$, because after $e_n(X_1, \ldots, X_n)$ the elementary symmetric polynomials become zero). Multiplying this by the generating function for the complete homogeneous symmetric polynomials, one obtains the constant series 1 (equivalently, plethystic exponentials satisfy the usual properties of an exponential), and the relation between the elementary and complete homogeneous polynomials follows from comparing coefficients of $t^m$. A somewhat more direct way to understand that relation is to consider the contributions in the summation involving a fixed monomial $X^\alpha$ of degree $m$. For any subset $S$ of the variables appearing with nonzero exponent in the monomial, there is a contribution involving the product $X_S$ of those variables as a term from $e_s(X_1, \ldots, X_n)$, where $s = \# S$, and the monomial $X^\alpha / X_S$ from $h_{m-s}(X_1, \ldots, X_n)$; this contribution has coefficient $(-1)^s$. The relation then follows from the fact that
\sum_{s=0}^l \binom{l}{s} (-1)^s = (1-1)^l = 0 \quad \text{for } l > 0,
by the binomial formula, where $l$ denotes the number of distinct variables occurring (with nonzero exponent) in $X^\alpha$. Since $e_0(X_1, \ldots, X_n)$ and $h_0(X_1, \ldots, X_n)$ are both equal to 1, one can isolate from the relation either the first or the last terms of the summation. The former gives a sequence of equations:
\begin{align}
h_1(X_1, \ldots, X_n) &= e_1(X_1, \ldots, X_n), \\
h_2(X_1, \ldots, X_n) &= h_1(X_1, \ldots, X_n) \, e_1(X_1, \ldots, X_n) - e_2(X_1, \ldots, X_n), \\
h_3(X_1, \ldots, X_n) &= h_2(X_1, \ldots, X_n) \, e_1(X_1, \ldots, X_n) - h_1(X_1, \ldots, X_n) \, e_2(X_1, \ldots, X_n) + e_3(X_1, \ldots, X_n),
\end{align}
and so on, which allows one to recursively express the successive complete homogeneous symmetric polynomials in terms of the elementary symmetric polynomials; the latter gives a set of equations
\begin{align}
e_1(X_1, \ldots, X_n) &= h_1(X_1, \ldots, X_n), \\
e_2(X_1, \ldots, X_n) &= h_1(X_1, \ldots, X_n) \, e_1(X_1, \ldots, X_n) - h_2(X_1, \ldots, X_n), \\
e_3(X_1, \ldots, X_n) &= h_1(X_1, \ldots, X_n) \, e_2(X_1, \ldots, X_n) - h_2(X_1, \ldots, X_n) \, e_1(X_1, \ldots, X_n) + h_3(X_1, \ldots, X_n),
\end{align}
and so forth, which allows one to go in the other direction and express the elementary symmetric polynomials in terms of the complete homogeneous ones. The first $n$ elementary and complete homogeneous symmetric polynomials play perfectly similar roles in these relations, even though the former polynomials then become zero, whereas the latter do not. This phenomenon can be understood in the setting of the ring of symmetric functions. It has a ring automorphism that interchanges the sequences of the $n$ elementary and first $n$ complete homogeneous symmetric functions.
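The fundamental relation itself is easy to check symbolically; the following sympy sketch (illustrative, with ad hoc helper names) verifies it for three variables and small degrees:

```python
# Verify sum_{i=0}^m (-1)^i e_i h_{m-i} = 0 for m = 1..4 in three variables.
import sympy as sp
from itertools import combinations, combinations_with_replacement

X = sp.symbols('X1:4')

def e(k):   # elementary symmetric polynomial e_k
    return sum(sp.Mul(*c) for c in combinations(X, k))

def h(k):   # complete homogeneous symmetric polynomial h_k
    return sum(sp.Mul(*c) for c in combinations_with_replacement(X, k))

for m in range(1, 5):
    relation = sum((-1)**i * e(i) * h(m - i) for i in range(m + 1))
    assert sp.expand(relation) == 0
```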
The set of complete homogeneous symmetric polynomials of degree 1 to $n$ in $n$ variables generates the ring of symmetric polynomials in $n$ variables. More specifically, the ring of symmetric polynomials with integer coefficients equals the integral polynomial ring
\mathbb{Z}\big[h_1(X_1, \ldots, X_n), \ldots, h_n(X_1, \ldots, X_n)\big].
Equivalently, the polynomials $h_1(X_1, \ldots, X_n), \ldots, h_n(X_1, \ldots, X_n)$ are algebraically independent over $\mathbb{Z}$ and generate this ring.
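As a small illustration of this generating property (a sympy sketch; the particular expressions are worked out here, not quoted from the article), the symmetric polynomials $e_2$ and $p_2$ in three variables are written as polynomials in $h_1$ and $h_2$:

```python
# e_2 = h_1^2 - h_2 and p_2 = 2 h_2 - h_1^2, checked in three variables.
import sympy as sp
from itertools import combinations, combinations_with_replacement

X = sp.symbols('X1:4')
h1 = sum(X)
h2 = sum(sp.Mul(*c) for c in combinations_with_replacement(X, 2))
e2 = sum(sp.Mul(*c) for c in combinations(X, 2))
p2 = sum(x**2 for x in X)

assert sp.expand(h1**2 - h2 - e2) == 0
assert sp.expand(2*h2 - h1**2 - p2) == 0
```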
The evaluation at integers of complete homogeneous polynomials and elementary symmetric polynomials is related to Stirling numbers:
\begin{align}
h_n(1, 2, \ldots, k) &= \left\{ {n + k \atop k} \right\}, \\
e_n(1, 2, \ldots, k) &= \left[ {k + 1 \atop k + 1 - n} \right].
\end{align}
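These evaluations can be checked directly; the following plain-Python sketch (illustrative; the Stirling-number helpers are implemented here via the usual recurrences) verifies both identities for small $n$ and $k$:

```python
# h_n(1,...,k) = S(n+k, k) (Stirling, 2nd kind) and
# e_n(1,...,k) = c(k+1, k+1-n) (unsigned Stirling, 1st kind).
from itertools import combinations, combinations_with_replacement
from math import prod
from functools import lru_cache

@lru_cache(None)
def stirling2(n, k):          # S(n, k): partitions of n elements into k blocks
    if n == k: return 1
    if k == 0 or k > n: return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(None)
def stirling1(n, k):          # c(n, k): permutations of n elements with k cycles
    if n == k: return 1
    if k == 0 or k > n: return 0
    return (n - 1) * stirling1(n - 1, k) + stirling1(n - 1, k - 1)

for k in range(1, 6):
    xs = range(1, k + 1)
    for n in range(1, k + 1):
        h_n = sum(prod(c) for c in combinations_with_replacement(xs, n))
        e_n = sum(prod(c) for c in combinations(xs, n))
        assert h_n == stirling2(n + k, k)
        assert e_n == stirling1(k + 1, k + 1 - n)
```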
The polynomial $h_k(X_1, \ldots, X_n)$ is also the sum of all distinct monomial symmetric polynomials of degree $k$ in $X_1, \ldots, X_n$, for instance
\begin{align}
h_3(X_1, X_2, X_3) &= m_{(3)}(X_1, X_2, X_3) + m_{(2,1)}(X_1, X_2, X_3) + m_{(1,1,1)}(X_1, X_2, X_3) \\
&= \left(X_1^3 + X_2^3 + X_3^3\right) + \left(X_1^2 X_2 + X_1^2 X_3 + X_2^2 X_1 + X_2^2 X_3 + X_3^2 X_1 + X_3^2 X_2\right) + \left(X_1 X_2 X_3\right).
\end{align}
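The decomposition into monomial symmetric polynomials can be reproduced by grouping the monomials of $h_k$ according to the partition formed by their exponents; a short illustrative Python sketch:

```python
# Group the monomials of h_3(X_1, X_2, X_3) by the partition of their
# exponents, recovering m_(3), m_(2,1) and m_(1,1,1).
from itertools import combinations_with_replacement
from collections import Counter, defaultdict

n, k = 3, 3
by_partition = defaultdict(list)
for idx in combinations_with_replacement(range(1, n + 1), k):
    exponents = tuple(sorted(Counter(idx).values(), reverse=True))   # e.g. (2, 1)
    monomial = "*".join(f"X{i}" for i in idx)
    by_partition[exponents].append(monomial)

for part, monomials in sorted(by_partition.items(), reverse=True):
    print(part, monomials)
# (3,)      -> the 3 monomials of m_(3)
# (2, 1)    -> the 6 monomials of m_(2,1)
# (1, 1, 1) -> the single monomial X1*X2*X3 of m_(1,1,1)
```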
Newton's identities for homogeneous symmetric polynomials give the simple recursive formula
k h_k = \sum_{i=1}^k h_{k-i} \, p_i,
where $h_k = h_k(X_1, \ldots, X_n)$ and
p_k(X_1, \ldots, X_n) = \sum\nolimits_{i=1}^n X_i^k = X_1^k + \cdots + X_n^k
is the $k$-th power sum symmetric polynomial. For small $k$,
\begin{align}
h_1 &= p_1, \\
2h_2 &= h_1 p_1 + p_2, \\
3h_3 &= h_2 p_1 + h_1 p_2 + p_3.
\end{align}
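Newton's identity above can be verified symbolically for small $k$; an illustrative sympy sketch (the helper names are chosen here):

```python
# Verify k*h_k = sum_{i=1}^k h_{k-i} p_i for k = 1..4 in three variables.
import sympy as sp
from itertools import combinations_with_replacement

X = sp.symbols('X1:4')

def h(k):
    return sum(sp.Mul(*c) for c in combinations_with_replacement(X, k))

def p(k):
    return sum(x**k for x in X)

for k in range(1, 5):
    lhs = k * h(k)
    rhs = sum(h(k - i) * p(i) for i in range(1, k + 1))
    assert sp.expand(lhs - rhs) == 0
```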
Consider an $n$-dimensional vector space $V$ and a linear operator $M : V \to V$ with eigenvalues $X_1, X_2, \ldots, X_n$. Denote by $\operatorname{Sym}^k(V)$ its $k$-th symmetric tensor power and by $M^{\operatorname{Sym}(k)}$ the induced operator $\operatorname{Sym}^k(V) \to \operatorname{Sym}^k(V)$.
Proposition:
\operatorname{Trace}_{\operatorname{Sym}^k(V)} \left( M^{\operatorname{Sym}(k)} \right) = h_k(X_1, X_2, \ldots, X_n).
The proof is easy: consider an eigenbasis $e_i$ for $M$. The basis in $\operatorname{Sym}^k(V)$ can be indexed by sequences $i_1 \le i_2 \le \cdots \le i_k$; indeed, consider the symmetrizations of
e_{i_1} \otimes e_{i_2} \otimes \cdots \otimes e_{i_k}.
All such vectors are eigenvectors of $M^{\operatorname{Sym}(k)}$ with eigenvalues
X_{i_1} X_{i_2} \cdots X_{i_k},
hence the proposition is true.
Similarly one can express elementary symmetric polynomials via traces over antisymmetric tensor powers. Both expressions are subsumed in expressions of Schur polynomials as traces over Schur functors, which can be seen as the Weyl character formula for $GL(V)$.
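The proposition can also be checked numerically by building the induced operator on $\operatorname{Sym}^k(V)$ explicitly. The numpy sketch below (illustrative; the construction of the matrix entries and all names are choices made here) compares its trace with $h_k$ evaluated at the eigenvalues of $M$:

```python
import numpy as np
from itertools import combinations_with_replacement, permutations
from math import prod

def h(k, xs):
    # complete homogeneous symmetric polynomial h_k evaluated at xs
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def trace_sym_power(M, k):
    n = M.shape[0]
    total = 0.0
    for I in combinations_with_replacement(range(n), k):
        # Diagonal entry of Sym^k(M) at the basis vector e_{i_1}...e_{i_k}:
        # sum over the distinct orderings J of the multiset I of prod_m M[J_m, I_m].
        for J in set(permutations(I)):
            total += prod(M[J[m], I[m]] for m in range(k))
    return total

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
eigs = np.linalg.eigvals(M)
for k in range(1, 4):
    assert np.isclose(trace_sym_power(M, k), h(k, eigs))
```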
If we replace the variables $X_i$ with $1 + X_i$, the symmetric polynomial $h_k(1 + X_1, \ldots, 1 + X_n)$ can be written as a linear combination of the $h_j(X_1, \ldots, X_n)$ for $0 \le j \le k$:
h_k(1 + X_1, \ldots, 1 + X_n) = \sum_{j=0}^k \binom{n + k - 1}{k - j} h_j(X_1, \ldots, X_n).
The proof, as found in Lemma 3.5 of,[1] relies on the combinatorial properties of increasing $k$-tuples $(i_1, \ldots, i_k)$ with $1 \le i_1 \le \cdots \le i_k \le n$.
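The identity can be confirmed for small $n$ and $k$ with a short sympy sketch (illustrative; the helper names are chosen here):

```python
# Check h_k(1+X_1, ..., 1+X_n) = sum_{j=0}^k C(n+k-1, k-j) h_j(X_1, ..., X_n).
import sympy as sp
from itertools import combinations_with_replacement

def h(k, xs):
    return sum(sp.Mul(*c) for c in combinations_with_replacement(xs, k))

n = 3
X = sp.symbols(f'X1:{n + 1}')
shifted = [1 + x for x in X]
for k in range(0, 4):
    lhs = h(k, shifted)
    rhs = sum(sp.binomial(n + k - 1, k - j) * h(j, X) for j in range(k + 1))
    assert sp.expand(lhs - rhs) == 0
```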