Suppose that x_{ij} for i, j = 1, ..., n are commuting variables. Write E_{ij} for the polarization operator

E_{ij}=\sum_{a=1}^n x_{ia}\frac{\partial}{\partial x_{ja}}.
The Capelli identity states that the following differential operators, expressed as determinants, are equal:
\begin{vmatrix} E_{11}+n-1 & \cdots & E_{1,n-1} & E_{1n}\\ \vdots & \ddots & \vdots & \vdots\\ E_{n-1,1} & \cdots & E_{n-1,n-1}+1 & E_{n-1,n}\\ E_{n1} & \cdots & E_{n,n-1} & E_{nn}+0 \end{vmatrix} = \begin{vmatrix} x_{11} & \cdots & x_{1n}\\ \vdots & \ddots & \vdots\\ x_{n1} & \cdots & x_{nn} \end{vmatrix} \begin{vmatrix} \frac{\partial}{\partial x_{11}} & \cdots & \frac{\partial}{\partial x_{1n}}\\ \vdots & \ddots & \vdots\\ \frac{\partial}{\partial x_{n1}} & \cdots & \frac{\partial}{\partial x_{nn}} \end{vmatrix}.
Both sides are differential operators. The determinant on the left has non-commuting entries, and is expanded with all terms preserving their "left to right" order. Such a determinant is often called a column-determinant, since it can be obtained by the column expansion of the determinant starting from the first column. It can be formally written as
\det(A)=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\,A_{\sigma(1),1}A_{\sigma(2),2}\cdots A_{\sigma(n),n},
where in the product first come the elements from the first column, then from the second and so on. The determinant on the far right is Cayley's omega process, and the one on the left is the Capelli determinant.
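A column-determinant can be computed mechanically by exactly this recipe. Below is a minimal sketch in Python, assuming sympy is available; the helper names (`column_det`, `sign`) are ours. Noncommutative symbols stand in for the operator entries, so the order of factors is preserved.

```python
# Sketch: a column-determinant for matrices with non-commuting entries.
# Factors are multiplied column by column, preserving left-to-right order.
from itertools import permutations
import sympy as sp

def sign(perm):
    """Sign of a permutation given in array form, via inversion count."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def column_det(A):
    """sum over sigma of sgn(sigma) * A[sigma(1),1] * ... * A[sigma(n),n]."""
    n = A.rows
    total = sp.S.Zero
    for perm in permutations(range(n)):
        term = sp.S.One
        for col in range(n):
            term = term * A[perm[col], col]  # factor order is preserved
        total += sign(perm) * term
    return total

a, b, c, d = sp.symbols('a b c d', commutative=False)
A = sp.Matrix([[a, b], [c, d]])
print(column_det(A))
```

For the 2 x 2 example the result equals a*d - c*b: the entry from the first column comes first in each product, so it is not the same element as a*d - b*c when the entries do not commute.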
The operators E_{ij} can be written in matrix form:

E=XD^t,

where E, X, D are the matrices with entries E_{ij}, x_{ij}, \frac{\partial}{\partial x_{ij}} respectively. If all these elements commuted, the identity \det(E)=\det(X)\det(D^t) would follow at once from the multiplicativity \det(AB)=\det(A)\det(B). The Capelli identity shows that, despite the noncommutativity, this formula survives at the small price of the correction (n-i)\delta_{ij} on the left-hand side.
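Since both sides are differential operators, the identity can be sanity-checked by applying them to a test polynomial. Here is a minimal sketch for n = 2 in Python, assuming sympy is available; all helper names are ours.

```python
# Sketch: verify the n = 2 Capelli identity on a test polynomial.
# Left side: column-determinant |E11+1, E12; E21, E22| as an operator.
# Right side: multiplication by det(X) after Cayley's omega process det(D).
import sympy as sp

x11, x12, x21, x22 = sp.symbols('x11 x12 x21 x22')
xs = {(1, 1): x11, (1, 2): x12, (2, 1): x21, (2, 2): x22}

def E(i, j, p):
    """Polarization operator E_ij = sum_a x_ia d/dx_ja applied to p."""
    return sum(xs[i, a] * sp.diff(p, xs[j, a]) for a in (1, 2))

def capelli_lhs(p):
    # (E11 + 1) E22 - E21 E12, with the rightmost operator acting first
    return E(1, 1, E(2, 2, p)) + E(2, 2, p) - E(2, 1, E(1, 2, p))

def capelli_rhs(p):
    omega = sp.diff(p, x11, x22) - sp.diff(p, x12, x21)  # Cayley's omega
    return (x11 * x22 - x12 * x21) * omega

p = x11**2 * x22 + 3 * x12 * x21 * x11 + x22**3
print(sp.expand(capelli_lhs(p) - capelli_rhs(p)))  # 0
```

The difference expands to zero for every polynomial p, as the identity asserts; changing the correction term (e.g. dropping the +1) breaks the equality.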
Consider the following slightly more general context. Suppose that n and m are two integers and x_{ij} for i = 1, ..., n, j = 1, ..., m are commuting variables. Redefine E_{ij} by almost the same formula:

E_{ij}=\sum_{a=1}^m x_{ia}\frac{\partial}{\partial x_{ja}},

with the only difference that the summation index a now ranges from 1 to m. One can easily see that such operators satisfy the commutation relations

[E_{ij},E_{kl}]=\delta_{jk}E_{il}-\delta_{il}E_{kj}.
Here [a,b] denotes the commutator ab-ba. These are the same commutation relations satisfied by the matrices e_{ij} which have zeros everywhere except the position (i,j), where 1 stands (the e_{ij} are sometimes called matrix units). Hence we conclude that the correspondence \pi: e_{ij}\mapsto E_{ij} defines a representation of the Lie algebra \mathfrak{gl}_n in the vector space of polynomials in the x_{ij}.
It is especially instructive to consider the special case m = 1; in this case we have x_{i1}, abbreviated as x_i:

E_{ij}=x_i\frac{\partial}{\partial x_j}.

In particular, for the polynomials of first degree it is seen that

E_{ij}x_k=\delta_{jk}x_i.
Hence the action of E_{ij} restricted to the space of first-order polynomials coincides with the action of the matrix units e_{ij} on vectors in \mathbb{C}^n. So, from the representation theory point of view, the subspace of polynomials of first degree is a subrepresentation of the Lie algebra \mathfrak{gl}_n, which we identify with the standard representation in \mathbb{C}^n. Going further, one sees that the differential operators E_{ij} preserve the degree of polynomials, and hence the polynomials of each fixed degree k form a subrepresentation of the Lie algebra \mathfrak{gl}_n, which can be identified with the symmetric tensor power S^k\mathbb{C}^n of the standard representation \mathbb{C}^n.
One can also easily identify the highest weight structure of these representations. The monomial x_1^k is a highest weight vector: E_{ij}x_1^k=0 for i<j. Its highest weight equals (k,0,\dots,0), since E_{ii}x_1^k=k\,\delta_{i1}\,x_1^k.
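Both relations can be checked mechanically. A small sketch in Python, assuming sympy is available, in the realization E_{ij} = x_i d/dx_j (case m = 1), here for n = 3, k = 4; names are ours.

```python
# Sketch: x1^k is a highest weight vector of weight (k, 0, ..., 0)
# in the realization E_ij = x_i d/dx_j (case m = 1; here n = 3, k = 4).
import sympy as sp

n, k = 3, 4
x = sp.symbols('x1:4')  # x1, x2, x3

def E(i, j, p):
    return x[i - 1] * sp.diff(p, x[j - 1])

v = x[0]**k

# the raising operators E_ij, i < j, annihilate v
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        assert E(i, j, v) == 0

# the diagonal operators E_ii read off the weight (k, 0, 0)
weights = [sp.simplify(E(i, i, v) / v) for i in range(1, n + 1)]
print(weights)  # [4, 0, 0]
```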
Such a representation is sometimes called a bosonic representation of \mathfrak{gl}_n. Similar formulas E_{ij}=\psi_i\frac{\partial}{\partial\psi_j} define the so-called fermionic representation, where the \psi_i are anti-commuting variables. Again the polynomials of degree k form an irreducible subrepresentation, now isomorphic to \Lambda^k\mathbb{C}^n, the anti-symmetric tensor power of \mathbb{C}^n.
Let us return to the Capelli identity. One can prove the following:

\det(E+(n-i)\delta_{ij})=0,\qquad n>1.

The motivation for this equality is the following: consider E^c_{ij}=x_ip_j for commuting variables x_i, p_j. The matrix E^c has rank one, hence its determinant equals zero. The elements of the matrix E are defined by similar formulas, but they do not commute. The Capelli identity shows that the commutative identity \det(E^c)=0 can be preserved for the small price of correcting the matrix E by (n-i)\delta_{ij}.
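For n = 2, m = 1 this vanishing can be verified directly on a test polynomial. A minimal sketch in Python, assuming sympy is available; names are ours.

```python
# Sketch: for m = 1 < n = 2 the corrected Capelli determinant
# (E11 + 1)E22 - E21 E12 annihilates every polynomial in x1, x2.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = {1: x1, 2: x2}

def E(i, j, p):
    return x[i] * sp.diff(p, x[j])  # rank-one case E_ij = x_i d/dx_j

def capelli_det(p):
    # column-determinant of E + diag(1, 0), rightmost operator acting first
    return E(1, 1, E(2, 2, p)) + E(2, 2, p) - E(2, 1, E(1, 2, p))

p = x1**3 * x2 + 5 * x1 * x2**2 + x2**4
print(sp.expand(capelli_det(p)))  # 0
```

On a monomial x1^a x2^b the three terms contribute ab + b - b(a+1) = 0, which is why the operator kills every polynomial.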
Let us also mention that a similar identity can be given for the characteristic polynomial:

\det(t+E+(n-i)\delta_{ij})=t^{[n]}+\mathrm{Tr}(E)\,t^{[n-1]},

where t^{[k]}=t(t+1)\cdots(t+k-1).
Consider an example for n = 2.
\begin{align} \begin{vmatrix}t+E_{11}+1&E_{12}\\ E_{21}&t+E_{22}\end{vmatrix} &=\begin{vmatrix}t+x_1\partial_1+1&x_1\partial_2\\ x_2\partial_1&t+x_2\partial_2\end{vmatrix}\\[8pt] &=(t+x_1\partial_1+1)(t+x_2\partial_2)-x_2\partial_1x_1\partial_2\\[6pt] &=t(t+1)+t(x_1\partial_1+x_2\partial_2)+x_1\partial_1x_2\partial_2+x_2\partial_2-x_2\partial_1x_1\partial_2 \end{align}
Using \partial_1x_1=x_1\partial_1+1, \partial_1x_2=x_2\partial_1, x_1x_2=x_2x_1, we see that this is equal to:
\begin{align} &{} t(t+1)+t(x_1\partial_1+x_2\partial_2)+x_2x_1\partial_1\partial_2+x_2\partial_2-x_2x_1\partial_1\partial_2-x_2\partial_2\\[8pt] &=t(t+1)+t(x_1\partial_1+x_2\partial_2)=t^{[2]}+t\,\mathrm{Tr}(E). \end{align}
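This computation can be double-checked by applying both sides to a test polynomial, with t a commuting parameter. A sketch in Python, assuming sympy is available; names are ours.

```python
# Sketch: for n = 2, m = 1 verify, as operators on polynomials,
#   det(t + E + (n-i)delta_ij) = t(t+1) + t Tr(E),
# with the left side a column-determinant.
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
x = {1: x1, 2: x2}

def E(i, j, p):
    return x[i] * sp.diff(p, x[j])

def lhs(p):
    # (t + E11 + 1)(t + E22) - E21 E12, rightmost factors acting first
    q = t * p + E(2, 2, p)
    return t * q + E(1, 1, q) + q - E(2, 1, E(1, 2, p))

def rhs(p):
    return t * (t + 1) * p + t * (E(1, 1, p) + E(2, 2, p))

p = x1**2 * x2 + x2**3
print(sp.expand(lhs(p) - rhs(p)))  # 0
```

On a monomial x1^a x2^b both sides reduce to (t^2 + t(a+b+1)) times the monomial, so the difference vanishes identically.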
An interesting property of the Capelli determinant is that it commutes with all the operators E_{ij}; in other words the element \det(E+(n-i)\delta_{ij}) belongs to the center of U(\mathfrak{gl}_n):

[E_{ij},\det(E+(n-i)\delta_{ij})]=0.

This can be generalized as follows.
Consider any elements E_{ij} in any ring, such that they satisfy the commutation relation

[E_{ij},E_{kl}]=\delta_{jk}E_{il}-\delta_{il}E_{kj}

(so they can be the differential operators above, the matrix units e_{ij}, or any other elements), and define elements C_k by

\det(t+E+(n-i)\delta_{ij})=t^{[n]}+\sum_{k=n-1,\dots,0}t^{[k]}C_k,

where t^{[k]}=t(t+1)\cdots(t+k-1). Then the elements C_k commute with all the elements E_{ij}, and moreover:

C_k=\sum_{I=(i_1<i_2<\cdots<i_k)}\det(E+(k-i)\delta_{ij})_{II},

i.e. they are sums of principal minors of the matrix E, modulo the Capelli correction +(k-i)\delta_{ij}.
These statements are interrelated with the Capelli identity, as will be discussed below, and similarly to it a direct short proof does not seem to exist, despite the simplicity of the formulation.
The universal enveloping algebra U(\mathfrak{gl}_n) can be defined as the algebra generated by the E_{ij} subject to the relations

[E_{ij},E_{kl}]=\delta_{jk}E_{il}-\delta_{il}E_{kj}

alone. The proposition above shows that the elements C_k belong to the center of U(\mathfrak{gl}_n). It can be shown that they actually are free generators of the center of U(\mathfrak{gl}_n).
Consider an example for n = 2.
\begin{align} \begin{vmatrix}t+E_{11}+1&E_{12}\\ E_{21}&t+E_{22}\end{vmatrix} &=(t+E_{11}+1)(t+E_{22})-E_{21}E_{12}\\ &=t(t+1)+t(E_{11}+E_{22})+E_{11}E_{22}-E_{21}E_{12}+E_{22}.\end{align}
It is immediate to check that the element (E_{11}+E_{22}) commutes with all the operators E_{ij}. (It corresponds to the obvious fact that the identity matrix commutes with all other matrices.) More instructive is to check the commutativity of the second coefficient with the E_{ij}. Let us do it for E_{12}:

\begin{align} [E_{12},E_{11}E_{22}-E_{21}E_{12}+E_{22}] &=[E_{12},E_{11}]E_{22}+E_{11}[E_{12},E_{22}]-[E_{12},E_{21}]E_{12}-E_{21}[E_{12},E_{12}]+[E_{12},E_{22}]\\ &=-E_{12}E_{22}+E_{11}E_{12}-(E_{11}-E_{22})E_{12}-0+E_{12}\\ &=-E_{12}E_{22}+E_{22}E_{12}+E_{12}=-E_{12}+E_{12}=0.\end{align}
We see that the naive determinant E_{11}E_{22}-E_{21}E_{12} does not commute with E_{12}, and the Capelli correction +E_{22} is essential to ensure centrality.
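The centrality of E_{11}E_{22}-E_{21}E_{12}+E_{22} can also be tested in the bosonic realization, by commuting its image with each E_{ij} on a test polynomial. This is a representation-level sanity check rather than a proof in U(\mathfrak{gl}_2); a sketch in Python, assuming sympy is available, names ours.

```python
# Sketch: check [E_ij, E11 E22 - E21 E12 + E22] = 0 in the bosonic
# realization E_ij = sum_a x_ia d/dx_ja, applied to a test polynomial.
import sympy as sp

x11, x12, x21, x22 = sp.symbols('x11 x12 x21 x22')
xs = {(1, 1): x11, (1, 2): x12, (2, 1): x21, (2, 2): x22}

def E(i, j, p):
    return sum(xs[i, a] * sp.diff(p, xs[j, a]) for a in (1, 2))

def C0(p):
    # candidate central element, rightmost operator acting first
    return E(1, 1, E(2, 2, p)) - E(2, 1, E(1, 2, p)) + E(2, 2, p)

p = x11 * x22**2 + x12**3 * x21
for i in (1, 2):
    for j in (1, 2):
        comm = E(i, j, C0(p)) - C0(E(i, j, p))
        assert sp.expand(comm) == 0
print("C0 commutes with all E_ij")
```

Dropping the +E22 correction from `C0` makes the assertion for (i, j) = (1, 2) fail, matching the hand computation above.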
Let us return to the general case:

E_{ij}=\sum_{a=1}^m x_{ia}\frac{\partial}{\partial x_{ja}},

for arbitrary n and m. The definition of the operators E_{ij} can be written in matrix form: E=XD^t, where E is the n\times n matrix with elements E_{ij}, X is the n\times m matrix with elements x_{ij}, and D is the n\times m matrix with elements \frac{\partial}{\partial x_{ij}}.
Capelli–Cauchy–Binet identities
For general m, the matrix E is given as the product of two rectangular matrices: X and the transpose of D. If all elements of these matrices commuted, then the determinant of E could be expressed by the so-called Cauchy–Binet formula via minors of X and D. An analogue of this formula also exists for the matrix E, again for the same mild price of the correction E \to (E+(n-i)\delta_{ij}):
\det(E+(n-i)\delta_{ij})=\sum_{I=(1\le i_1<i_2<\cdots<i_n\le m)}\det(X_I)\det(D^t_I).
In particular (similar to the commutative case): if m<n, then

\det(E+(n-i)\delta_{ij})=0.
Let us also mention that, similarly to the commutative case (see Cauchy–Binet for minors), one can express not only the determinant of E, but also its minors via minors of X and D:
\det(E+(s-i)\delta_{ij})_{KL}=\sum_{I=(1\le i_1<i_2<\cdots<i_s\le m)}\det(X_{KI})\det(D^t_{IL}).
Here K = (k_1 < k_2 < ... < k_s), L = (l_1 < l_2 < ... < l_s) are arbitrary multi-indexes; as usual M_{KL} denotes the submatrix of M formed by the elements M_{k_al_b}. Pay attention that the Capelli correction now contains s, not n as in the previous formula.
As a corollary of this formula and the one for the characteristic polynomial in the previous section let us mention the following:
\det(t+E+(n-i)\delta_{ij})=t^{[n]}+\sum_{k=n-1,\dots,0}t^{[k]}\sum_{I,J}\det(X_{IJ})\det(D^t_{JI}),

where I=(1\le i_1<\cdots<i_k\le n), J=(1\le j_1<\cdots<j_k\le n). This formula is similar to the commutative case, modulo the correction +(n-i)\delta_{ij} on the left-hand side and the factorials t^{[k]} in place of the powers t^k on the right-hand side.
Relation to dual pairs
Modern interest in these identities has been much stimulated by Roger Howe, who considered them in his theory of reductive dual pairs (also known as Howe duality). To make first contact with these ideas, let us look more precisely at the operators E_{ij}. On the polynomials of degree 1 they act by

E_{ij}x_{kl}=x_{il}\delta_{jk},

so the index l is preserved. From the representation theory point of view, the polynomials of first degree can thus be identified with the direct sum \mathbb{C}^n\oplus\cdots\oplus\mathbb{C}^n, where the l-th subspace (l=1,\dots,m) is spanned by the x_{il}, i=1,\dots,n. Let us look again at this vector space:

\mathbb{C}^n\oplus\cdots\oplus\mathbb{C}^n=\mathbb{C}^n\otimes\mathbb{C}^m.
Such a point of view gives the first hint of symmetry between m and n. To deepen this idea, consider:

E^{\mathrm{dual}}_{ij}=\sum_{a=1}^n x_{ai}\frac{\partial}{\partial x_{aj}}.
These operators are given by the same formulas as the E_{ij}, modulo the relabeling i\leftrightarrow j of the index positions, hence by the same arguments we can conclude that the E^{\mathrm{dual}}_{ij} define a representation of the Lie algebra \mathfrak{gl}_m in the vector space of polynomials in the x_{ij}. Moreover, the operators E^{\mathrm{dual}}_{ij} commute with the operators E_{kl}. Indeed, the Lie group GL_n\times GL_m acts naturally on the vector space \mathbb{C}^n\otimes\mathbb{C}^m, hence on the polynomials on this space, and the corresponding action of the Lie algebra \mathfrak{gl}_n\times\mathfrak{gl}_m is given by the differential operators E_{ij} and E^{\mathrm{dual}}_{ij} respectively; this explains the commutativity of these operators.
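The commutativity [E_{ij}, E^{\mathrm{dual}}_{kl}] = 0 is easy to test symbolically. A sketch in Python for n = 2, m = 3, assuming sympy is available; names are ours.

```python
# Sketch: the gl_n operators E_ij commute with the gl_m operators
# Edual_kl on polynomials in x_ab; here n = 2, m = 3.
import sympy as sp

n, m = 2, 3
xs = {(i, j): sp.Symbol(f'x{i}{j}')
      for i in range(1, n + 1) for j in range(1, m + 1)}

def E(i, j, p):       # gl_n action: sum over the column index a = 1..m
    return sum(xs[i, a] * sp.diff(p, xs[j, a]) for a in range(1, m + 1))

def Edual(i, j, p):   # gl_m action: sum over the row index a = 1..n
    return sum(xs[a, i] * sp.diff(p, xs[a, j]) for a in range(1, n + 1))

p = xs[1, 1]**2 * xs[2, 3] + xs[1, 2] * xs[2, 2] * xs[2, 1]

for i in range(1, n + 1):
    for j in range(1, n + 1):
        for k in range(1, m + 1):
            for l in range(1, m + 1):
                comm = E(i, j, Edual(k, l, p)) - Edual(k, l, E(i, j, p))
                assert sp.expand(comm) == 0
print("all commutators vanish")
```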
The following deeper properties actually hold true. The only differential operators which commute with all the E_{ij} are polynomials in the E^{\mathrm{dual}}_{ij}, and vice versa. The decomposition of the vector space of polynomials into a direct sum of tensor products of irreducible representations of GL_n and GL_m can be given as follows:

\mathbb{C}[x_{ij}]=S(\mathbb{C}^n\otimes\mathbb{C}^m)=\sum_D\rho_n^D\otimes\rho_m^{D'}.

The summands are indexed by the Young diagrams D, and the representations \rho^D are mutually non-isomorphic; the diagram {D} determines {D'} and vice versa. In particular, the representation of the big group GL_n\times GL_m is multiplicity free, that is, each irreducible representation occurs only once. One easily observes the strong similarity to Schur–Weyl duality.
Much work has been done on the identity and its generalizations. Approximately two dozen mathematicians and physicists contributed to the subject, to name a few: R. Howe, B. Kostant, Fields medalist A. Okounkov, A. Sokal, D. Zeilberger.
It seems that, historically, the first generalizations were obtained by Herbert Westren Turnbull in 1948, who found the generalization for the case of symmetric matrices (see for modern treatments).
The other generalizations can be divided into several patterns. Most of them are based on the Lie algebra point of view; such generalizations consist of replacing the Lie algebra \mathfrak{gl}_n by other Lie algebras and their analogues.
Consider symmetric matrices
X=\begin{vmatrix} x_{11}&x_{12}&x_{13}&\cdots&x_{1n}\\ x_{12}&x_{22}&x_{23}&\cdots&x_{2n}\\ x_{13}&x_{23}&x_{33}&\cdots&x_{3n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ x_{1n}&x_{2n}&x_{3n}&\cdots&x_{nn} \end{vmatrix},\qquad D=\begin{vmatrix} 2\frac{\partial}{\partial x_{11}}&\frac{\partial}{\partial x_{12}}&\frac{\partial}{\partial x_{13}}&\cdots&\frac{\partial}{\partial x_{1n}}\\[6pt] \frac{\partial}{\partial x_{12}}&2\frac{\partial}{\partial x_{22}}&\frac{\partial}{\partial x_{23}}&\cdots&\frac{\partial}{\partial x_{2n}}\\[6pt] \frac{\partial}{\partial x_{13}}&\frac{\partial}{\partial x_{23}}&2\frac{\partial}{\partial x_{33}}&\cdots&\frac{\partial}{\partial x_{3n}}\\[6pt] \vdots&\vdots&\vdots&\ddots&\vdots\\ \frac{\partial}{\partial x_{1n}}&\frac{\partial}{\partial x_{2n}}&\frac{\partial}{\partial x_{3n}}&\cdots&2\frac{\partial}{\partial x_{nn}} \end{vmatrix}.
Herbert Westren Turnbull in 1948 discovered the following identity:
\det(XD+(n-i)\delta_{ij})=\det(X)\det(D).
A combinatorial proof can be found in the paper, and another proof together with amusing generalizations in the paper; see also the discussion below.
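For n = 2, Turnbull's identity can be verified on a test polynomial in the independent variables x11, x12, x22 (with x21 = x12). A sketch in Python, assuming sympy is available; names are ours.

```python
# Sketch: verify Turnbull's symmetric identity for n = 2:
#   det(XD + (n-i)delta_ij) = det(X) det(D),
# with X symmetric and D the matrix of derivations (factor 2 on the diagonal).
import sympy as sp

x11, x12, x22 = sp.symbols('x11 x12 x22')

def XD(i, j, p):
    """Entry (i, j) of the operator matrix XD applied to p."""
    X = {(1, 1): x11, (1, 2): x12, (2, 1): x12, (2, 2): x22}
    def D(a, b, q):
        c = 2 if a == b else 1  # diagonal derivations carry the factor 2
        v = {(1, 1): x11, (1, 2): x12, (2, 1): x12, (2, 2): x22}[a, b]
        return c * sp.diff(q, v)
    return sum(X[i, a] * D(a, j, p) for a in (1, 2))

def lhs(p):
    # column-determinant of XD + diag(1, 0): (XD_11 + 1) XD_22 - XD_21 XD_12
    return XD(1, 1, XD(2, 2, p)) + XD(2, 2, p) - XD(2, 1, XD(1, 2, p))

def rhs(p):
    detD = 4 * sp.diff(p, x11, x22) - sp.diff(p, x12, x12)
    return (x11 * x22 - x12**2) * detD

p = x11 * x22 + x12**2 * x11
print(sp.expand(lhs(p) - rhs(p)))  # 0
```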
Consider antisymmetric matrices
X=\begin{vmatrix} 0&x_{12}&x_{13}&\cdots&x_{1n}\\ -x_{12}&0&x_{23}&\cdots&x_{2n}\\ -x_{13}&-x_{23}&0&\cdots&x_{3n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ -x_{1n}&-x_{2n}&-x_{3n}&\cdots&0 \end{vmatrix},\qquad D=\begin{vmatrix} 0&\frac{\partial}{\partial x_{12}}&\frac{\partial}{\partial x_{13}}&\cdots&\frac{\partial}{\partial x_{1n}}\\[6pt] -\frac{\partial}{\partial x_{12}}&0&\frac{\partial}{\partial x_{23}}&\cdots&\frac{\partial}{\partial x_{2n}}\\[6pt] -\frac{\partial}{\partial x_{13}}&-\frac{\partial}{\partial x_{23}}&0&\cdots&\frac{\partial}{\partial x_{3n}}\\[6pt] \vdots&\vdots&\vdots&\ddots&\vdots\\[6pt] -\frac{\partial}{\partial x_{1n}}&-\frac{\partial}{\partial x_{2n}}&-\frac{\partial}{\partial x_{3n}}&\cdots&0 \end{vmatrix}.
Then
\det(XD+(n-i)\delta_{ij})=\det(X)\det(D).
Consider two matrices M and Y over some associative ring which satisfy the following condition:

[M_{ij},Y_{kl}]=-\delta_{jk}Q_{il}

for some elements Q_{il}. Or, "in words": elements in the j-th column of M commute with elements in the k-th row of Y unless j = k, and in this case the commutator of the elements M_{ik} and Y_{kl} depends only on i, l, but does not depend on k.
Assume that M is a Manin matrix (the simplest example is a matrix with commuting elements).
Then for the square matrix case
\det(MY+Q\,\mathrm{diag}(n-1,n-2,\dots,1,0))=\det(M)\det(Y).

Here Q is the matrix with elements Q_{il}, and \mathrm{diag}(n-1,n-2,\dots,1,0) means the diagonal matrix with the elements n-1, n-2, ..., 1, 0 on the diagonal.
See proposition 1.2' formula (1.15) page 4; our Y is the transpose of their B.
Obviously the original Capelli identity is a particular case of this identity. Moreover, from this identity one can see that in the original Capelli identity one can consider elements

\frac{\partial}{\partial x_{ij}}+f_{ij}(x_{11},\dots,x_{kl},\dots)

for arbitrary functions f_{ij}, and the identity still will be true.
Consider matrices X and D as in Capelli's identity, i.e. with elements x_{ij} and \partial_{ij} at position (i,j). Let z be another formal variable (commuting with x), and let A and B be matrices whose elements are complex numbers.
\det\left(\frac{\partial}{\partial z}-A-X\frac{1}{z-B}D^t\right) ={\det}^{\text{calculate as if all commute; put all }x\text{ and }z\text{ on the left, all derivations on the right}}\left(\frac{\partial}{\partial z}-A-X\frac{1}{z-B}D^t\right).

Here the first determinant is understood (as always) as the column-determinant of a matrix with non-commutative entries. The determinant on the right is calculated as if all the elements commute, putting all x and z on the left and the derivations on the right. (Such a recipe is called a Wick ordering in quantum mechanics.)
The matrix

L(z)=A+X\frac{1}{z-B}D^t

is the Lax matrix for the Gaudin quantum integrable spin chain system. D. Talalaev solved the long-standing problem of the explicit solution for the full set of the quantum commuting conservation laws for the Gaudin model, discovering the following theorem.
Consider

\det\left(\frac{\partial}{\partial z}-L(z)\right)=\sum_{i=0}^n H_i(z)\left(\frac{\partial}{\partial z}\right)^i.

Then for all i, j, z, w:

[H_i(z),H_j(w)]=0.
The original Capelli identity is a statement about determinants. Later, analogous identities were found for permanents, immanants and traces. Based on the combinatorial approach, the paper by S.G. Williamson was one of the first results in this direction.
Consider the antisymmetric matrices X and D with elements xij and corresponding derivations, as in the case of the HUKS identity above.
Then
\mathrm{perm}(X^tD-(n-i)\delta_{ij})={\mathrm{perm}}^{\text{calculate as if all commute; put all }x\text{ on the left, all derivations on the right}}(X^tD).
Let us cite: "...is stated without proof at the end of Turnbull's paper". The authors themselves follow Turnbull; at the very end of their paper they write:

"Since the proof of this last identity is very similar to the proof of Turnbull's symmetric analog (with a slight twist), we leave it as an instructive and pleasant exercise for the reader."
The identity is deeply analyzed in the paper.