Derivative of the exponential map

In the theory of Lie groups, the exponential map is a map from the Lie algebra $\mathfrak g$ of a Lie group $G$ into $G$. In case $G$ is a matrix Lie group, the exponential map reduces to the matrix exponential. The exponential map, denoted $\exp\colon \mathfrak g \to G$, is analytic and has as such a derivative $\frac{d}{dt}\exp(X(t))\colon T\mathfrak g \to TG$, where $X(t)$ is a path in the Lie algebra, and a closely related differential $d\exp\colon T\mathfrak g \to TG$.[1]

The formula for $d\exp$ was first proved by Friedrich Schur (1891). It was later elaborated by Henri Poincaré (1899) in the context of the problem of expressing Lie group multiplication using Lie algebraic terms. It is also sometimes known as Duhamel's formula.

The formula is important both in pure and applied mathematics. It enters into proofs of theorems such as the Baker–Campbell–Hausdorff formula, and it is used frequently in physics, for example in quantum field theory, as in the Magnus expansion in perturbation theory, and in lattice gauge theory.

Throughout, the notations $\exp(X)$ and $e^X$ will be used interchangeably to denote the exponential given an argument, except where, as noted, the notations have dedicated distinct meanings. The calculus-style notation $e^X$ is preferred here for better readability in equations. On the other hand, the $\exp$-style is sometimes more convenient for inline equations, and is necessary on the rare occasions when there is a real distinction to be made.

Statement

The derivative of the exponential map is given by[2]

$$\frac{d}{dt}e^{X(t)} = e^{X(t)}\,\frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X}\,\frac{dX(t)}{dt}. \qquad (1)$$

Here $X = X(t)$ is a $C^1$ path in the Lie algebra with derivative $X'(t) = dX(t)/dt$, $\operatorname{ad}_X$ is the linear map $\operatorname{ad}_X Y = [X, Y]$, and the fraction denotes the formal power series

$$\frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X} = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)!}(\operatorname{ad}_X)^k. \qquad (2)$$
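
The formula lends itself to a direct numerical check. The following is a minimal sketch in Python (the path $X(t) = A + tB$ and the test matrices are arbitrary illustrations, not from the source): it compares a central finite difference of $t \mapsto e^{X(t)}$ with the right-hand side of (1), evaluating the fraction by truncating the series (2).

```python
import math

import numpy as np
from scipy.linalg import expm

# Arbitrary test data (not from the source): a path X(t) = A + t*B in gl(3, R).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

def ad(X, Y):
    """Adjoint action ad_X Y = [X, Y]."""
    return X @ Y - Y @ X

def frac_ad(X, Y, terms=30):
    """(1 - e^{-ad_X})/ad_X applied to Y, via the power series (2)."""
    out, term = np.zeros_like(Y), Y.copy()
    for k in range(terms):
        out += (-1) ** k / math.factorial(k + 1) * term
        term = ad(X, term)  # term is now ad_X^{k+1} Y
    return out

t, h = 0.7, 1e-6
Xt, dX = A + t * B, B
lhs = (expm(A + (t + h) * B) - expm(A + (t - h) * B)) / (2 * h)  # d/dt e^{X(t)}
rhs = expm(Xt) @ frac_ad(Xt, dX)                                 # RHS of (1)
print(np.max(np.abs(lhs - rhs)))  # agrees to finite-difference accuracy (~1e-8)
```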
Explanation

To compute the differential $d\exp$ of $\exp$ at $X$, $d\exp_X\colon T_X\mathfrak g \to T_{\exp(X)}G$, the standard recipe[1]

$$d\exp_X Y = \left.\frac{d}{dt}e^{Z(t)}\right|_{t=0}, \qquad Z(0) = X,\quad Z'(0) = Y,$$

is employed. With $Z(t) = X + tY$ the result[2]

$$d\exp_X Y = e^X\,\frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X}\,Y \qquad (3)$$

follows immediately from (1). In particular, $d\exp_0\colon T_0\mathfrak g \to T_eG$ is the identity because $T_X\mathfrak g \simeq \mathfrak g$ (since $\mathfrak g$ is a vector space) and $T_eG \simeq \mathfrak g$.
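
A similar sketch checks (3) directly via the recipe with $Z(t) = X + tY$, and confirms that $d\exp_0$ is the identity (the test matrices are again arbitrary):

```python
import math

import numpy as np
from scipy.linalg import expm

def ad(X, Y):
    """Adjoint action ad_X Y = [X, Y]."""
    return X @ Y - Y @ X

def dexp(X, Y, terms=30):
    """d exp_X Y = e^X (1 - e^{-ad_X})/ad_X Y, with the fraction series-truncated."""
    out, term = np.zeros_like(Y), Y.copy()
    for k in range(terms):
        out += (-1) ** k / math.factorial(k + 1) * term
        term = ad(X, term)
    return expm(X) @ out

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Standard recipe: d/dt e^{Z(t)} at t = 0 with Z(t) = X + tY.
h = 1e-6
fd = (expm(X + h * Y) - expm(X - h * Y)) / (2 * h)
print(np.max(np.abs(fd - dexp(X, Y))))            # small: (3) holds
print(np.allclose(dexp(np.zeros((3, 3)), Y), Y))  # True: d exp_0 is the identity
```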

Proof

The proof given below assumes a matrix Lie group. This means that the exponential mapping from the Lie algebra to the matrix Lie group is given by the usual power series, i.e. matrix exponentiation. The conclusion of the proof still holds in the general case, provided each occurrence of $\exp$ is correctly interpreted. See comments on the general case below.

The outline of the proof makes use of the technique of differentiation with respect to $s$ of the parametrized expression

$$\Gamma(s, t) = e^{-sX(t)}\frac{\partial}{\partial t}e^{sX(t)}$$

to obtain a first order differential equation for $\Gamma$ which can then be solved by direct integration in $s$. The solution is then $e^{-X}\frac{d}{dt}e^{X} = \Gamma(1, t)$.

Lemma

Let $\operatorname{Ad}$ denote the adjoint action of the group on its Lie algebra. The action is given by $\operatorname{Ad}_A Z = AZA^{-1}$ for $A \in G$, $Z \in \mathfrak g$. A frequently useful relationship between $\operatorname{Ad}$ and $\operatorname{ad}$ is given by[3] [4]

$$\operatorname{Ad}_{e^X} = e^{\operatorname{ad}_X}, \qquad X \in \mathfrak g. \qquad (4)$$
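
For matrix groups, where $\operatorname{Ad}_A Z = AZA^{-1}$ and $\operatorname{ad}_X Y = XY - YX$, the lemma is easy to verify numerically; a minimal sketch with arbitrary test matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X, Z = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Left-hand side of (4) applied to Z: Ad_{e^X} Z = e^X Z e^{-X}.
lhs = expm(X) @ Z @ expm(-X)

# Right-hand side: e^{ad_X} Z = sum_k ad_X^k Z / k!, series-truncated.
rhs, term = np.zeros((3, 3)), Z.copy()
for k in range(30):
    rhs += term
    term = (X @ term - term @ X) / (k + 1)   # ad_X^{k+1} Z / (k+1)!

print(np.max(np.abs(lhs - rhs)))   # ~1e-12
```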

Proof

Using the product rule twice one finds,

$$\frac{\partial\Gamma}{\partial s} = e^{-sX}(-X)\frac{\partial}{\partial t}e^{sX(t)} + e^{-sX}\frac{\partial}{\partial t}\left[X(t)\,e^{sX(t)}\right] = e^{-sX}\frac{dX}{dt}e^{sX}.$$

Then one observes that

$$\frac{\partial\Gamma}{\partial s} = \operatorname{Ad}_{e^{-sX}}X' = e^{-\operatorname{ad}_{sX}}X',$$

by (4) above. Integration yields

$$\Gamma(1, t) = e^{-X(t)}\frac{\partial}{\partial t}e^{X(t)} = \int_0^1 \frac{\partial\Gamma}{\partial s}\,ds = \int_0^1 e^{-\operatorname{ad}_{sX}}X'\,ds.$$
Using the formal power series to expand the exponential, integrating term by term, and finally recognizing (2),

$$\Gamma(1, t) = \int_0^1 \sum_{k=0}^{\infty} \frac{(-1)^k s^k}{k!}(\operatorname{ad}_X)^k\frac{dX}{dt}\,ds = \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)!}(\operatorname{ad}_X)^k\frac{dX}{dt} = \frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X}\frac{dX}{dt},$$

and the result follows. The proof as presented here is essentially the one given in the reference cited; a proof with a more algebraic touch can be found in the note below.[5]
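
The key intermediate identity, $\Gamma(1, t) = \int_0^1 e^{-\operatorname{ad}_{sX}}X'\,ds$, can also be checked by quadrature; a sketch (midpoint rule, with the arbitrary path $X(t) = A + tB$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
t, h = 0.3, 1e-6
Xt, dX = A + t * B, B

# Gamma(1, t) = e^{-X(t)} d/dt e^{X(t)}, via a central finite difference.
gamma = expm(-Xt) @ (expm(A + (t + h) * B) - expm(A + (t - h) * B)) / (2 * h)

def exp_ad(X, Y, terms=30):
    """e^{ad_X} Y by its power series."""
    out, term = np.zeros_like(Y), Y.copy()
    for k in range(terms):
        out += term
        term = (X @ term - term @ X) / (k + 1)
    return out

# Midpoint-rule quadrature of int_0^1 e^{-ad_{sX}} X' ds.
n = 2000
integral = sum(exp_ad(-(i + 0.5) / n * Xt, dX) for i in range(n)) / n
print(np.max(np.abs(gamma - integral)))   # small: the identity holds
```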

Comments on the general case

The formula in the general case is given by[6]

$$\frac{d}{dt}\exp(C(t)) = \exp(C)\,\phi(-\operatorname{ad}(C))\,C'(t),$$

where[7]

$$\phi(z) = \frac{e^z - 1}{z} = 1 + \frac{1}{2!}z + \frac{1}{3!}z^2 + \cdots,$$

which formally reduces to

$$\frac{d}{dt}\exp(C(t)) = \exp(C)\,\frac{1 - e^{-\operatorname{ad}_C}}{\operatorname{ad}_C}\,\frac{dC(t)}{dt}.$$

Here the $\exp$-notation is used for the exponential mapping of the Lie algebra and the calculus-style notation in the fraction indicates the usual formal series expansion. For more information and two full proofs in the general case, see the freely available reference.

A direct formal argument

An immediate way to see what the answer must be, provided it exists, is the following. Existence needs to be proved separately in each case. By direct differentiation of the standard limit definition of the exponential, and exchanging the order of differentiation and limit,

$$\begin{aligned}
\frac{d}{dt}e^{X(t)} &= \lim_{N\to\infty}\frac{d}{dt}\left(1 + \frac{X(t)}{N}\right)^{N} \\
&= \lim_{N\to\infty}\sum_{k=1}^{N}\left(1 + \frac{X(t)}{N}\right)^{N-k}\frac{1}{N}\frac{dX(t)}{dt}\left(1 + \frac{X(t)}{N}\right)^{k-1},
\end{aligned}$$

where each factor owes its place to the non-commutativity of $X(t)$ and $X'(t)$.

Dividing the unit interval into $N$ sections $\Delta s = \frac{1}{N}$ (since the sum indices are integers) and letting $N \to \infty$, $\frac{k}{N} \to s$, $\sum \to \int$ yields

$$\begin{aligned}
\frac{d}{dt}e^{X(t)} &= \int_0^1 e^{(1-s)X}X'e^{sX}\,ds = e^X\int_0^1 \operatorname{Ad}_{e^{-sX}}X'\,ds \\
&= e^X\int_0^1 e^{-\operatorname{ad}_{sX}}\,ds\,X' = e^X\,\frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X}\,\frac{dX}{dt}.
\end{aligned}$$
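
The first equality in the last display, $\frac{d}{dt}e^{X(t)} = \int_0^1 e^{(1-s)X}X'e^{sX}\,ds$, is a form of Duhamel's formula and can be tested directly (a sketch with an arbitrary path and midpoint-rule quadrature):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
t, h = 0.5, 1e-6
Xt, dX = A + t * B, B

# Left side: d/dt e^{X(t)} by a central finite difference.
lhs = (expm(A + (t + h) * B) - expm(A + (t - h) * B)) / (2 * h)

# Right side: midpoint-rule quadrature of int_0^1 e^{(1-s)X} X' e^{sX} ds.
n = 2000
s = (np.arange(n) + 0.5) / n
rhs = sum(expm((1 - si) * Xt) @ dX @ expm(si * Xt) for si in s) / n

print(np.max(np.abs(lhs - rhs)))   # small
```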

Applications

Local behavior of the exponential map

The inverse function theorem together with the derivative of the exponential map provides information about the local behavior of $\exp$. Any $C^k$ ($0 \le k \le \infty$, or analytic) map $f$ between vector spaces (here first considering matrix Lie groups) has a $C^k$ inverse such that $f$ is a $C^k$ bijection in an open set around a point $x$ in the domain, provided $df_x$ is invertible. From (3) it follows that this will happen precisely when

$$\frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X}$$

is invertible. This, in turn, happens when the eigenvalues of this operator are all nonzero. The eigenvalues of $\frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X}$ are related to those of $\operatorname{ad}_X$ as follows. If $g$ is an analytic function of a complex variable expressed in a power series such that $g(U)$ for a matrix $U$ converges, then the eigenvalues of $g(U)$ will be $g(\lambda_{ij})$, where the $\lambda_{ij}$ are the eigenvalues of $U$ (the double subscript is made clear below).[8] In the present case with $g(U) = \frac{1 - e^{-U}}{U}$ and $U = \operatorname{ad}_X$, the eigenvalues of $\frac{1 - e^{-\operatorname{ad}_X}}{\operatorname{ad}_X}$ are

$$\frac{1 - e^{-\lambda_{ij}}}{\lambda_{ij}},$$

where the $\lambda_{ij}$ are the eigenvalues of $\operatorname{ad}_X$. Putting $\frac{1 - e^{-\lambda_{ij}}}{\lambda_{ij}} = 0$ one sees that $d\exp$ is invertible precisely when

$$\lambda_{ij} \ne k2\pi i, \qquad k = \pm 1, \pm 2, \ldots.$$

The eigenvalues of $\operatorname{ad}_X$ are, in turn, related to those of $X$. Let the eigenvalues of $X$ be $\lambda_1, \ldots, \lambda_n$. Fix an ordered basis $e_i$ of the underlying vector space $V$ such that $X$ is lower triangular. Then

$$Xe_i = \lambda_i e_i + \cdots,$$

with the remaining terms multiples of $e_m$ with $m > i$. Let $E_{ij}$ be the corresponding basis for matrix space, i.e. $(E_{ij})_{nm} = \delta_{ni}\delta_{mj}$. Order this basis such that $E_{ij} < E_{nm}$ if $i - j < n - m$. One checks that the action of $\operatorname{ad}_X$ is given by

$$\operatorname{ad}_X E_{ij} = (\lambda_i - \lambda_j)E_{ij} + \cdots \equiv \lambda_{ij}E_{ij} + \cdots,$$

with the remaining terms multiples of later basis elements. This means that $\operatorname{ad}_X$ is lower triangular with its eigenvalues $\lambda_{ij} = \lambda_i - \lambda_j$ on the diagonal. The conclusion is that $d\exp_X$ is invertible, hence $\exp$ is a local bianalytical bijection around $X$, when the eigenvalues of $X$ satisfy[9] [10]

$$\lambda_i - \lambda_j \ne k2\pi i, \qquad k = \pm 1, \pm 2, \ldots, \quad 1 \le i, j \le n = \dim V.$$

In particular, in the case of matrix Lie groups, it follows, since $d\exp_0$ is invertible, by the inverse function theorem that $\exp$ is a bi-analytic bijection in a neighborhood of $0$ in matrix space. Furthermore, $\exp$ is a bi-analytic bijection from a neighborhood of $0 \in \mathfrak g$ to a neighborhood of $e \in G$.[11] The same conclusion holds for general Lie groups using the manifold version of the inverse function theorem.

It also follows from the implicit function theorem that $d\exp_X$ itself is invertible for $X$ sufficiently small.[12]
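
In matrix terms the criterion is easy to test: realize $\operatorname{ad}_X$ as an $n^2 \times n^2$ matrix via Kronecker products, whose eigenvalues are the differences $\lambda_i - \lambda_j$. A sketch (the matrix is an arbitrary example; for generic $X$ no difference lands on $2\pi ik$):

```python
import numpy as np

n = 3
rng = np.random.default_rng(5)
X = rng.standard_normal((n, n))
I = np.eye(n)

# ad_X on matrix space: vec([X, Y]) = (X (x) I - I (x) X^T) vec(Y), row-major vec.
adX = np.kron(X, I) - np.kron(I, X.T)

lam = np.linalg.eigvals(X)
diffs = np.array([li - lj for li in lam for lj in lam])

# The eigenvalues of ad_X are exactly the differences lambda_i - lambda_j.
print(np.allclose(np.sort_complex(np.linalg.eigvals(adX)),
                  np.sort_complex(diffs)))          # True (up to rounding)

# d exp_X fails to be invertible iff some difference equals 2*pi*i*k, k != 0.
k = diffs.imag / (2 * np.pi)
singular = np.any((np.abs(diffs.real) < 1e-9)
                  & (np.abs(k - np.round(k)) < 1e-9)
                  & (np.round(np.abs(k)) >= 1))
print("d exp_X singular:", bool(singular))          # False for generic X
```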

Derivation of a Baker–Campbell–Hausdorff formula

See main article: Baker–Campbell–Hausdorff formula. If $Z(t)$ is defined such that

$$e^{Z(t)} = e^X e^{tY},$$

an expression for $Z(1) = \log(\exp X \exp Y)$, the Baker–Campbell–Hausdorff formula, can be derived from the above formula,

$$\exp(-Z(t))\frac{d}{dt}\exp(Z(t)) = \frac{1 - e^{-\operatorname{ad}_Z}}{\operatorname{ad}_Z}Z'(t).$$

Its left-hand side is easily seen to equal $Y$. Thus,

$$Y = \frac{1 - e^{-\operatorname{ad}_Z}}{\operatorname{ad}_Z}Z'(t),$$

and hence, formally,[13] [14]

$$Z'(t) = \frac{\operatorname{ad}_Z}{1 - e^{-\operatorname{ad}_Z}}Y \equiv \psi\left(e^{\operatorname{ad}_Z}\right)Y, \qquad \psi(w) = \frac{w\log w}{w - 1} = 1 + \sum_{m=1}^{\infty} \frac{(-1)^{m+1}}{m(m+1)}(w - 1)^m, \quad \|w - 1\| < 1.$$

However, using the relationship between $\operatorname{Ad}$ and $\operatorname{ad}$ given by (4), it is straightforward to further see that

$$e^{\operatorname{ad}_Z} = e^{\operatorname{ad}_X}e^{t\operatorname{ad}_Y},$$
and hence

$$Z'(t) = \psi\left(e^{\operatorname{ad}_X}e^{t\operatorname{ad}_Y}\right)Y.$$

Putting this into the form of an integral in t from 0 to 1 yields,

$$Z(1) = \log(\exp X \exp Y) = X + \left(\int_0^1 \psi\left(e^{\operatorname{ad}_X}\,e^{t\operatorname{ad}_Y}\right)dt\right)Y,$$

an integral formula for $Z(1)$ that is more tractable in practice than the explicit Dynkin's series formula due to the simplicity of the series expansion of $\psi$. Note that this expression consists of $X + Y$ and nested commutators thereof with $X$ or $Y$. A textbook proof along these lines can be found in the cited references.
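
The integral formula can be evaluated numerically for small matrices, where every series involved converges; the sketch below (arbitrary test data) realizes $\operatorname{ad}$ as a $9\times 9$ matrix, evaluates $\psi$ by its series, integrates by the midpoint rule, and compares with $\log(e^Xe^Y)$ computed by SciPy's logm:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(6)
X, Y = 0.03 * rng.standard_normal((3, 3)), 0.03 * rng.standard_normal((3, 3))
I = np.eye(3)

def admat(A):
    """ad_A as a 9x9 matrix acting on row-major vec(Y)."""
    return np.kron(A, I) - np.kron(I, A.T)

def psi(W, terms=60):
    """psi(w) = w log w/(w - 1) = 1 + sum_m (-1)^{m+1}/(m(m+1)) (w - 1)^m."""
    E = np.eye(W.shape[0])
    out, term = E.copy(), E.copy()
    for m in range(1, terms):
        term = term @ (W - E)
        out += (-1) ** (m + 1) / (m * (m + 1)) * term
    return out

# Z(1) = X + (int_0^1 psi(e^{ad_X} e^{t ad_Y}) dt) Y, by the midpoint rule.
n = 400
integral = sum(psi(expm(admat(X)) @ expm(((i + 0.5) / n) * admat(Y)))
               for i in range(n)) / n
Z = X + (integral @ Y.flatten()).reshape(3, 3)

print(np.max(np.abs(Z - logm(expm(X) @ expm(Y)))))   # small
```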

Derivation of Dynkin's series formula

Dynkin's formula mentioned above may also be derived analogously, starting from the parametric extension

$$e^{Z(t)} = e^{tX}e^{tY},$$

whence

$$e^{-Z(t)}\frac{de^{Z(t)}}{dt} = e^{-t\operatorname{ad}_Y}X + Y,$$

so that, using the above general formula,

$$Z' = \frac{\operatorname{ad}_Z}{1 - e^{-\operatorname{ad}_Z}}\left(e^{-t\operatorname{ad}_Y}X + Y\right) = \frac{\operatorname{ad}_Z}{e^{\operatorname{ad}_Z} - 1}\left(X + e^{t\operatorname{ad}_X}Y\right).$$

Since, however,

$$\operatorname{ad}_Z = \log\left(\exp\left(\operatorname{ad}_Z\right)\right) = \log\left(1 + \left(\exp\left(\operatorname{ad}_Z\right) - 1\right)\right) = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\left(\exp\left(\operatorname{ad}_Z\right) - 1\right)^{n}, \qquad \|\operatorname{ad}_Z\| < \log 2,$$

the last step by virtue of the Mercator series expansion, it follows that

$$Z' = \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\left(e^{t\operatorname{ad}_X}e^{t\operatorname{ad}_Y} - 1\right)^{n-1}\left(X + e^{t\operatorname{ad}_X}Y\right) \qquad (5)$$

and, thus, integrating,

$$Z(1) = \int_0^1 \frac{dZ(t)}{dt}\,dt = \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\int_0^1 \left(e^{t\operatorname{ad}_X}e^{t\operatorname{ad}_Y} - 1\right)^{n-1}\left(X + e^{t\operatorname{ad}_X}Y\right)dt.$$

It is at this point evident that the qualitative statement of the BCH formula holds, namely $Z$ lies in the Lie algebra generated by $X$ and $Y$ and is expressible as a series in repeated brackets (A). For each $k$, terms for each partition thereof are organized inside the integral $\int_0^1 t^{k-1}\,dt = \frac{1}{k}$. The resulting Dynkin's formula is then

$$\log(\exp X \exp Y) = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\sum_{\substack{i_r, j_r \ge 0\\ i_r + j_r > 0\\ 1 \le r \le k}} \frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}\right]}{\left(\textstyle\sum_{r=1}^{k}(i_r + j_r)\right)\,i_1!j_1!\cdots i_k!j_k!}$$

(for the bracket notation, see the combinatoric details below). For a similar proof with detailed series expansions, see the cited reference.
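
Equation (5) itself can be spot-checked numerically: compute $Z(t) = \log(e^{tX}e^{tY})$ with SciPy's logm, differentiate by a central difference, and compare with the truncated series on the right-hand side (small arbitrary matrices keep every series inside its domain of convergence):

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(7)
X, Y = 0.03 * rng.standard_normal((3, 3)), 0.03 * rng.standard_normal((3, 3))
I, I9 = np.eye(3), np.eye(9)

def admat(A):
    """ad_A as a 9x9 matrix acting on row-major vec(Y)."""
    return np.kron(A, I) - np.kron(I, A.T)

# Left side: Z'(t) from Z(t) = log(e^{tX} e^{tY}), by a central finite difference.
t, h = 0.6, 1e-5
Zdot = (logm(expm((t + h) * X) @ expm((t + h) * Y))
        - logm(expm((t - h) * X) @ expm((t - h) * Y))) / (2 * h)

# Right side of (5), series-truncated.
W = expm(t * admat(X)) @ expm(t * admat(Y))           # e^{t ad_X} e^{t ad_Y}
v = X.flatten() + expm(t * admat(X)) @ Y.flatten()    # X + e^{t ad_X} Y
acc, term = np.zeros(9), v.copy()
for m in range(1, 30):
    acc += (-1) ** (m - 1) / m * term
    term = (W - I9) @ term

print(np.max(np.abs(Zdot - acc.reshape(3, 3))))       # small
```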

Combinatoric details

Change the summation index in (5) to $k = n - 1$ and expand

$$\frac{dZ}{dt} = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}\left(e^{t\operatorname{ad}_X}e^{t\operatorname{ad}_Y} - 1\right)^{k}\left(X + e^{t\operatorname{ad}_X}Y\right) \qquad (97)$$

in a power series. To handle the series expansions simply, consider first $Z = \log\left(e^Xe^Y\right)$. The $\log$-series and the $\exp$-series are given by

$$\log(A) = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}(A - I)^{k} \qquad\text{and}\qquad e^X = \sum_{k=0}^{\infty}\frac{X^k}{k!},$$

respectively. Combining these one obtains

$$Z = \log\left(e^Xe^Y\right) = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\left(\,\sum_{\substack{i,j \ge 0\\ i+j > 0}}\frac{X^iY^j}{i!j!}\right)^{\!k} = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\sum_{\substack{i_r, j_r \ge 0\\ i_r + j_r > 0\\ 1 \le r \le k}}\frac{X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k}}{i_1!j_1!\cdots i_k!j_k!}. \qquad (98)$$

This becomes

$$Z = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\sum_{s \in S_k}\frac{X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k}}{i_1!j_1!\cdots i_k!j_k!}, \qquad (99)$$

where $S_k$ is the set of all sequences $s = (i_1, j_1, \ldots, i_k, j_k)$ of length $2k$ subject to the conditions in (98).

Now substitute $e^{t\operatorname{ad}_X}e^{t\operatorname{ad}_Y}$ for $e^Xe^Y$ in the LHS of (98). Equation (99) then gives

$$\begin{aligned}
\frac{dZ}{dt} = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}\sum_{\substack{s \in S_k\\ i_{k+1} \ge 0}}\Bigg[\, & t^{i_1+j_1+\cdots+i_k+j_k}\,\frac{(\operatorname{ad}_X)^{i_1}(\operatorname{ad}_Y)^{j_1}\cdots(\operatorname{ad}_X)^{i_k}(\operatorname{ad}_Y)^{j_k}}{i_1!j_1!\cdots i_k!j_k!}\,X \\
{}+{} & t^{i_1+j_1+\cdots+i_k+j_k+i_{k+1}}\,\frac{(\operatorname{ad}_X)^{i_1}(\operatorname{ad}_Y)^{j_1}\cdots(\operatorname{ad}_X)^{i_k}(\operatorname{ad}_Y)^{j_k}(\operatorname{ad}_X)^{i_{k+1}}}{i_1!j_1!\cdots i_k!j_k!\,i_{k+1}!}\,Y\Bigg], \qquad i_r, j_r \ge 0,\quad i_r + j_r > 0,\quad 1 \le r \le k,
\end{aligned}$$

or, with a switch of notation (see An explicit Baker–Campbell–Hausdorff formula),

$$\begin{aligned}
\frac{dZ}{dt} = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}\sum_{\substack{s \in S_k\\ i_{k+1} \ge 0}}\Bigg[\, & t^{i_1+j_1+\cdots+i_k+j_k}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}X\right]}{i_1!j_1!\cdots i_k!j_k!} \\
{}+{} & t^{i_1+j_1+\cdots+i_k+j_k+i_{k+1}}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}X^{(i_{k+1})}Y\right]}{i_1!j_1!\cdots i_k!j_k!\,i_{k+1}!}\Bigg], \qquad i_r, j_r \ge 0,\quad i_r + j_r > 0,\quad 1 \le r \le k.
\end{aligned}$$

Note that the summation index for the rightmost $X$ in the second term is denoted $i_{k+1}$, but it is not an element of a sequence $s \in S_k$. Now integrate $Z = Z(1) = \int_0^1 \frac{dZ}{dt}\,dt$, using $\int_0^1 t^{\beta}\,dt = \frac{1}{\beta+1}$,

$$\begin{aligned}
Z = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}\sum_{\substack{s \in S_k\\ i_{k+1} \ge 0}}\Bigg[\, & \frac{1}{i_1+j_1+\cdots+i_k+j_k+1}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}X\right]}{i_1!j_1!\cdots i_k!j_k!} \\
{}+{} & \frac{1}{i_1+j_1+\cdots+i_k+j_k+i_{k+1}+1}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}X^{(i_{k+1})}Y\right]}{i_1!j_1!\cdots i_k!j_k!\,i_{k+1}!}\Bigg], \qquad i_r, j_r \ge 0,\quad i_r + j_r > 0,\quad 1 \le r \le k.
\end{aligned}$$

Write this as

$$\begin{aligned}
Z = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}\sum_{\substack{s \in S_k\\ i_{k+1} \ge 0}}\Bigg[\, & \frac{1}{i_1+j_1+\cdots+i_k+j_k+(i_{k+1}{=}1)+(j_{k+1}{=}0)}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}X^{(i_{k+1}=1)}Y^{(j_{k+1}=0)}\right]}{i_1!j_1!\cdots i_k!j_k!\,(i_{k+1}{=}1)!\,(j_{k+1}{=}0)!} \\
{}+{} & \frac{1}{i_1+j_1+\cdots+i_k+j_k+i_{k+1}+(j_{k+1}{=}1)}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}X^{(i_{k+1})}Y^{(j_{k+1}=1)}\right]}{i_1!j_1!\cdots i_k!j_k!\,i_{k+1}!\,(j_{k+1}{=}1)!}\Bigg], \\
& \qquad (i_r, j_r \ge 0,\quad i_r + j_r > 0,\quad 1 \le r \le k).
\end{aligned}$$

This amounts to

$$Z = \sum_{k=0}^{\infty}\frac{(-1)^k}{k+1}\sum_{s \in S_{k+1}}\frac{1}{i_1+j_1+\cdots+i_{k+1}+j_{k+1}}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_{k+1})}Y^{(j_{k+1})}\right]}{i_1!j_1!\cdots i_{k+1}!j_{k+1}!}, \qquad (100)$$

where

$$i_r, j_r \ge 0,\quad i_r + j_r > 0,\quad 1 \le r \le k+1,$$

using the simple observation that $[T, T] = 0$ for all $T$. That is, in (100), the leading term vanishes unless $j_{k+1}$ equals $0$ or $1$, corresponding to the first and second terms in the equation before it. In case $j_{k+1} = 0$, $i_{k+1}$ must equal $1$, else the term vanishes for the same reason ($i_{k+1} = 0$ is not allowed). Finally, shift the index, $k \to k - 1$,

$$Z = \log(\exp X\exp Y) = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\sum_{s \in S_k}\frac{1}{i_1+j_1+\cdots+i_k+j_k}\,\frac{\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}\right]}{i_1!j_1!\cdots i_k!j_k!}.$$

This is Dynkin's formula. The striking similarity with (99) is not accidental: it reflects the Dynkin–Specht–Wever map, underpinning the original, different, derivation of the formula. Namely, if

$$X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k}$$

is expressible as a bracket series, then necessarily[15]

$$X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k} = \frac{1}{i_1+j_1+\cdots+i_k+j_k}\left[X^{(i_1)}Y^{(j_1)}\cdots X^{(i_k)}Y^{(j_k)}\right]. \qquad (B)$$

Putting observation (A) and theorem (B) together yields a concise proof of the explicit BCH formula.
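
As a final consistency check, the lowest-order terms of Dynkin's formula, $Z = X + Y + \tfrac{1}{2}[X, Y] + \cdots$, can be compared against $\log(e^Xe^Y)$; the truncation error is third order in the size of the arbitrary test matrices:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(8)
eps = 1e-3
X, Y = eps * rng.standard_normal((3, 3)), eps * rng.standard_normal((3, 3))

exact = logm(expm(X) @ expm(Y))
dynkin2 = X + Y + 0.5 * (X @ Y - Y @ X)   # Dynkin terms of total degree <= 2

# The first omitted terms are the degree-3 brackets, so the error is O(eps^3).
print(np.max(np.abs(exact - dynkin2)))    # ~1e-9
```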

Notes and References

  1. Appendix on analytic functions.
  2. Theorem 5 Section 1.2
  3. Proposition 3.35
  4. A proof of the identity can be found here. The relationship is simply that between a representation of a Lie group and that of its Lie algebra according to the Lie correspondence, since both $\operatorname{Ad}$ and $\operatorname{ad}$ are representations with $\operatorname{ad} = d\operatorname{Ad}$.
  5. See also the reference from which Hall's proof is taken.
  6. This is equation (1.11).
  7. It holds that

     $$\tau(\log z)\,\phi(-\log z) = 1$$

     for $|z - 1| < 1$, where

     $$\tau(w) = \frac{w}{1 - e^{-w}}.$$

     Here, $\tau$ is the exponential generating function of

     $$(-1)^k b_k,$$

     where $b_k$ are the Bernoulli numbers.
  8. This is seen by choosing a basis for the underlying vector space such that $U$ is triangular, the eigenvalues being the diagonal elements. Then $U^k$ is triangular with diagonal elements $\lambda_i^k$. It follows that the eigenvalues of $g(U)$ are $g(\lambda_i)$. See Lemma 6 in section 1.2.
  9. Proposition 7, section 1.2.
  10. Matrices whose eigenvalues $\lambda$ satisfy $|\operatorname{Im}\lambda| < \pi$ are, under the exponential, in bijection with matrices whose eigenvalues $\mu$ are not on the negative real line or zero. The $\lambda$ and $\mu$ are related by the complex exponential. See Remark 2c in section 1.2.
  11. Corollary 3.44.
  12. Section 1.6.
  13. Section 5.5.
  14. Section 1.2.
  15. Chapter 1.12.2.