Lie's theorem

In mathematics, specifically the theory of Lie algebras, Lie's theorem states that, over an algebraically closed field of characteristic zero, if \pi\colon \mathfrak{g} \to \mathfrak{gl}(V) is a finite-dimensional representation of a solvable Lie algebra, then there is a flag V = V_0 \supset V_1 \supset \cdots \supset V_n = 0 of invariant subspaces of \pi(\mathfrak{g}) with \operatorname{codim} V_i = i, meaning that \pi(X)(V_i) \subseteq V_i for each X \in \mathfrak{g} and each i.

Put another way, the theorem says there is a basis for V such that all linear transformations in \pi(\mathfrak{g}) are represented by upper triangular matrices. This generalizes the result of Frobenius that commuting matrices are simultaneously upper triangularizable, since commuting matrices generate an abelian Lie algebra, which is a fortiori solvable.
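As an illustrative sketch of the Frobenius special case, the following pure-Python check verifies that two commuting matrices (A and B are hypothetical examples chosen for the demo, with B = I + 2A) share an eigenvector and are simultaneously triangularized, in fact diagonalized, by a common change of basis:

```python
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Two commuting (non-triangular) matrices: B = I + 2A, so [A, B] = 0.
A = [[0, 1], [1, 0]]
B = [[1, 2], [2, 1]]
print(matmul(A, B) == matmul(B, A))  # True: A and B commute

# v = (1, 1) is a common eigenvector: Av = 1*v, Bv = 3*v.
v = [1, 1]
print(matvec(A, v), matvec(B, v))  # [1, 1] [3, 3]

# Change of basis P whose columns are the common eigenvectors (1, 1), (1, -1)
# diagonalizes (in particular upper-triangularizes) both A and B at once.
P = [[1, 1], [1, -1]]
Pinv = [[Fraction(1, 2), Fraction(1, 2)], [Fraction(1, 2), Fraction(-1, 2)]]
print(matmul(Pinv, matmul(A, P)) == [[1, 0], [0, -1]])  # True: diagonal
print(matmul(Pinv, matmul(B, P)) == [[3, 0], [0, -1]])  # True: diagonal
```

The same change of basis works for both matrices precisely because they commute; for a general solvable (non-abelian) algebra only simultaneous triangularization, not diagonalization, is available.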

A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see the Consequences section below). Also, to each flag in a finite-dimensional vector space V there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that \pi(\mathfrak{g}) is contained in some Borel subalgebra of \mathfrak{gl}(V).

Counter-example

For algebraically closed fields of characteristic p > 0, Lie's theorem holds provided the dimension of the representation is less than p (see the proof below), but it can fail for representations of dimension p. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, x, and d/dx acting on the p-dimensional vector space k[x]/(x^p), which has no common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the p-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
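A minimal pure-Python sketch of this counter-example (with p = 5 as an illustrative choice) verifies the commutation relation [d/dx, x] = 1 on k[x]/(x^p) and the absence of a common eigenvector:

```python
p = 5  # any prime; the representation has dimension p

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % p
             for j in range(p)] for i in range(p)]

def sub(A, B):
    return [[(A[i][j] - B[i][j]) % p for j in range(p)] for i in range(p)]

# Basis e_i = x^i of k[x]/(x^p), with k = F_p.
# X = multiplication by x, D = d/dx, I = identity (the action of 1).
X = [[1 if i == j + 1 else 0 for j in range(p)] for i in range(p)]
D = [[(j % p) if i == j - 1 else 0 for j in range(p)] for i in range(p)]
I = [[1 if i == j else 0 for j in range(p)] for i in range(p)]

# The span of {I, X, D} is a Lie algebra: [D, X] = I (mod p), and I is
# central, so the algebra is nilpotent (hence solvable).
comm = sub(matmul(D, X), matmul(X, D))
print(comm == I)  # True

# No common eigenvector: X is nilpotent, so any eigenvector of X lies in
# ker X = span(x^{p-1}); but D(x^{p-1}) = (p-1) x^{p-2} is not a multiple
# of x^{p-1}, so x^{p-1} is not an eigenvector of D.
v = [0] * p
v[p - 1] = 1  # coordinates of x^{p-1}
Xv = [sum(X[i][j] * v[j] for j in range(p)) % p for i in range(p)]
Dv = [sum(D[i][j] * v[j] for j in range(p)) % p for i in range(p)]
print(Xv == [0] * p)  # True: x^{p-1} spans ker X
print(Dv)             # [0, 0, 0, 4, 0], i.e. D v = (p-1) x^{p-2}
```

Note that the relation [D, X] = I is only possible in dimension p over characteristic p: in characteristic zero a commutator of finite matrices can never equal the identity, since its trace is zero.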

Proof

The proof is by induction on the dimension of \mathfrak{g} and consists of several steps. (Note: the structure of the proof is very similar to that of the proof of Engel's theorem.) The base case is trivial, so we assume the dimension of \mathfrak{g} is positive. We also assume V is not zero. For simplicity, we write Xv = \pi(X)(v).

Step 1: Observe that the theorem is equivalent to the statement that \pi(\mathfrak{g}) admits a common eigenvector; i.e., there is a nonzero vector v \in V that is an eigenvector of \pi(X) for every X \in \mathfrak{g}. Indeed, the theorem says in particular that a nonzero vector spanning V_{n-1} is a common eigenvector for all the linear transformations in \pi(\mathfrak{g}). Conversely, if v is a common eigenvector, take V_{n-1} to be its span; then \pi(\mathfrak{g}) admits a common eigenvector in the quotient V/V_{n-1}, and the argument repeats.

Step 2: Find an ideal \mathfrak{h} of codimension one in \mathfrak{g}.

Let D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}] be the derived algebra. Since \mathfrak{g} is solvable and has positive dimension, D\mathfrak{g} \ne \mathfrak{g}, and so the quotient \mathfrak{g}/D\mathfrak{g} is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one; by the ideal correspondence, this corresponds to an ideal \mathfrak{h} of codimension one in \mathfrak{g}.

Step 3: There exists some linear functional \lambda \in \mathfrak{h}^* such that the weight space V_\lambda = \{ v \in V \mid Xv = \lambda(X)v \text{ for all } X \in \mathfrak{h} \} is nonzero. This follows from the inductive hypothesis applied to \mathfrak{h} (and it is easy to check that the eigenvalues determine a linear functional).

Step 4: V_\lambda is a \mathfrak{g}-invariant subspace. (Note this step proves a general fact and does not involve solvability.)

Let Y \in \mathfrak{g} and v \in V_\lambda; we need to prove Yv \in V_\lambda. If v = 0 this is obvious, so assume v \ne 0 and define recursively v_0 = v, v_{i+1} = Yv_i. Let U = \operatorname{span}\{ v_i \mid i \ge 0 \} and let \ell \in \mathbb{N}_0 be the largest integer such that v_0, \ldots, v_\ell are linearly independent. We first prove that these vectors generate U, so that \alpha = (v_0, \ldots, v_\ell) is a basis of U. Indeed, assume for contradiction that this is not the case, and let m \in \mathbb{N}_0 be the smallest integer with v_m \notin \langle v_0, \ldots, v_\ell \rangle; then clearly m \ge \ell + 1. Since v_0, \ldots, v_{\ell+1} are linearly dependent, v_{\ell+1} is a linear combination of v_0, \ldots, v_\ell. Applying the map Y^{m-\ell-1}, it follows that v_m is a linear combination of v_{m-\ell-1}, \ldots, v_{m-1}. Since, by the minimality of m, each of these vectors is a linear combination of v_0, \ldots, v_\ell, so is v_m, and we get the desired contradiction.

Next, we prove by induction that for every n \in \mathbb{N}_0 and X \in \mathfrak{h} there exist elements a_{0,n,X}, \ldots, a_{n,n,X} of the base field such that a_{n,n,X} = \lambda(X) and

Xv_n = \sum_{i=0}^{n} a_{i,n,X} v_i.

The n = 0 case is straightforward, since Xv_0 = \lambda(X)v_0. Now assume that we have proved the claim for some n \in \mathbb{N}_0 and all elements of \mathfrak{h}, and let X \in \mathfrak{h}. Since \mathfrak{h} is an ideal, [X, Y] \in \mathfrak{h}, and thus

Xv_{n+1} = Y(Xv_n) + [X,Y]v_n = Y\sum_{i=0}^{n} a_{i,n,X} v_i + \sum_{i=0}^{n} a_{i,n,[X,Y]} v_i = a_{0,n,[X,Y]} v_0 + \sum_{i=1}^{n} \left(a_{i-1,n,X} + a_{i,n,[X,Y]}\right) v_i + \lambda(X)v_{n+1},

and the induction step follows. This implies that for every X \in \mathfrak{h} the subspace U is an invariant subspace of X, and the matrix of the restricted map \pi(X)|_U in the basis \alpha is upper triangular with all diagonal entries equal to \lambda(X); hence \operatorname{tr}(\pi(X)|_U) = \dim(U)\lambda(X). Applying this with [X,Y] \in \mathfrak{h} in place of X gives \operatorname{tr}(\pi([X,Y])|_U) = \dim(U)\lambda([X,Y]). On the other hand, U is also obviously an invariant subspace of Y, and so

\operatorname{tr}(\pi([X,Y])|_U) = \operatorname{tr}([\pi(X),\pi(Y)]|_U) = \operatorname{tr}([\pi(X)|_U, \pi(Y)|_U]) = 0,

since commutators have zero trace. Thus \dim(U)\lambda([X,Y]) = 0. Since \dim(U) > 0 is invertible in the base field (because of the assumption on its characteristic), \lambda([X,Y]) = 0, and so

X(Yv) = Y(Xv) + [X,Y]v = Y(\lambda(X)v) + \lambda([X,Y])v = \lambda(X)(Yv),

which shows Yv \in V_\lambda.

Step 5: Finish the proof by finding a common eigenvector.

Write \mathfrak{g} = \mathfrak{h} + L, where L is a one-dimensional vector subspace. Since the base field is algebraically closed and V_\lambda is a nonzero finite-dimensional invariant subspace, there exists an eigenvector in V_\lambda for some (and thus every) nonzero element of L. Since that vector is also an eigenvector for each element of \mathfrak{h}, the proof is complete. \square
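For a concrete instance of the conclusion, the following pure-Python sketch (the matrices are a hypothetical example) realizes the two-dimensional non-abelian solvable Lie algebra with basis H, E and bracket [H, E] = E inside \mathfrak{gl}(2), and exhibits a common eigenvector, hence an invariant flag:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A))] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Basis of the two-dimensional non-abelian solvable Lie algebra in gl(2):
H = [[1, 0], [0, 0]]
E = [[0, 1], [0, 0]]

# Bracket relation [H, E] = E; the derived algebra span(E) is abelian,
# so the algebra is solvable.
print(sub(matmul(H, E), matmul(E, H)) == E)  # True

# e1 = (1, 0) is a common eigenvector: H e1 = 1*e1, E e1 = 0*e1, so the
# flag k^2 > span(e1) > 0 is invariant and both matrices are upper
# triangular in the standard basis, as Lie's theorem predicts.
e1 = [1, 0]
print(matvec(H, e1), matvec(E, e1))  # [1, 0] [0, 0]
```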

Consequences

Lie's theorem applies in particular to the adjoint representation \operatorname{ad}\colon \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g}) of a (finite-dimensional) solvable Lie algebra \mathfrak{g} over an algebraically closed field of characteristic zero; thus, one can choose a basis of \mathfrak{g} with respect to which \operatorname{ad}(\mathfrak{g}) consists of upper triangular matrices. It follows easily that for each x, y \in \mathfrak{g}, \operatorname{ad}([x,y]) = [\operatorname{ad}(x), \operatorname{ad}(y)] has a diagonal consisting of zeros; i.e., \operatorname{ad}([x,y]) is a strictly upper triangular matrix. This implies that [\mathfrak{g}, \mathfrak{g}] is a nilpotent Lie algebra. Moreover, if the base field is not algebraically closed, solvability and nilpotency of a Lie algebra are unaffected by extending the base field to its algebraic closure. Hence, one concludes the statement (the other implication being obvious):

A finite-dimensional Lie algebra \mathfrak{g} over a field of characteristic zero is solvable if and only if the derived algebra D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}] is nilpotent.
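The key matrix fact used above, that the commutator of two upper triangular matrices is strictly upper triangular (and in particular traceless), can be spot-checked with a small pure-Python sketch (the two matrices are arbitrary illustrative choices):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

# Two arbitrary upper triangular 3x3 matrices.
A = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
B = [[7, 8, 9],
     [0, 1, 2],
     [0, 0, 3]]

C = commutator(A, B)
# The commutator is strictly upper triangular: the diagonal entries of AB
# and BA are the same products of diagonals, so they cancel.
print(all(C[i][j] == 0 for i in range(3) for j in range(3) if j <= i))  # True
# In particular its trace vanishes.
print(sum(C[i][i] for i in range(3)))  # 0
```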

Lie's theorem also establishes one direction in Cartan's criterion for solvability:

If V is a finite-dimensional vector space over a field of characteristic zero and \mathfrak{g} \subseteq \mathfrak{gl}(V) a Lie subalgebra, then \mathfrak{g} is solvable if and only if \operatorname{tr}(XY) = 0 for every X \in \mathfrak{g} and Y \in [\mathfrak{g}, \mathfrak{g}].

Indeed, as above, after extending the base field, the forward implication is seen easily. (The converse is more difficult to prove.)
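The easy direction can be spot-checked for the Borel subalgebra of upper triangular matrices: if X is upper triangular and Y strictly upper triangular (as every element of the derived algebra is, after triangularization), then XY is strictly upper triangular and so \operatorname{tr}(XY) = 0. A small pure-Python sketch with arbitrary illustrative matrices:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# X: an upper triangular matrix (element of the solvable algebra);
# Y: a strictly upper triangular matrix (element of its derived algebra).
X = [[2, 1, 4],
     [0, 3, 5],
     [0, 0, 7]]
Y = [[0, 6, 8],
     [0, 0, 9],
     [0, 0, 0]]

XY = matmul(X, Y)
# (XY)[i][i] = sum_k X[i][k] * Y[k][i] vanishes, since X[i][k] != 0 needs
# k >= i while Y[k][i] != 0 needs k < i.
print([XY[i][i] for i in range(3)])  # [0, 0, 0]
print(sum(XY[i][i] for i in range(3)))  # 0
```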

Lie's theorem (for various V) is equivalent to the statement:

For a solvable Lie algebra \mathfrak{g} over an algebraically closed field of characteristic zero, each finite-dimensional simple \mathfrak{g}-module (i.e., irreducible as a representation) has dimension one.

Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional \mathfrak{g}-module V, let V_1 be a maximal proper \mathfrak{g}-submodule (which exists by finiteness of the dimension). Then, by maximality, V/V_1 is simple and thus one-dimensional. Induction on the dimension now finishes the proof.

The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true over any base field, since in this case every vector subspace is a Lie subalgebra.

Here is another quite useful application:

Let \mathfrak{g} be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical \operatorname{rad}(\mathfrak{g}). Then each finite-dimensional simple representation \pi\colon \mathfrak{g} \to \mathfrak{gl}(V) is the tensor product of a simple representation of \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) with a one-dimensional representation of \mathfrak{g} (i.e., a linear functional vanishing on Lie brackets).

By Lie's theorem applied to \operatorname{rad}(\mathfrak{g}), we can find a linear functional \lambda of \operatorname{rad}(\mathfrak{g}) such that the weight space V_\lambda of \operatorname{rad}(\mathfrak{g}) is nonzero. By Step 4 of the proof of Lie's theorem, V_\lambda is also a \mathfrak{g}-module; since V is simple, V = V_\lambda. In particular, for each X \in \operatorname{rad}(\mathfrak{g}), \operatorname{tr}(\pi(X)) = \dim(V)\lambda(X). Extend \lambda to a linear functional on \mathfrak{g} that vanishes on [\mathfrak{g}, \mathfrak{g}]; \lambda is then a one-dimensional representation of \mathfrak{g}. Now, (\pi, V) \simeq ((\pi, V) \otimes (-\lambda)) \otimes \lambda. Since \pi coincides with \lambda on \operatorname{rad}(\mathfrak{g}), the representation V \otimes (-\lambda) is trivial on \operatorname{rad}(\mathfrak{g}) and thus is the restriction of a (simple) representation of \mathfrak{g}/\operatorname{rad}(\mathfrak{g}). \square
