Jordan–Chevalley decomposition explained

In mathematics, specifically linear algebra, the Jordan–Chevalley decomposition, named after Camille Jordan and Claude Chevalley, expresses a linear operator in a unique way as the sum of two other linear operators which are simpler to understand. Specifically, one part is potentially diagonalisable and the other is nilpotent. The two parts are polynomials in the operator, which makes them behave nicely in algebraic manipulations.

The decomposition has a short description when the Jordan normal form of the operator is given, but it exists under weaker hypotheses than are needed for the existence of a Jordan normal form. Hence the Jordan–Chevalley decomposition can be seen as a generalisation of the Jordan normal form, which is also reflected in several proofs of it.

It is closely related to the Wedderburn principal theorem about associative algebras, which also leads to several analogues in Lie algebras. Analogues of the Jordan–Chevalley decomposition also exist for elements of linear algebraic groups and Lie groups via a multiplicative reformulation. The decomposition is an important tool in the study of all of these objects, and was developed for this purpose.

In many texts, the potentially diagonalisable part is also characterised as the semisimple part.

Introduction

A basic question in linear algebra is whether an operator on a finite-dimensional vector space can be diagonalised; this is closely related to the eigenvalues of the operator. In several contexts, one may be dealing with many operators which are not diagonalisable. Even over an algebraically closed field, a diagonalisation may not exist. In this situation, the Jordan normal form achieves the best possible result akin to a diagonalisation. For linear operators over a field which is not algebraically closed, there may be no eigenvector at all. This latter point is not the main concern dealt with by the Jordan–Chevalley decomposition. To avoid that problem, one instead considers potentially diagonalisable operators, which are those that admit a diagonalisation over some extension field (or, equivalently, over the algebraic closure of the field under consideration).

The operators which are "the furthest away" from being diagonalisable are the nilpotent operators. An operator (or more generally an element of a ring) $x$ is said to be nilpotent when there is some positive integer $m \geq 1$ such that $x^m = 0$. In several contexts in abstract algebra, the presence of nilpotent elements makes a ring much more complicated to work with. To some extent, this is also the case for linear operators. The Jordan–Chevalley decomposition "separates out" the nilpotent part of an operator which causes it not to be potentially diagonalisable. So, when it exists, the complications introduced by nilpotent operators and their interaction with other operators can be understood using the Jordan–Chevalley decomposition.

Historically, the Jordan–Chevalley decomposition was motivated by the applications to the theory of Lie algebras and linear algebraic groups, as described in sections below.

Decomposition of a linear operator

Let $K$ be a field, $V$ a finite-dimensional vector space over $K$, and $T$ a linear operator on $V$ (equivalently, a matrix with entries from $K$). If the minimal polynomial of $T$ splits over $K$ (for example, if $K$ is algebraically closed), then $T$ has a Jordan normal form $T = SJS^{-1}$. If $D$ is the diagonal part of $J$, let $R = J - D$ be the remaining part. Then $T = SDS^{-1} + SRS^{-1}$ is a decomposition in which $SDS^{-1}$ is diagonalisable and $SRS^{-1}$ is nilpotent. This restatement of the normal form as an additive decomposition not only makes numerical computation more stable, but can also be generalised to cases where the minimal polynomial of $T$ does not split.
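
When the minimal polynomial does split (here over the rationals), this construction can be carried out exactly in a computer algebra system. The following is a minimal SymPy sketch; the example matrix and variable names are illustrative assumptions, not taken from the text.

```python
import sympy as sp

# An upper-triangular example whose minimal polynomial splits over QQ
# but which is not diagonalisable (one Jordan block of size 2 for eigenvalue 3).
T = sp.Matrix([[3, 1, 0],
               [0, 3, 2],
               [0, 0, 4]])

S, J = T.jordan_form()                                # T = S * J * S**-1
D = sp.diag(*[J[i, i] for i in range(J.shape[0])])    # diagonal part of J
R = J - D                                             # remaining (nilpotent) part

diagonalisable_part = S * D * S.inv()
nilpotent_part = S * R * S.inv()

assert T == diagonalisable_part + nilpotent_part
assert diagonalisable_part * nilpotent_part == nilpotent_part * diagonalisable_part
assert nilpotent_part**3 == sp.zeros(3, 3)
```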

If the minimal polynomial of $T$ splits into distinct linear factors, then $T$ is diagonalisable. Therefore, if the minimal polynomial of $T$ is at least separable, then $T$ is potentially diagonalisable. The Jordan–Chevalley decomposition is concerned with the more general case where the minimal polynomial of $T$ is a product of separable polynomials.

Let $x \colon V \to V$ be any linear operator on the finite-dimensional vector space $V$ over the field $K$. A Jordan–Chevalley decomposition of $x$ is an expression of it as a sum

$x = x_s + x_n,$

where $x_s$ is potentially diagonalisable, $x_n$ is nilpotent, and $x_s x_n = x_n x_s$.

Several proofs are discussed in the literature; two arguments are also described below.

If $K$ is a perfect field, then every polynomial is a product of separable polynomials (since every polynomial is a product of its irreducible factors, and these are separable over a perfect field). So in this case, the Jordan–Chevalley decomposition always exists. Moreover, over a perfect field, a polynomial is separable if and only if it is square-free. Therefore an operator is potentially diagonalisable if and only if its minimal polynomial is square-free. In general (over any field), the minimal polynomial of a linear operator is square-free if and only if the operator is semisimple.[1] (In particular, the sum of two commuting semisimple operators is always semisimple over a perfect field. The same statement is not true over general fields.) The property of being semisimple is more relevant than being potentially diagonalisable in most contexts where the Jordan–Chevalley decomposition is applied, such as for Lie algebras. For these reasons, many texts restrict to the case of perfect fields.
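
Over the rationals (a perfect field), the square-free criterion just mentioned can be tested directly: an operator is potentially diagonalisable exactly when the square-free part of its characteristic polynomial already annihilates it. A minimal SymPy sketch; the helper name and example matrices are illustrative assumptions, not from the text.

```python
import sympy as sp

def is_potentially_diagonalisable(M):
    """Over QQ (a perfect field): test whether the square-free part of the
    characteristic polynomial annihilates M, i.e. whether the minimal
    polynomial of M is square-free."""
    t = sp.symbols('t')
    p = M.charpoly(t).as_expr()
    q = sp.quo(p, sp.gcd(p, sp.diff(p, t)), t)   # square-free part of p
    qM = sp.zeros(*M.shape)
    for c in sp.Poly(q, t).all_coeffs():         # Horner evaluation of q at M
        qM = qM * M + c * sp.eye(M.shape[0])
    return qM == sp.zeros(*M.shape)

# A Jordan block with eigenvalue 2 is not potentially diagonalisable,
# while a diagonalisable matrix is.
print(is_potentially_diagonalisable(sp.Matrix([[2, 1], [0, 2]])))  # False
print(is_potentially_diagonalisable(sp.Matrix([[2, 0], [0, 3]])))  # True
```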

Proof of uniqueness and necessity

That $x_s$ and $x_n$ are polynomials in $x$ implies in particular that they commute with any operator that commutes with $x$. This observation underlies the uniqueness proof.

Let $x = x_s + x_n$ be a Jordan–Chevalley decomposition in which $x_s$ and (hence also) $x_n$ are polynomials in $x$. Let $x = x_s' + x_n'$ be any Jordan–Chevalley decomposition. Then $x_s - x_s' = x_n' - x_n$, and $x_s', x_n'$ both commute with $x$, hence with $x_s, x_n$, since these are polynomials in $x$. The sum of commuting nilpotent operators is again nilpotent, and the sum of commuting potentially diagonalisable operators is again potentially diagonalisable (because they are simultaneously diagonalisable over the algebraic closure of $K$). Since the only operator which is both potentially diagonalisable and nilpotent is the zero operator, it follows that $x_s - x_s' = 0 = x_n - x_n'$.

To show that the condition that $x$ have a minimal polynomial which is a product of separable polynomials is necessary, suppose that $x = x_s + x_n$ is some Jordan–Chevalley decomposition. Letting $p$ be the separable minimal polynomial of $x_s$, one can check using the binomial theorem that $p(x_s + x_n)$ can be written as $x_n y$, where $y$ is some polynomial in $x_s$ and $x_n$. Moreover, for some $\ell \geq 1$, $x_n^{\ell} = 0$. Thus $p(x)^{\ell} = x_n^{\ell} y^{\ell} = 0$, and so the minimal polynomial of $x$ must divide $p^{\ell}$. As $p^{\ell}$ is a product of separable polynomials (namely of copies of $p$), so is the minimal polynomial.

Concrete example for non-existence

If the ground field is not perfect, then a Jordan–Chevalley decomposition may not exist, as it is possible that the minimal polynomial is not a product of separable polynomials. The simplest such example is the following. Let $p$ be a prime number, let $k$ be an imperfect field of characteristic $p$ (e.g. $k = \mathbf{F}_p(t)$), and choose $a \in k$ that is not a $p$-th power. Let $V = k[X]/\left(X^p - a\right)^2$, let $x = \overline{X}$ be the image of $X$ in the quotient, and let $T$ be the $k$-linear operator on $V$ given by multiplication by $x$. Note that the minimal polynomial of $T$ is precisely $\left(X^p - a\right)^2$, which is inseparable and a square. By the necessity of the condition for the Jordan–Chevalley decomposition (as shown in the last section), this operator does not have a Jordan–Chevalley decomposition. It can nevertheless be instructive to see concretely why there is not even a decomposition into a semisimple and a nilpotent part.

Note that the $T$-invariant $k$-linear subspaces of $V$ are precisely the ideals of $V$ viewed as a ring, and these correspond to the ideals of $k[X]$ containing $\left(X^p - a\right)^2$. Since $X^p - a$ is irreducible in $k[X]$, the ideals of $V$ are $0$, $V$ and $J = \left(x^p - a\right)V$.

Suppose $T = S + N$ for commuting $k$-linear operators $S$ and $N$ that are respectively semisimple (just over $k$, which is weaker than semisimplicity over an algebraic closure of $k$ and also weaker than being potentially diagonalisable) and nilpotent. Since $S$ and $N$ commute, they each commute with $T = S + N$ and hence each acts $k[x]$-linearly on $V$. Therefore $S$ and $N$ are each given by multiplication by respective elements of $V$, namely $s = S(1)$ and $n = N(1)$, with $s + n = T(1) = x$. Since $N$ is nilpotent, $n$ is nilpotent in $V$, therefore $\overline{n} = 0$ in $V/J$, for $V/J$ is a field. Hence $n \in J$, therefore $n = \left(x^p - a\right)h(x)$ for some polynomial $h(X) \in k[X]$. Also, we see that $n^2 = 0$. Since $k$ is of characteristic $p$, we have $x^p = s^p + n^p = s^p$. On the other hand, since $\overline{x} = \overline{s}$ in $V/J$, we have $h\left(\overline{s}\right) = h\left(\overline{x}\right)$, therefore $h(s) - h(x) \in J$ in $V$. Since $\left(x^p - a\right)J = 0$, we have $\left(x^p - a\right)h(x) = \left(x^p - a\right)h(s)$. Combining these results we get

$x = s + n = s + \left(s^p - a\right)h(s).$

This shows that $s$ generates $V$ as a $k$-algebra, and thus that the $S$-stable $k$-linear subspaces of $V$ are ideals of $V$, i.e. they are $0$, $J$ and $V$. We see that $J$ is an $S$-invariant subspace of $V$ which has no $S$-invariant complement, contrary to the assumption that $S$ is semisimple. Thus, there is no decomposition of $T$ as a sum of commuting $k$-linear operators that are respectively semisimple and nilpotent.

If, instead of the polynomial $\left(X^p - a\right)^2$, the same construction is performed with $X^p - a$, the resulting operator $T$ still does not admit a Jordan–Chevalley decomposition, by the main theorem (its minimal polynomial $X^p - a$ is inseparable, so $T$ is not potentially diagonalisable). However, this $T$ is semisimple, so the trivial decomposition $T = T + 0$ does express $T$ as a sum of a semisimple and a nilpotent operator, both of which are polynomials in $T$.

Elementary proof of existence

This construction is similar to Hensel's lemma in that it uses an algebraic analogue of Taylor's theorem to find an element with a certain algebraic property via a variant of Newton's method.

Let $x$ have minimal polynomial $p$, and assume that this is a product of separable polynomials. This condition is equivalent to demanding that there is some separable polynomial $q$ such that $q \mid p$ and $p \mid q^m$ for some $m \geq 1$. By Bézout's lemma, there are polynomials $u$ and $v$ such that $uq + vq' = 1$, where $q'$ denotes the derivative of $q$. This can be used to define a recursion

$x_{n+1} = x_n - v(x_n)q(x_n)$, starting with $x_0 = x$.

Letting $\mathfrak{X}$ be the algebra of operators which are polynomials in $x$, it can be checked by induction that for all $n$:

  1. $x_n \in \mathfrak{X}$, because in each step a polynomial is applied;
  2. $x_n - x \in q(x)\mathfrak{X}$, because $x_{n+1} - x = (x_{n+1} - x_n) + (x_n - x)$ and both terms are in $q(x)\mathfrak{X}$ by the induction hypothesis;
  3. $q(x_n) \in q(x)^{2^n}\mathfrak{X}$, because $q(x_{n+1}) = q(x_n) + q'(x_n)(x_{n+1} - x_n) + (x_{n+1} - x_n)^2 h$ for some $h \in \mathfrak{X}$ (by the algebraic version of Taylor's theorem). By the definition of $x_{n+1}$ as well as of $u$ and $v$, this simplifies to $q(x_{n+1}) = q(x_n)^2\left(u(x_n) + v(x_n)^2 h\right)$, which indeed lies in $q(x)^{2^{n+1}}\mathfrak{X}$ by the induction hypothesis.

Thus, as soon as $2^n \geq m$, we have $q(x_n) = 0$ by the third point, since $p \mid q^m$ and $p(x) = 0$; so the minimal polynomial of $x_n$ divides $q$ and is hence separable. Moreover, $x_n$ is a polynomial in $x$ by the first point, and $x_n - x$ is nilpotent by the second point (in fact, $(x_n - x)^m = 0$). Therefore, $x = x_n + (x - x_n)$ is then the Jordan–Chevalley decomposition of $x$. Q.E.D.

This proof, besides being completely elementary, has the advantage that it is algorithmic: by the Cayley–Hamilton theorem, $p$ can be taken to be the characteristic polynomial of $x$, and in many contexts $q$ can be determined from $p$. Then $v$ can be determined using the Euclidean algorithm. The iteration of applying the polynomial $vq$ to the matrix can then be performed until either $v(x_n)q(x_n) = 0$ (because then all later values will be equal) or $2^n$ exceeds the dimension of the vector space on which $x$ is defined (where $n$ is the number of iteration steps performed, as above).
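
Over the rationals, where every polynomial is a product of separable polynomials, this algorithm can be sketched directly in SymPy, taking $q$ to be the square-free part of the characteristic polynomial. The function name, the helper, and the example matrix below are illustrative assumptions, not part of the original text.

```python
import sympy as sp

def jordan_chevalley(M):
    """Additive Jordan-Chevalley decomposition of a square matrix over QQ,
    via the Newton-type iteration x_{k+1} = x_k - v(x_k) q(x_k)."""
    t = sp.symbols('t')
    n = M.shape[0]

    def evaluate(poly_expr, A):
        # Horner evaluation of a polynomial in t at the matrix A.
        res = sp.zeros(n, n)
        for c in sp.Poly(poly_expr, t).all_coeffs():
            res = res * A + c * sp.eye(n)
        return res

    p = M.charpoly(t).as_expr()                  # p(M) = 0 by Cayley-Hamilton
    q = sp.quo(p, sp.gcd(p, sp.diff(p, t)), t)   # square-free (separable) part of p
    # Bezout: u*q + v*q' = 1, possible since q is square-free over QQ.
    u, v, g = sp.gcdex(q, sp.diff(q, t))
    v = sp.expand(v / g)

    xs = M
    for _ in range(n):                           # after n steps, 2**n >= n >= m suffices
        step = evaluate(v, xs) * evaluate(q, xs)
        if step == sp.zeros(n, n):
            break
        xs = xs - step
    return xs, M - xs                            # (potentially diagonalisable, nilpotent)

# Example: a matrix that is neither diagonalisable nor nilpotent.
M = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
S, N = jordan_chevalley(M)
assert S * N == N * S and N**3 == sp.zeros(3, 3)
print(S)   # the diagonalisable part, here diag(2, 2, 3)
```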

Proof of existence via Galois theory

This proof, or variants of it, is commonly used to establish the Jordan–Chevalley decomposition. It has the advantage that it is very direct and describes quite precisely how close one can get to a Jordan–Chevalley decomposition: if $L$ is the splitting field of the minimal polynomial of $x$ and $G$ is the group of automorphisms of $L$ that fix the base field $K$, then the set $F$ of elements of $L$ that are fixed by all elements of $G$ is a field with inclusions $K \subseteq F \subseteq L$ (see Galois correspondence). Below it is argued that $x$ admits a Jordan–Chevalley decomposition over $F$, but not over any smaller field. This argument does not use Galois theory. However, Galois theory is required to deduce from this the condition for the existence of the Jordan–Chevalley decomposition given above.

Above it was observed that if $x$ has a Jordan normal form (i.e. if the minimal polynomial of $x$ splits), then it has a Jordan–Chevalley decomposition. In this case, one can also see directly that $x_n$ (and hence also $x_s$) is a polynomial in $x$. Indeed, it suffices to check this for the decomposition of the Jordan matrix $J = D + R$. This is a technical argument, but it does not require any tricks beyond the Chinese remainder theorem.

For the Jordan normal form, write $V = \bigoplus_{i=1}^r V_i$, where $\lambda_1, \dots, \lambda_r$ are the distinct eigenvalues of $x$ and $V_i$ is the generalized eigenspace for $\lambda_i$ (the sum of the Jordan blocks with eigenvalue $\lambda_i$). Now let $f(t) = \det(tI - x)$ be the characteristic polynomial of $x$. Because $f$ splits, it can be written as $f(t) = \prod_{i=1}^r (t - \lambda_i)^{d_i}$, where $d_i = \dim V_i$. The Chinese remainder theorem applied to the polynomial ring $k[t]$ gives a polynomial $p(t)$ satisfying the conditions

$p(t) \equiv 0 \pmod{t}, \qquad p(t) \equiv \lambda_i \pmod{(t - \lambda_i)^{d_i}}$ (for all $i$).

(There is a redundancy in the conditions if some $\lambda_i$ is zero, but that is not an issue; just remove that condition.) The condition $p(t) \equiv \lambda_i \pmod{(t - \lambda_i)^{d_i}}$, when spelled out, means that $p(t) - \lambda_i = g_i(t)(t - \lambda_i)^{d_i}$ for some polynomial $g_i(t)$. Since $(x - \lambda_i I)^{d_i}$ is the zero map on $V_i$, the operators $p(x)$ and $x_s$ agree on each $V_i$; i.e., $p(x) = x_s$. Then also $q(x) = x_n$ with $q(t) = t - p(t)$. The condition $p(t) \equiv 0 \pmod{t}$ ensures that $p(t)$ and $q(t)$ have no constant terms. This completes the proof of the theorem in the case where the minimal polynomial of $x$ splits.
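
The interpolating polynomial $p(t)$ can be produced explicitly by combining the congruences with Bézout coefficients. A small SymPy sketch follows; the helper names and the example Jordan matrix are illustrative assumptions, not from the text.

```python
import sympy as sp

t = sp.symbols('t')

def poly_crt(residues, moduli):
    """Chinese remainder theorem in k[t]: find p with p = r_i mod m_i
    for pairwise coprime moduli m_i, by combining Bezout identities."""
    p, m = residues[0], moduli[0]
    for r, m2 in zip(residues[1:], moduli[1:]):
        u, v, g = sp.gcdex(m, m2)      # u*m + v*m2 = g, g constant by coprimality
        u, v = u / g, v / g            # now u*m + v*m2 = 1
        p = sp.rem(sp.expand(p * v * m2 + r * u * m), sp.expand(m * m2), t)
        m = sp.expand(m * m2)
    return sp.expand(p)

def evaluate(poly_expr, A):
    """Horner evaluation of a polynomial in t at the square matrix A."""
    res = sp.zeros(*A.shape)
    for c in sp.Poly(poly_expr, t).all_coeffs():
        res = res * A + c * sp.eye(A.shape[0])
    return res

# A Jordan matrix with a block of size 2 for eigenvalue 2 and size 1 for eigenvalue 3.
J = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
p = poly_crt([0, 2, 3], [t, (t - 2)**2, t - 3])
print(p)               # a polynomial with no constant term
print(evaluate(p, J))  # equals diag(2, 2, 3), the semisimple part of J
```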

This fact can be used to deduce the Jordan–Chevalley decomposition in the general case. Let $L$ be the splitting field of the minimal polynomial of $x$, so that $x$ does admit a Jordan normal form over $L$. Then, by the argument just given, $x$ has a Jordan–Chevalley decomposition $x = c(x) + (x - c(x))$, where $c$ is a polynomial with coefficients from $L$, $c(x)$ is diagonalisable (over $L$) and $x - c(x)$ is nilpotent.

Let $\sigma$ be a field automorphism of $L$ which fixes $K$, acting entrywise on matrices over $L$. Then

$\sigma(c)(x) + \left(x - \sigma(c)(x)\right) = \sigma(c(x)) + \sigma(x - c(x)) = \sigma(x) = x = c(x) + (x - c(x)).$

Here $\sigma(c(x)) = \sigma(c)(x)$, where $\sigma(c)$ is the polynomial obtained by applying $\sigma$ to the coefficients of $c$, is a polynomial in $x$, and so is $\sigma(x - c(x)) = x - \sigma(c)(x)$. Thus, $\sigma(c(x))$ and $\sigma(x - c(x))$ commute. Also, $\sigma(c(x))$ is potentially diagonalisable and $\sigma(x - c(x))$ is nilpotent. Thus, by the uniqueness of the Jordan–Chevalley decomposition (over $L$), $\sigma(c(x)) = c(x)$ and $\sigma(x - c(x)) = x - c(x)$. Since $\sigma$ was an arbitrary automorphism fixing $K$, it follows that $x_s = c(x)$ and $x_n = x - c(x)$ are fixed by all of $G$ and are therefore endomorphisms (represented by matrices) over $F$. Finally, since $\left\{1, x, x^2, \ldots\right\}$ contains an $L$-basis of the space it spans, and this space contains $x_s$ and $x_n$, the same argument shows that $c$ can be taken to have coefficients in $F$. Q.E.D.

If the minimal polynomial of $x$ is a product of separable polynomials, then the field extension $L/K$ is Galois, meaning that $F = K$.

Relations to the theory of algebras

Separable algebras

The Jordan–Chevalley decomposition is very closely related to the Wedderburn principal theorem in the following formulation:[2]

Let $A$ be a finite-dimensional associative algebra over the field $K$ with Jacobson radical $J$. Then $A/J$ is separable if and only if $A$ has a separable subalgebra $B$ such that $A = B \oplus J$ as vector spaces.

Usually, the term "separable" in this theorem refers to the general concept of a separable algebra, and the theorem may then be established as a corollary of a more general, high-powered result.[3] However, if it is instead interpreted in the more basic sense that every element has a separable minimal polynomial, then this statement is essentially equivalent to the Jordan–Chevalley decomposition as described above. This gives a different way to view the decomposition, and some texts take this route to establish it.

To see how the Jordan–Chevalley decomposition follows from the Wedderburn principal theorem, let $V$ be a finite-dimensional vector space over the field $K$, let $x \colon V \to V$ be an endomorphism whose minimal polynomial is a product of separable polynomials, and let $A = K[x] \subset \operatorname{End}(V)$ be the subalgebra generated by $x$. Note that $A$ is a commutative Artinian ring, so its Jacobson radical $J$ is also the nilradical of $A$. Moreover, $A/J$ is separable: if $a \in A$ has minimal polynomial $p$, then there is a separable polynomial $q$ such that $q \mid p$ and $p \mid q^m$ for some $m \geq 1$. Therefore $q(a) \in J$, so the minimal polynomial of the image $a + J \in A/J$ divides $q$, meaning that it must be separable as well (since a divisor of a separable polynomial is separable). The theorem then gives a vector-space decomposition $A = B \oplus J$ with $B$ separable. In particular, the endomorphism $x$ can be written as $x = x_s + x_n$, where $x_s \in B$ and $x_n \in J$. Moreover, both elements are, like any element of $A$, polynomials in $x$.

Conversely, the Wedderburn principal theorem in the formulation above is a consequence of the Jordan–Chevalley decomposition. If $A$ has a separable subalgebra $B$ such that $A = B \oplus J$, then $A/J \cong B$ is separable. Conversely, if $A/J$ is separable, then any element of $A$ is a sum of a separable and a nilpotent element; as shown above in the proof of uniqueness and necessity, this implies that its minimal polynomial is a product of separable polynomials. Let $x \in A$ be arbitrary and define the operator $T_x \colon A \to A,\ a \mapsto ax$; note that it has the same minimal polynomial as $x$. So $T_x$ admits a Jordan–Chevalley decomposition in which both parts are polynomials in $T_x$, hence of the form $T_s$, $T_n$ for some $s, n \in A$ which have separable and nilpotent minimal polynomials, respectively. Moreover, this decomposition is unique. Thus, if $B$ is the subalgebra of all separable elements (that this is a subalgebra can be seen by recalling that $s$ is separable if and only if $T_s$ is potentially diagonalisable), then $A = B \oplus J$ (because $J$ is the ideal of nilpotent elements). The algebra $B \cong A/J$ is separable and semisimple by assumption.

Over perfect fields, this result simplifies. Indeed, $A/J$ is then always separable in the sense of minimal polynomials: if $a \in A$, then its minimal polynomial $p$ is a product of separable polynomials, so there is a separable polynomial $q$ such that $q \mid p$ and $p \mid q^m$ for some $m \geq 1$. Thus $q(a) \in J$. So in $A/J$, the minimal polynomial of $a + J$ divides $q$ and is hence separable. The crucial point in the theorem is then not that $A/J$ is separable (because that condition is vacuous), but that it is semisimple, meaning its radical is trivial.

The same statement is true for Lie algebras, but only in characteristic zero. This is the content of Levi’s theorem. (Note that the notions of semisimple in both results do indeed correspond, because in both cases this is equivalent to being the sum of simple subalgebras or having trivial radical, at least in the finite-dimensional case.)

Preservation under representations

The crucial point in the proof of the Wedderburn principal theorem above is that an element $x \in A$ corresponds to a linear operator $T_x \colon A \to A$ with the same properties. In the theory of Lie algebras, this role is played by the adjoint representation of a Lie algebra $\mathfrak{g}$. The adjoint operator has a Jordan–Chevalley decomposition $\operatorname{ad}(x) = \operatorname{ad}(x)_s + \operatorname{ad}(x)_n$. Just as in the associative case, this should correspond to a decomposition of $x$ itself, but polynomials are no longer available as a tool. One context in which this does make sense is the restricted case where $\mathfrak{g}$ is contained in the Lie algebra $\mathfrak{gl}(V)$ of endomorphisms of a finite-dimensional vector space $V$ over the perfect field $K$. Indeed, any semisimple Lie algebra can be realised in this way.

If $x = x_s + x_n$ is the Jordan decomposition of $x$, then $\operatorname{ad}(x) = \operatorname{ad}(x_s) + \operatorname{ad}(x_n)$ is the Jordan decomposition of the adjoint endomorphism $\operatorname{ad}(x)$ on the vector space $\mathfrak{g}$. Indeed, first, $\operatorname{ad}(x_s)$ and $\operatorname{ad}(x_n)$ commute, since $[\operatorname{ad}(x_s), \operatorname{ad}(x_n)] = \operatorname{ad}([x_s, x_n]) = 0$. Second, in general, for an endomorphism $y \in \mathfrak{g}$ we have:

  1. If $y^m = 0$, then $\operatorname{ad}(y)^{2m-1} = 0$, since $\operatorname{ad}(y)$ is the difference of the (commuting) left and right multiplications by $y$.
  2. If $y$ is semisimple, then $\operatorname{ad}(y)$ is semisimple, since semisimple is equivalent to potentially diagonalisable over a perfect field (if $y$ is diagonal with respect to a basis $\{b_1, \ldots, b_n\}$, then $\operatorname{ad}(y)$ is diagonal with respect to the basis of $\mathfrak{gl}(V)$ consisting of the maps $M_{ij}$ sending $b_i \mapsto b_j$ and $b_k \mapsto 0$ for $k \neq i$).[4]

Hence, by uniqueness, $\operatorname{ad}(x)_s = \operatorname{ad}(x_s)$ and $\operatorname{ad}(x)_n = \operatorname{ad}(x_n)$.
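
Both facts about $\operatorname{ad}$ used above can be checked in coordinates, where $\operatorname{ad}(y)$ becomes an $n^2 \times n^2$ matrix acting on $\mathfrak{gl}_n$. A small SymPy sketch with an illustrative helper and test matrices (not from the text):

```python
import sympy as sp

def ad_matrix(y):
    """Matrix of ad(y) = [y, .] acting on gl_n, written in the basis of
    elementary matrices E_ij (ordered row by row)."""
    n = y.shape[0]
    columns = []
    for i in range(n):
        for j in range(n):
            E = sp.zeros(n, n)
            E[i, j] = 1
            B = y * E - E * y                  # ad(y) applied to E_ij
            columns.append([B[r, c] for r in range(n) for c in range(n)])
    return sp.Matrix(columns).T

# 1. y nilpotent with y**2 = 0  =>  ad(y)**3 = 0.
y = sp.Matrix([[0, 1], [0, 0]])
assert y**2 == sp.zeros(2, 2)
assert ad_matrix(y)**3 == sp.zeros(4, 4)

# 2. y diagonal  =>  ad(y) is diagonal in the E_ij basis,
#    with eigenvalues the differences of the eigenvalues of y.
d = sp.diag(2, 5)
assert ad_matrix(d).is_diagonal()
print(ad_matrix(d))    # diagonal matrix with entries 0, -3, 3, 0
```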

The adjoint representation is a very natural and general representation of any Lie algebra. The argument above illustrates (and indeed proves) a general principle which generalises this: if $\pi \colon \mathfrak{g} \to \mathfrak{gl}(V)$ is any finite-dimensional representation of a semisimple finite-dimensional Lie algebra over a perfect field, then $\pi$ preserves the Jordan decomposition in the following sense: if $x = x_s + x_n$, then $\pi(x_s) = \pi(x)_s$ and $\pi(x_n) = \pi(x)_n$.[5]

Nilpotency criterion

The Jordan decomposition can be used to characterise nilpotency of an endomorphism. Let $k$ be an algebraically closed field of characteristic zero, let $E = \operatorname{End}_{\mathbf{Q}}(k)$ be the ring of $\mathbf{Q}$-linear endomorphisms of $k$, and let $V$ be a finite-dimensional vector space over $k$. Given an endomorphism $x \colon V \to V$, let $x = s + n$ be the Jordan decomposition. Then $s$ is diagonalisable; i.e., $V = \bigoplus_i V_i$, where each $V_i$ is the eigenspace for the eigenvalue $\lambda_i$ with multiplicity $m_i$. For any $\varphi \in E$, let $\varphi(s) \colon V \to V$ be the endomorphism such that $\varphi(s) \colon V_i \to V_i$ is multiplication by $\varphi(\lambda_i)$. Chevalley calls $\varphi(s)$ the replica of $s$ given by $\varphi$. (For example, if $k = \mathbf{C}$, then the complex conjugate of an endomorphism is an example of a replica.) Now, $x$ is nilpotent if and only if $\operatorname{tr}(x\,\varphi(s)) = 0$ for every $\varphi \in E$.

Proof: Suppose $\operatorname{tr}(x\varphi(s)) = 0$ for every $\varphi \in E$ (the converse direction is immediate, since $x$ nilpotent forces $s = 0$). First, since $n\varphi(s)$ is nilpotent and hence traceless,

$0 = \operatorname{tr}(x\varphi(s)) = \sum_i \operatorname{tr}\left(s\varphi(s)|_{V_i}\right) = \sum_i m_i \lambda_i \varphi(\lambda_i).$

If $\varphi$ is the complex conjugation (in the case $k = \mathbf{C}$), this implies $\lambda_i = 0$ for every $i$. Otherwise, take $\varphi$ to be a $\mathbf{Q}$-linear functional $\varphi \colon k \to \mathbf{Q}$ followed by the inclusion $\mathbf{Q} \hookrightarrow k$. Applying $\varphi$ to the above equation, one gets

$\sum_i m_i \varphi(\lambda_i)^2 = 0,$

and, since the $\varphi(\lambda_i)$ are all real numbers, $\varphi(\lambda_i) = 0$ for every $i$. Varying the linear functionals then implies $\lambda_i = 0$ for every $i$.

$\square$

A typical application of the above criterion is the proof of Cartan's criterion for solvability of a Lie algebra. It says: if $\mathfrak{g} \subset \mathfrak{gl}(V)$ is a Lie subalgebra over a field $k$ of characteristic zero such that $\operatorname{tr}(xy) = 0$ for each $x \in \mathfrak{g}$, $y \in D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$, then $\mathfrak{g}$ is solvable.

Proof: Without loss of generality, assume $k$ is algebraically closed. By Lie's theorem and Engel's theorem, it suffices to show that each $x \in D\mathfrak{g}$ is a nilpotent endomorphism of $V$. Write $x = \sum_i [x_i, y_i]$ and let $x = s + n$ be its Jordan decomposition. By the nilpotency criterion, we need to show that

$\operatorname{tr}(x\varphi(s)) = \sum_i \operatorname{tr}([x_i, y_i]\varphi(s)) = \sum_i \operatorname{tr}(x_i [y_i, \varphi(s)])$

is zero for every replica $\varphi(s)$. Let $\mathfrak{g}' = \mathfrak{gl}(V)$. Note that we have $\operatorname{ad}_{\mathfrak{g}'}(x) \colon \mathfrak{g} \to D\mathfrak{g}$ and, since $\operatorname{ad}_{\mathfrak{g}'}(s)$ is the semisimple part of the Jordan decomposition of $\operatorname{ad}_{\mathfrak{g}'}(x)$, it follows that $\operatorname{ad}_{\mathfrak{g}'}(s)$ is a polynomial without constant term in $\operatorname{ad}_{\mathfrak{g}'}(x)$; hence $\operatorname{ad}_{\mathfrak{g}'}(s) \colon \mathfrak{g} \to D\mathfrak{g}$, and the same is true with $\varphi(s)$ in place of $s$. That is, $[\varphi(s), \mathfrak{g}] \subset D\mathfrak{g}$, which implies the claim given the assumption.

$\square$

Real semisimple Lie algebras

In the formulation of Chevalley and Mostow, the additive decomposition states that an element $X$ in a real semisimple Lie algebra $\mathfrak{g}$ with Iwasawa decomposition $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{a} \oplus \mathfrak{n}$ can be written as the sum of three commuting elements of the Lie algebra, $X = S + D + N$, with $S$, $D$ and $N$ conjugate to elements in $\mathfrak{k}$, $\mathfrak{a}$ and $\mathfrak{n}$ respectively. In general the terms in an Iwasawa decomposition do not commute.

Multiplicative decomposition

If $x$ is an invertible linear operator, it may be more convenient to use a multiplicative Jordan–Chevalley decomposition. This expresses $x$ as a product

$x = x_s x_u,$

where $x_s$ is potentially diagonalisable and $x_u - 1$ is nilpotent (one also says that $x_u$ is unipotent).

The multiplicative version of the decomposition follows from the additive one: since $x_s$ is invertible (because $x_s = x - x_n$ is the sum of an invertible operator and a commuting nilpotent operator, and such a sum is invertible), one can write

$x = x_s + x_n = x_s\left(1 + x_s^{-1}x_n\right),$

and $1 + x_s^{-1}x_n$ is unipotent. (Conversely, by the same type of argument, one can deduce the additive version from the multiplicative one.)
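
For an invertible matrix over the rationals, the unipotent part can be read off from the additive parts exactly as in this calculation. A short SymPy sketch with an illustrative example matrix:

```python
import sympy as sp

x = sp.Matrix([[2, 1],
               [0, 2]])               # invertible, one Jordan block for eigenvalue 2

P, J = x.jordan_form()                # x = P * J * P**-1
xs = P * sp.diag(*[J[i, i] for i in range(2)]) * P.inv()   # potentially diagonalisable part
xn = x - xs                                                # nilpotent part
xu = sp.eye(2) + xs.inv() * xn                             # unipotent part

assert x == xs * xu == xu * xs
assert (xu - sp.eye(2))**2 == sp.zeros(2, 2)               # xu - 1 is nilpotent
```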

The multiplicative version is closely related to decompositions encountered in linear algebraic groups. For this, it is again useful to assume that the underlying field $K$ is perfect, because then the Jordan–Chevalley decomposition exists for all matrices.

Linear algebraic groups

Let $G$ be a linear algebraic group over a perfect field. Then, essentially by definition, there is a closed embedding $G \hookrightarrow \mathrm{GL}_n$. Now, to each element $g \in G$, by the multiplicative Jordan decomposition, there is a pair consisting of a semisimple element $g_s$ and a unipotent element $g_u$, a priori in $\mathrm{GL}_n$, such that $g = g_s g_u = g_u g_s$. But, as it turns out, the elements $g_s$ and $g_u$ can be shown to lie in $G$ (i.e., they satisfy the defining equations of $G$) and to be independent of the chosen embedding into $\mathrm{GL}_n$; i.e., the decomposition is intrinsic.

When $G$ is abelian, $G$ is the direct product of the closed subgroup of semisimple elements in $G$ and the closed subgroup of unipotent elements.

Real semisimple Lie groups

The multiplicative decomposition states that if $g$ is an element of the corresponding connected semisimple Lie group $G$ with Iwasawa decomposition $G = KAN$, then $g$ can be written as the product of three commuting elements $g = sdu$ with $s$, $d$ and $u$ conjugate to elements of $K$, $A$ and $N$ respectively. In general the terms in the Iwasawa decomposition $g = kan$ do not commute.

Notes and References

  1. Conrad, Keith (January 9, 2024). "Semisimplicity". Expository papers.
  2. Ring Theory. Academic Press, 18 April 1972. ISBN 9780080873572.
  3. Cohn, Paul M. (2002). Further Algebra and Applications. Springer London. ISBN 978-1-85233-667-7.
  4. This is not easy to see in general, but it is shown in the proof of the cited result.
  5. Weber, Brian (2 October 2012). "Lecture 8 - Preservation of the Jordan Decomposition and Levi's Theorem". Course Notes. Retrieved 9 January 2024.