Algebra representation explained

Algebra representation should not be confused with algebraic representation.

In abstract algebra, a representation of an associative algebra is a module for that algebra. Here an associative algebra is a (not necessarily unital) ring. If the algebra is not unital, it may be made so in a standard way (see the adjoint functors page); there is no essential difference between modules for the resulting unital ring, in which the identity acts by the identity mapping, and representations of the algebra.
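The unitalization mentioned above can be sketched concretely. The following is a minimal illustration, not from the text, assuming the standard construction A⁺ = K × A with product (s, a)(t, b) = (st, sb + ta + ab); the choice of A (strictly upper-triangular 2×2 matrices, where every product in A is zero) is an illustrative assumption.

```python
# Sketch of the standard unitalization: adjoin a unit to a possibly
# non-unital algebra A by forming A+ = K x A with product
# (s, a)(t, b) = (st, s*b + t*a + a*b); the pair (1, 0) is then a unit.
# Here A is modelled as the non-unital algebra of strictly
# upper-triangular 2x2 matrices, stored as the single upper-right entry;
# any product of two such matrices is the zero matrix.

def mul_A(a, b):
    # product in A: strictly upper-triangular 2x2 matrices multiply to 0
    return 0.0

def mul_unital(x, y):
    # elements of A+ are pairs (scalar part, element of A)
    (s, a), (t, b) = x, y
    return (s * t, s * b + t * a + mul_A(a, b))

unit = (1.0, 0.0)
x = (2.0, 3.0)       # represents 2*1 + 3*E12, with E12 the upper matrix unit
assert mul_unital(unit, x) == x   # (1, 0) acts as the identity
assert mul_unital(x, unit) == x
```

A module over A then extends to a module over A⁺ by letting the adjoined unit act as the identity map, which is the "no essential difference" the paragraph describes.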

Examples

Linear complex structure

See main article: Linear complex structure. One of the simplest non-trivial examples is a linear complex structure, which is a representation of the complex numbers C, thought of as an associative algebra over the real numbers R. This algebra is realized concretely as

C = R[x]/(x^2 + 1),

which corresponds to i^2 = -1. Then a representation of C is a real vector space V, together with an action of C on V (a map

C → End(V)

). Concretely, this is just an action of i, as i generates the algebra, and the operator representing i (the image of i in End(V)) is denoted J to avoid confusion with the identity matrix I.
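As a small sketch (using plain Python lists rather than any particular linear-algebra library), the standard rotation-by-90° matrix gives R^2 a linear complex structure: it squares to -I, so sending i to it defines a representation of C.

```python
# Minimal sketch: the matrix J = [[0, -1], [1, 0]] satisfies J^2 = -I,
# so the map x + i*y |-> x*I + y*J represents C on R^2.

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]     # identity matrix
J = [[0, -1], [1, 0]]    # linear complex structure on R^2

assert matmul(J, J) == [[-1, 0], [0, -1]]   # J^2 = -I: x^2 + 1 acts as 0
```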

Polynomial algebras

Another important basic class of examples is representations of polynomial algebras, the free commutative algebras – these form a central object of study in commutative algebra and its geometric counterpart, algebraic geometry. A representation of a polynomial algebra in k variables over the field K is concretely a K-vector space with k commuting operators, and is often denoted

K[T_1, ..., T_k],

meaning the representation of the abstract algebra

K[x_1, ..., x_k]

where

x_i ↦ T_i.

A basic result about such representations is that, over an algebraically closed field, the representing matrices are simultaneously triangularisable.
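A hedged sketch in Python (the names T1, T2 are illustrative, not from the text): a representation of K[x_1, x_2] amounts to a choice of two commuting operators, and a polynomial then acts by substituting the matrices for the variables.

```python
# Sketch: a representation of K[x1, x2] is a vector space with two
# commuting operators; a polynomial p(x1, x2) acts as p(T1, T2).

def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Two upper-triangular (hence already simultaneously triangularised)
# matrices that commute:
T1 = [[2, 1], [0, 2]]
T2 = [[3, 5], [0, 3]]

assert matmul(T1, T2) == matmul(T2, T1)   # the operators commute

# The monomial x1*x2 then acts as the single operator T1 T2:
x1_times_x2 = matmul(T1, T2)
```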

Even the case of representations of the polynomial algebra in a single variable is of interest – this algebra is denoted by

K[T]

and is used in understanding the structure of a single linear operator on a finite-dimensional vector space. Specifically, applying the structure theorem for finitely generated modules over a principal ideal domain to this algebra yields as corollaries the various canonical forms of matrices, such as Jordan canonical form.
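The module-theoretic view of a single operator can be sketched as follows (the matrix A is an illustrative choice, not from the text): a K[x]-module structure on V is given by letting a polynomial p act as p(A), and the Jordan block below corresponds under the structure theorem to the cyclic module K[x]/((x − 3)^2).

```python
# Sketch: a single operator A makes V into a K[x]-module via p . v = p(A)v.
# For the 2x2 Jordan block with eigenvalue 3, the minimal polynomial is
# (x - 3)^2, so as a K[x]-module V is isomorphic to K[x]/((x - 3)^2).

def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[3, 1], [0, 3]]   # Jordan block with eigenvalue 3
I = [[1, 0], [0, 1]]

# Evaluate (x - 3)^2 at A: the result is the zero operator.
A_minus_3I = [[A[i][j] - 3 * I[i][j] for j in range(2)] for i in range(2)]
assert matmul(A_minus_3I, A_minus_3I) == [[0, 0], [0, 0]]
```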

In some approaches to noncommutative geometry, the free noncommutative algebra (polynomials in non-commuting variables) plays a similar role, but the analysis is much more difficult.

Weights

See main article: Weight (representation theory). Eigenvalues and eigenvectors can be generalized to algebra representations.

The generalization of an eigenvalue of an algebra representation is, rather than a single scalar, a one-dimensional representation

λ : A → R

(i.e., an algebra homomorphism from the algebra to its underlying ring: a linear functional that is also multiplicative).[1] This is known as a weight, and the analog of an eigenvector and eigenspace are called weight vector and weight space.

The case of the eigenvalue of a single operator corresponds to the algebra

R[T],

and a map of algebras

R[T] → R

is determined by which scalar it maps the generator T to. A weight vector for an algebra representation is a vector such that any element of the algebra maps this vector to a multiple of itself – a one-dimensional submodule (subrepresentation). As the pairing

A × M → M

is bilinear, "which multiple" is an A-linear functional of A (an algebra map A → R), namely the weight. In symbols, a weight vector is a vector m ∈ M such that

am = λ(a)m

for all elements a ∈ A, for some linear functional λ – note that on the left, multiplication is the algebra action, while on the right, multiplication is scalar multiplication.
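The defining condition am = λ(a)m can be checked numerically on a toy example (the matrices T1, T2 and the weight below are illustrative assumptions, not from the text): for the algebra generated by two commuting diagonal matrices, a standard basis vector is a weight vector, and the weight records the eigenvalue of each generator on it.

```python
# Sketch: e1 is a weight vector for the algebra generated by T1 and T2;
# the weight sends each generator to the scalar by which it acts on e1.

def matvec(a, v):
    """Apply a square matrix (nested lists) to a vector (list)."""
    return [sum(a[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

T1 = [[2, 0], [0, 5]]
T2 = [[7, 0], [0, 1]]
e1 = [1, 0]

weight = (2, 7)   # lambda(T1) = 2, lambda(T2) = 7

assert matvec(T1, e1) == [weight[0] * x for x in e1]   # T1 e1 = 2 e1
assert matvec(T2, e1) == [weight[1] * x for x in e1]   # T2 e1 = 7 e1
```

Since the weight condition only needs to hold on generators, checking it on T1 and T2 suffices for the whole algebra they generate.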

Because a weight is a map to a commutative ring, the map factors through the abelianization of the algebra A – equivalently, it vanishes on the derived algebra. In terms of matrices, if v is a common eigenvector of the operators T and U, then

TUv = UTv

(because in both cases it is just multiplication by scalars), so common eigenvectors of an algebra must be in the set on which the algebra acts commutatively (which is annihilated by the derived algebra). Thus of central interest are the free commutative algebras, namely the polynomial algebras. In this particularly simple and important case of the polynomial algebra

F[T_1, ..., T_k]

in a set of commuting matrices, a weight vector of this algebra is a simultaneous eigenvector of the matrices, while a weight of this algebra is simply a k-tuple of scalars

λ = (λ_1, ..., λ_k)

corresponding to the eigenvalue of each matrix, and hence geometrically to a point in k-space. These weights – in particular their geometry – are of central importance in understanding the representation theory of Lie algebras, specifically the finite-dimensional representations of semisimple Lie algebras.

As an application of this geometry, given an algebra that is a quotient of a polynomial algebra on k generators, it corresponds geometrically to an algebraic variety in k-dimensional space, and the weight must fall on the variety – i.e., it satisfies the defining equations for the variety. This generalizes the fact that eigenvalues satisfy the characteristic polynomial of a matrix in one variable.
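This can be sketched for the simplest quotient (the matrix T below is an illustrative choice): for K[x]/(x^2 − 1), any representing operator satisfies T^2 = I, so each weight (here just an eigenvalue) satisfies the defining equation and lies on the variety {+1, −1}.

```python
# Sketch: a representation of K[x]/(x^2 - 1) is an operator T with T^2 = I.
# Its eigenvalues (the weights) must satisfy t^2 - 1 = 0, i.e. lie on the
# variety cut out by the relation.

def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[0, 1], [1, 0]]                      # swap matrix: T^2 = I
assert matmul(T, T) == [[1, 0], [0, 1]]

eigenvalues = [1, -1]                     # eigenvalues of the swap matrix
assert all(t**2 - 1 == 0 for t in eigenvalues)   # weights lie on the variety
```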

Notes and References

  1. Note that for a field, the endomorphism algebra of a one-dimensional vector space (a line) is canonically equal to the underlying field: End(L) = K, since all endomorphisms are scalar multiplication; there is thus no loss in restricting to concrete maps to the base field, rather than to abstract representations. For rings there are also maps to quotient rings, which need not factor through maps to the ring itself, but again abstract modules are not needed.