Invariant subspace explained

In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually.

For a single operator

Consider a vector space V and a linear map T : V → V. A subspace W ⊆ V is called an invariant subspace for T, or equivalently, T-invariant, if T transforms any vector v ∈ W back into W. In formulas, this can be written

\mathbf{v} \in W \implies T(\mathbf{v}) \in W, \quad\text{or}\quad TW \subseteq W.

In this case, T restricts to an endomorphism of W:

T|_W : W \to W, \qquad T|_W(\mathbf{v}) = T(\mathbf{v}).

The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to B, the operator T has the block upper-triangular form

T = \begin{bmatrix} T|_W & T_{12} \\ 0 & T_{22} \end{bmatrix}

for some blocks T_{12} and T_{22}.
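As a concrete illustration (a minimal NumPy sketch; the matrices below are arbitrary choices for the example, not from the article), one can verify T-invariance of a subspace numerically and recover the block upper-triangular form by changing to a basis that extends a basis of W:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an operator that is block upper-triangular in some basis B, so that
# W = span of the first two columns of B is T-invariant by construction.
B = rng.standard_normal((4, 4))            # columns of B form a basis of V = R^4
blocks = rng.standard_normal((4, 4))
blocks[2:, :2] = 0.0                       # zero lower-left block
T = B @ blocks @ np.linalg.inv(B)

W = B[:, :2]                               # basis C of the candidate subspace W

# W is T-invariant iff T @ W stays inside the column space of W.
P_W = W @ np.linalg.pinv(W)                # orthogonal projection onto col(W)
print(np.allclose((np.eye(4) - P_W) @ (T @ W), 0))   # True

# Conversely, expressing T in the basis B recovers the block form:
# the lower-left 2x2 block is zero.
print(np.round(np.linalg.inv(B) @ T @ B, 8)[2:, :2])
```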

Examples

Any linear map T : V → V admits two obvious invariant subspaces: the whole space V, because T maps every vector in V into V, and the zero subspace {0}, because T(0) = 0. These are the improper and trivial invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space by an angle that is not a multiple of π. However, the axis of a rotation in three dimensions is always an invariant subspace, as illustrated below.
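For example (a small numerical check with NumPy; the angle and matrices are arbitrary illustrative choices), a plane rotation has no real eigenvectors, while a rotation of R^3 about the z-axis leaves that axis invariant:

```python
import numpy as np

theta = 0.7  # any rotation angle that is not a multiple of pi

# Rotation of the real plane: its eigenvalues e^{+-i*theta} are non-real,
# so there is no 1-dimensional (real) invariant subspace.
R2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
print(np.linalg.eigvals(R2))           # complex conjugate pair

# Rotation about the z-axis in R^3: the axis span{e3} is an invariant subspace.
R3 = np.block([[R2,               np.zeros((2, 1))],
               [np.zeros((1, 2)), np.ones((1, 1))]])
e3 = np.array([0.0, 0.0, 1.0])
print(np.allclose(R3 @ e3, e3))        # True
```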

1-dimensional subspaces

If U is a one-dimensional invariant subspace for the operator T, spanned by a nonzero vector, then the vectors v and Tv must be linearly dependent for every v ∈ U. Thus

\forall \mathbf{v} \in U \;\; \exists \alpha : \; T\mathbf{v} = \alpha\mathbf{v}.

In fact, the scalar α does not depend on v.

The equation above formulates an eigenvalue problem. Any eigenvector of T spans a 1-dimensional invariant subspace, and vice versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1.
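A quick numerical check (a NumPy sketch with an arbitrarily chosen matrix) makes this concrete: any eigenvector returned by an eigensolver spans a line that T maps into itself.

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(T)
v = eigvecs[:, 0]                            # eigenvector for eigenvalue eigvals[0]

# T v is a scalar multiple of v, so span{v} is a 1-dimensional invariant subspace.
print(np.allclose(T @ v, eigvals[0] * v))    # True
```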

As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace.

Diagonalization via projections

Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically.

Write V as the direct sum W ⊕ W′; a suitable complement W′ can always be chosen by extending a basis of W. The associated projection operator P onto W has matrix representation

P = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{matrix} W \\ \oplus \\ W' \end{matrix} \rightarrow \begin{matrix} W \\ \oplus \\ W' \end{matrix}.

A straightforward calculation shows that W is T-invariant if and only if PTP = TP.

If 1 is the identity operator, then 1 − P is the projection onto W′. The equation PT = TP holds if and only if both im(P) and im(1 − P) are invariant under T. In that case, T has matrix representation

T = \begin{bmatrix} T_{11} & 0 \\ 0 & T_{22} \end{bmatrix} : \begin{matrix} \operatorname{im}(P) \\ \oplus \\ \operatorname{im}(1-P) \end{matrix} \rightarrow \begin{matrix} \operatorname{im}(P) \\ \oplus \\ \operatorname{im}(1-P) \end{matrix}.

Colloquially, a projection that commutes with T "diagonalizes" T.
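The following NumPy sketch (with arbitrarily chosen matrices) illustrates both criteria: PTP = TP detects invariance of W = im(P) alone, while PT = TP forces both im(P) and im(1 − P) to be invariant, giving the block-diagonal form.

```python
import numpy as np

# W = span{e1, e2}, with P the projection onto W along span{e3}.
P = np.diag([1.0, 1.0, 0.0])

# T1 leaves W invariant but not its complement (note the nonzero upper-right block).
T1 = np.array([[2.0, 1.0, 5.0],
               [0.0, 3.0, 7.0],
               [0.0, 0.0, 4.0]])
print(np.allclose(P @ T1 @ P, T1 @ P))   # True : W is T1-invariant
print(np.allclose(P @ T1, T1 @ P))       # False: im(1 - P) is not T1-invariant

# T2 commutes with P, so im(P) and im(1 - P) are both invariant and T2 is
# block-diagonal with respect to W ⊕ W'.
T2 = np.array([[2.0, 1.0, 0.0],
               [0.0, 3.0, 0.0],
               [0.0, 0.0, 4.0]])
print(np.allclose(P @ T2, T2 @ P))       # True
```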

Lattice of subspaces

As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T.

The set of T-invariant subspaces of V is sometimes called the invariant-subspace lattice of T and written Lat(T). As the name suggests, it is a (modular) lattice, with meets and joins given by set intersection and linear span, respectively. A minimal element of Lat(T) is said to be a minimal invariant subspace.
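As a sketch of the lattice operations (using NumPy and SciPy helpers; the diagonal operator and subspaces are arbitrary illustrative choices), the join of two invariant subspaces is their linear span and the meet is their intersection:

```python
import numpy as np
from scipy.linalg import null_space, orth

T = np.diag([1.0, 2.0, 3.0])   # distinct eigenvalues: Lat(T) consists of spans of subsets of {e1, e2, e3}
U = np.eye(3)[:, [0, 1]]       # invariant subspace span{e1, e2}
W = np.eye(3)[:, [1, 2]]       # invariant subspace span{e2, e3}

# Join: linear span of U and W (here all of R^3).
join = orth(np.column_stack([U, W]))
print(join.shape[1])           # 3

# Meet: intersection of U and W (here the line span{e2}); vectors satisfying
# U x = -W y come from the null space of the concatenated matrix [U  W].
ns = null_space(np.column_stack([U, W]))
meet = U @ ns[:U.shape[1], :]
print(orth(meet).shape[1])     # 1
```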

In the study of infinite-dimensional operators, Lat(T) is sometimes restricted to only the closed invariant subspaces.

For multiple operators

Given a collection Σ of operators, a subspace is called Σ-invariant if it is invariant under each T ∈ Σ.

As in the single-operator case, the invariant-subspace lattice of Σ, written Lat(Σ), is the set of all Σ-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection

\mathrm{Lat}(\Sigma) = \bigcap_{T \in \Sigma} \mathrm{Lat}(T).

Examples

Let Σ = L(V) be the set of all linear operators on V. Then Lat(Σ) = { {0}, V }.

Given a representation of a group G on a vector space V, we have a linear transformation T(g) : V → V for every element g of G. If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation and the group G acts on W in a natural way. The same construction applies to representations of an algebra.
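A minimal illustration (a NumPy sketch; the standard permutation action of S_3 is chosen here for concreteness) is the permutation representation of S_3 on R^3: the line spanned by (1, 1, 1) is invariant under every T(g) and therefore carries a subrepresentation.

```python
import numpy as np
from itertools import permutations

ones = np.ones(3)
for perm in permutations(range(3)):
    Tg = np.eye(3)[list(perm)]           # permutation matrix representing g
    print(np.allclose(Tg @ ones, ones))  # True for every g in S_3
```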

As another example, let T ∈ L(V) and let Σ be the algebra generated by {1, T}, where 1 is the identity operator. Then Lat(T) = Lat(Σ).

Fundamental theorem of noncommutative algebra

Just as the fundamental theorem of algebra ensures that every linear transformation acting on a nonzero finite-dimensional complex vector space has a non-trivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains non-trivial elements for certain Σ. One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting (hence proper) algebra, and that L(V) is not commutative when dim(V) > 1.
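The sketch below (NumPy, with an arbitrary random matrix; it covers only the easy case where one operator has distinct eigenvalues) illustrates simultaneous upper-triangularization: orthonormalizing a basis of eigenvectors of A yields a unitary change of basis that triangularizes every operator commuting with A.

```python
import numpy as np

rng = np.random.default_rng(1)

# A commuting pair: B is a polynomial in A, so A and B commute.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = A @ A + 3 * A + 2 * np.eye(4)
print(np.allclose(A @ B, B @ A))                 # True

# Orthonormalizing the eigenvectors of A (a flag of invariant subspaces) gives a
# unitary Q that upper-triangularizes both operators at once.
_, V = np.linalg.eig(A)
Q, _ = np.linalg.qr(V)
for M in (A, B):
    upper = Q.conj().T @ M @ Q
    print(np.allclose(np.tril(upper, k=-1), 0))  # True: strictly lower part vanishes
```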

Left ideals

If A is an algebra, one can define a left regular representation Φ on A: the map given by Φ(a)b = ab is a homomorphism from A to L(A), the algebra of linear transformations on A.

The invariant subspaces of Φ are precisely the left ideals of A. A left ideal M of A gives a subrepresentation of A on M.
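For a concrete finite-dimensional example (a NumPy sketch, not from the article): in the algebra A = M_2(R), the matrices whose second column is zero form a left ideal M, and the check below confirms that M is invariant under left multiplication, i.e. under the left regular representation.

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(5):
    a = rng.standard_normal((2, 2))                              # arbitrary element of A
    b = np.column_stack([rng.standard_normal(2), np.zeros(2)])   # element of the left ideal M
    # Phi(a)b = ab again has zero second column, so M is Phi-invariant.
    print(np.allclose((a @ b)[:, 1], 0))                         # True
```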

If M is a left ideal of A, then the left regular representation Φ on M descends to a representation Φ' on the quotient vector space A/M. If [b] denotes an equivalence class in A/M, then Φ'(a)[b] = [ab]. The kernel of the representation Φ' is the set {a ∈ A | ab ∈ M for all b}.

The representation Φ' is irreducible if and only if M is a maximal left ideal, since a subspace V ⊂ A/M is invariant under Φ' if and only if its preimage under the quotient map, V + M, is a left ideal in A.

Invariant subspace problem

See main article: Invariant subspace problem.

The invariant subspace problem concerns the case where V is a separable Hilbert space over the complex numbers, of dimension > 1, and T is a bounded operator. The problem is to decide whether every such T has a non-trivial, closed, invariant subspace. It is unsolved.

In the more general case where V is assumed to be a Banach space, Per Enflo (1976) found an example of an operator without a non-trivial closed invariant subspace. A concrete example of such an operator was produced in 1985 by Charles Read.

Almost-invariant halfspaces

Related to invariant subspaces are so-called almost-invariant halfspaces (AIHSs). A closed subspace Y of a Banach space X is said to be almost-invariant under an operator T ∈ B(X) (the bounded linear operators on X) if TY ⊆ Y + E for some finite-dimensional subspace E; equivalently, Y is almost-invariant under T if there is a finite-rank operator F ∈ B(X) such that (T + F)Y ⊆ Y, i.e. if Y is invariant (in the usual sense) under T + F. In this case, the minimum possible dimension of E (or rank of F) is called the defect.

Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that Y is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension.

The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if X is a complex infinite-dimensional Banach space and T ∈ B(X), then T admits an AIHS of defect at most 1. It is not currently known whether the same holds if X is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space.
