In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T; that is, T(W) ⊆ W. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually.
Consider a vector space V and a linear map

T : V \to V.

A subspace W ⊆ V is called an invariant subspace for T, or equivalently T-invariant, if T maps every vector of W back into W; in formulas,

v \in W \implies T(v) \in W.

In this case, T restricts to an endomorphism of W:

T|_W : W \to W, \quad T|_W(v) = T(v).
The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to B, the operator T has the block upper triangular form

T = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix}

for some T_{12} and T_{22}, where T_{11} is the matrix of the restriction T|_W with respect to C.
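To illustrate the block form numerically, the following sketch builds a hypothetical operator T by conjugating an upper triangular matrix A by a random invertible S (none of these objects come from the text, they are a worked instance) and checks that, in a basis adapted to the invariant subspace W, the lower-left block vanishes:

```python
import numpy as np

# Illustrative sketch: construct an operator T on R^3 with a known
# 2-dimensional invariant subspace W, then verify the block form.
rng = np.random.default_rng(0)

# A is upper triangular, so span{e1, e2} is A-invariant; conjugating
# by an invertible S hides the structure in the standard basis.
A = np.array([[2.0, 1.0, 5.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 4.0]])
S = rng.standard_normal((3, 3))
T = S @ A @ np.linalg.inv(S)

# W is spanned by S e1 and S e2; the columns of B are that basis C of W
# completed (by S e3) to a basis of R^3.
B = S

# In the basis B, T regains the block upper triangular form:
# the lower-left 1x2 block is zero (up to rounding).
print(np.round(np.linalg.inv(B) @ T @ B, 10))
```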
Any linear map T : V → V admits two immediate invariant subspaces: the vector space V itself, because T maps every vector in V into V, and the zero subspace \{0\}, because T(0) = 0. These are known as the improper and trivial invariant subspaces, respectively; an invariant subspace is called proper when it is not all of V, and non-trivial when it is not \{0\}.
If U is a 1-dimensional invariant subspace for the operator T, spanned by a nonzero vector v, then the vectors v and T(v) must be linearly dependent. Thus

T(v) = \alpha v

for some scalar α. In fact, the scalar α does not depend on the choice of v in U.

The equation above formulates an eigenvalue problem. Any eigenvector for T spans a 1-dimensional invariant subspace, and vice-versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1.
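As a brief illustrative sketch (the matrix T below is a hypothetical example, not from the text), numpy confirms that an eigenvector spans a 1-dimensional invariant subspace:

```python
import numpy as np

# Each eigenvector v of T satisfies T v = lambda * v, so the image of v
# stays on the line spanned by v: span{v} is T-invariant.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(T)
v = eigvecs[:, 0]                          # eigenvector for eigvals[0]
print(np.allclose(T @ v, eigvals[0] * v))  # True: T v is a multiple of v
```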
As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace.
Determining whether a given subspace W is invariant under T is ostensibly a problem of a geometric nature. Matrix representation allows one to phrase this problem algebraically.
Write V as the direct sum W ⊕ W′; a suitable W′ can always be chosen by extending a basis of W. The associated projection operator P onto W has matrix representation

P = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} : \begin{matrix} W \\ \oplus \\ W' \end{matrix} \to \begin{matrix} W \\ \oplus \\ W' \end{matrix}.

A straightforward calculation shows that W is T-invariant if and only if PTP = TP.
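A minimal numerical sketch of this criterion, using the hypothetical choices W = span{e1} inside R^2 and the coordinate projection P onto W:

```python
import numpy as np

# P projects onto W = span{e1} along span{e2}.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

T_good = np.array([[1.0, 2.0],   # maps e1 to e1: W is invariant
                   [0.0, 3.0]])
T_bad = np.array([[1.0, 0.0],    # maps e1 off the line W
                  [2.0, 3.0]])

for T in (T_good, T_bad):
    # PTP = TP holds exactly when W is T-invariant.
    print(np.allclose(P @ T @ P, T @ P))   # True, then False
```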
If 1 is the identity operator, then 1 − P is the projection onto W′. The equation PT = TP holds if and only if both im(P) and im(1 − P) are invariant under T. In that case, T has the block diagonal matrix representation

T = \begin{bmatrix} T_{11} & 0 \\ 0 & T_{22} \end{bmatrix} : \begin{matrix} W \\ \oplus \\ W' \end{matrix} \to \begin{matrix} W \\ \oplus \\ W' \end{matrix}.
Colloquially, a projection that commutes with T "diagonalizes" T.
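The following sketch makes this concrete; P and T are hypothetical, with T deliberately constructed to commute with P, so the adapted basis exhibits the block-diagonal (here fully diagonal) form:

```python
import numpy as np

# P projects onto span{(1,1)}; T = 2P + 5(1-P) commutes with P by design.
P = 0.5 * np.array([[1.0, 1.0],
                    [1.0, 1.0]])
I = np.eye(2)
T = 2.0 * P + 5.0 * (I - P)

print(np.allclose(P @ T, T @ P))   # True: P commutes with T

# Basis adapted to im(P) and im(1-P): columns (1,1) and (1,-1).
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])
print(np.round(np.linalg.inv(B) @ T @ B, 10))  # diagonal: diag(2, 5)
```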
As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T.
The set of T-invariant subspaces of V is sometimes called the invariant-subspace lattice of T and written Lat(T). As the name suggests, it is a (modular) lattice, with meets and joins given by (respectively) set intersection and linear span. A minimal element in Lat(T) is said to be a minimal invariant subspace.

In the study of infinite-dimensional operators, Lat(T) is sometimes restricted to only the closed invariant subspaces.
Given a collection Σ of operators, a subspace is called Σ-invariant if it is invariant under each T ∈ Σ.

As in the single-operator case, the invariant-subspace lattice of Σ, written Lat(Σ), is the set of all Σ-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection

\mathrm{Lat}(\Sigma) = \bigcap_{T \in \Sigma} \mathrm{Lat}(T).
For example, let L(V) be the set of all linear operators on V. Then Lat(L(V)) = \{\{0\}, V\}: any nonzero proper subspace is moved outside itself by some operator, so only the trivial subspaces survive.
Given a representation of a group G on a vector space V, we have a linear transformation T(g) : V → V for every element g of G. If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation and the group G acts on W in a natural way. The same construction applies to representations of an algebra.
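As an illustrative sketch, consider the order-2 group acting on R^2 by coordinate swap (a hypothetical representation chosen for brevity); the two diagonal lines are invariant under every group element and thus carry 1-dimensional subrepresentations:

```python
import numpy as np

# Representation of the order-2 group {e, g} on R^2: g acts by swapping
# coordinates. Both T(e) and T(g) preserve span{(1,1)} and span{(1,-1)}.
T_e = np.eye(2)
T_g = np.array([[0.0, 1.0],
                [1.0, 0.0]])

for w in (np.array([1.0, 1.0]), np.array([1.0, -1.0])):
    for T in (T_e, T_g):
        image = T @ w
        scalar = (image @ w) / (w @ w)
        # True in all four cases: span{w} is invariant, a subrepresentation.
        print(np.allclose(image, scalar * w))
```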
As another example, let T ∈ L(V) and Σ be the algebra generated by \{1, T\}, where 1 is the identity operator. Then Lat(T) = Lat(Σ).
Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains non-trivial elements for certain Σ. One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting algebra, and that L(V) is not commutative when dim V ≥ 2.
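The first step of that argument can be illustrated numerically: for commuting A and B, each eigenspace of A is B-invariant. In the sketch below, A and B are hypothetical matrices with B taken as a polynomial in A so that they commute:

```python
import numpy as np

# If AB = BA, then for v in the lambda-eigenspace of A we have
# A(Bv) = B(Av) = lambda (Bv), so Bv stays in that eigenspace.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
B = A @ A - 4.0 * A            # a polynomial in A, hence commutes with A

print(np.allclose(A @ B, B @ A))   # True: A and B commute

# The eigenspace of A for eigenvalue 3 is span{e3}; B maps it to itself.
v = np.array([0.0, 0.0, 1.0])
print(np.allclose(B @ v, (B @ v)[2] * v))  # True: B v is a multiple of v
```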
If A is an algebra, one can define a left regular representation Φ on A by Φ(a)b = ab; it is a homomorphism from A to L(A), the algebra of linear transformations on A.
The invariant subspaces of Φ are precisely the left ideals of A. A left ideal M of A gives a subrepresentation of A on M.
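As a small worked sketch (the algebra and basis are a hypothetical choice), take A = C[x]/(x^2) with basis \{1, x\}; the left ideal M = span\{x\} is then invariant under the left regular representation:

```python
import numpy as np

# Left regular representation of A = C[x]/(x^2) with basis {1, x}.
# An element a = a0 + a1*x acts on b = b0 + b1*x by Phi(a) b = a b.
def phi(a0, a1):
    # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x, since x^2 = 0.
    return np.array([[a0, 0.0],
                     [a1, a0]])

# The left ideal M = span{x} has coordinate vector (0, 1).
m = np.array([0.0, 1.0])
a = phi(3.0, 5.0)              # a = 3 + 5x
print(a @ m)                   # (0, 3): again a multiple of x, so M is invariant
```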
If M is a left ideal of A, then the left regular representation Φ on M descends to a representation Φ′ on the quotient vector space A/M. If [b] denotes an equivalence class in A/M, then Φ′(a)[b] = [ab]. The kernel of the representation Φ′ is the set \{a ∈ A : ab ∈ M for all b\}.

The representation Φ′ is irreducible if and only if M is a maximal left ideal, since a subspace V ⊂ A/M is invariant under Φ′ if and only if its preimage under the quotient map, V + M, is a left ideal in A.
See main article: Invariant subspace problem.
The invariant subspace problem concerns the case where V is a separable Hilbert space over the complex numbers, of dimension > 1, and T is a bounded operator. The problem is to decide whether every such T has a non-trivial, closed, invariant subspace. It is unsolved.
In the more general case where V is assumed to be a Banach space, Per Enflo (1976) found an example of an operator without a non-trivial invariant subspace. A concrete example of such an operator was produced in 1985 by Charles Read.
Related to invariant subspaces are so-called almost-invariant halfspaces (AIHSs). A closed subspace Y of a Banach space X is said to be almost-invariant under an operator T ∈ B(X) if

TY \subseteq Y + E

for some finite-dimensional subspace E; equivalently, Y is almost-invariant under T if there is a finite-rank operator F ∈ B(X) such that (T + F)Y ⊆ Y, i.e. if Y is invariant (in the usual sense) under T + F. In this case, the minimum possible dimension of E (or rank of F) is called the defect.
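A finite-dimensional toy sketch of the inclusion TY ⊆ Y + E (hypothetical data; recall the definition is only of real interest for infinite-dimensional halfspaces, where it is not automatic):

```python
import numpy as np

# Cyclic shift T: e1 -> e2 -> e3 -> e4 -> e1, with Y = span{e1, e2}.
T = np.roll(np.eye(4), 1, axis=0)

Y = np.eye(4)[:, :2]            # columns e1, e2 span Y
TY = T @ Y                      # columns e2, e3: TY is inside Y + span{e3}

# So Y is almost-invariant with E = span{e3}, i.e. defect 1.
# Equivalently, the rank-one correction F = -e3 e2^T makes Y invariant:
e2, e3 = np.eye(4)[:, 1], np.eye(4)[:, 2]
F = -np.outer(e3, e2)
print(np.round((T + F) @ Y, 10))  # columns lie back in span{e1, e2}
```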
Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that Y is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension.
The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if X is a complex infinite-dimensional Banach space and T ∈ B(X), then T admits an AIHS of defect at most 1. It is not currently known whether the same holds if X is a real Banach space.