In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts.
Augustin-Louis Cauchy proved the spectral theorem for symmetric matrices, i.e., that every real, symmetric matrix is diagonalizable. In addition, Cauchy was the first to be systematic about determinants.[1][2] The spectral theorem as generalized by John von Neumann is today perhaps the most important result of operator theory.
This article mainly focuses on the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.
We begin by considering a Hermitian matrix on $\mathbb{C}^n$. We consider a Hermitian map $A$ on a finite-dimensional complex inner product space $V$ endowed with a positive definite sesquilinear inner product $\langle \cdot, \cdot \rangle$. The Hermitian condition on $A$ means that for all $x, y \in V$,

$\langle A x, y \rangle = \langle x, A y \rangle.$

An equivalent condition is that $A^* = A$, where $A^*$ is the Hermitian conjugate of $A$. In the case that $A$ is identified with a Hermitian matrix, the matrix of $A^*$ is equal to its conjugate transpose. (If $A$ is a real matrix, then this is equivalent to $A^T = A$, that is, $A$ is a symmetric matrix.)

This condition implies that all eigenvalues of a Hermitian map are real: to see this, it is enough to apply it to the case when $x = y$ is an eigenvector. (Recall that an eigenvector of a linear map $A$ is a non-zero vector $v$ such that $A v = \lambda v$ for some scalar $\lambda$. The value $\lambda$ is the corresponding eigenvalue. Moreover, the eigenvalues are roots of the characteristic polynomial.)
Theorem. If $A$ is Hermitian on $V$, then there exists an orthonormal basis of $V$ consisting of eigenvectors of $A$, and each eigenvalue of $A$ is real.

We provide a sketch of a proof for the case where the underlying field of scalars is the complex numbers.
By the fundamental theorem of algebra, applied to the characteristic polynomial of $A$, there is at least one complex eigenvalue $\lambda_1$ and corresponding eigenvector $v_1$, which must by definition be non-zero. Then, since

$\lambda_1 \langle v_1, v_1 \rangle = \langle A(v_1), v_1 \rangle = \langle v_1, A(v_1) \rangle = \bar{\lambda}_1 \langle v_1, v_1 \rangle,$

we find that $\lambda_1$ is real. Now consider the space $\mathcal{K}^{n-1} = \operatorname{span}(v_1)^\perp$, the orthogonal complement of $v_1$. By Hermiticity, $\mathcal{K}^{n-1}$ is an invariant subspace of $A$: for any $k \in \mathcal{K}^{n-1}$ we have $\langle k, v_1 \rangle = 0$, and

$\langle A(k), v_1 \rangle = \langle k, A(v_1) \rangle = \langle k, \lambda_1 v_1 \rangle = 0,$

so $A(k) \in \mathcal{K}^{n-1}$. Applying the same argument to $\mathcal{K}^{n-1}$ shows that $A$ has a real eigenvalue $\lambda_2$ with a corresponding eigenvector $v_2 \in \mathcal{K}^{n-1}$, which is perpendicular to $v_1$. Repeating the argument on $\mathcal{K}^{n-2} = \operatorname{span}(\{v_1, v_2\})^\perp$, and so on, finite induction produces $n$ mutually orthogonal eigenvectors and finishes the proof.
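As a quick numerical check of the theorem's conclusions, here is a minimal sketch assuming NumPy is available; the random 4x4 matrix is an arbitrary example, not drawn from the text above.

```python
import numpy as np

# Numerical check: a Hermitian matrix has real eigenvalues and an
# orthonormal basis of eigenvectors (computed here via np.linalg.eigh).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2                            # force A* = A

w, U = np.linalg.eigh(A)
print(np.allclose(A, A.conj().T))                   # True: A is Hermitian
print(w.dtype)                                      # float64: eigenvalues are real
print(np.allclose(U.conj().T @ U, np.eye(4)))       # True: eigenvectors orthonormal
print(np.allclose(U @ np.diag(w) @ U.conj().T, A))  # True: U diagonalizes A
```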
The matrix representation of $A$ in a basis of eigenvectors is diagonal, and by the construction the proof gives a basis of mutually orthogonal eigenvectors; by choosing them to be unit vectors one obtains an orthonormal basis of eigenvectors. $A$ can be written as a linear combination of pairwise orthogonal projections, called its spectral decomposition. Let

$V_\lambda = \{\, v \in V : A v = \lambda v \,\}$

be the eigenspace corresponding to an eigenvalue $\lambda$ of $A$. Then $V$ is the orthogonal direct sum of the spaces $V_\lambda$, and if $P_\lambda$ denotes the orthogonal projection onto $V_\lambda$, the spectral decomposition reads

$A = \sum_{\lambda} \lambda \, P_\lambda,$

where the sum runs over the distinct eigenvalues of $A$.
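A minimal numerical sketch of this decomposition, assuming NumPy; the example matrix and rounding tolerance are arbitrary choices.

```python
import numpy as np

# Rebuild a symmetric A from its spectral decomposition
# A = sum over eigenvalues lam of lam * P_lam, with P_lam the
# orthogonal projection onto the eigenspace V_lam.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
w, U = np.linalg.eigh(A)

A_rebuilt = np.zeros_like(A)
for lam in np.unique(np.round(w, 12)):   # group numerically equal eigenvalues
    cols = np.isclose(w, lam)
    V = U[:, cols]                       # orthonormal basis of V_lam
    P = V @ V.conj().T                   # orthogonal projection onto V_lam
    A_rebuilt += lam * P

print(np.allclose(A, A_rebuilt))         # True: A = sum of lam * P_lam
```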
When the matrix being decomposed is Hermitian, the spectral decomposition is a special case of the Schur decomposition (see the proof in case of normal matrices below).
The spectral decomposition is a special case of the singular value decomposition, which states that any matrix $A \in \mathbb{C}^{m \times n}$ can be expressed as

$A = U \Sigma V^*,$

where $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ are unitary matrices and $\Sigma \in \mathbb{R}^{m \times n}$ is a diagonal matrix. The diagonal entries of $\Sigma$ are uniquely determined by $A$ and are known as the singular values of $A$. If $A$ is Hermitian, then $A^* = A$ implies $V \Sigma U^* = U \Sigma V^*$ and $U = V$.
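A brief numerical sketch of this relationship, assuming NumPy; the positive semidefinite example is chosen to sidestep the sign ambiguity that negative eigenvalues would introduce.

```python
import numpy as np

# For a symmetric positive semidefinite matrix, the singular values
# coincide with the eigenvalues and one may take U = V in A = U S V*.
# (For indefinite Hermitian matrices the singular values are |eigenvalue|.)
rng = np.random.default_rng(1)
C = rng.standard_normal((3, 3))
A = C @ C.T                                  # symmetric positive semidefinite

U, s, Vh = np.linalg.svd(A)                  # A = U @ diag(s) @ Vh
w = np.linalg.eigvalsh(A)[::-1]              # eigenvalues, largest first

print(np.allclose(s, w))                     # singular values = eigenvalues
print(np.allclose(U @ np.diag(s) @ U.T, A))  # U alone diagonalizes A, i.e. U = V
```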
See main article: Normal matrix. The spectral theorem extends to a more general class of matrices. Let $A$ be an operator on a finite-dimensional inner product space. $A$ is said to be normal if $A^* A = A A^*$.
One can show that $A$ is normal if and only if it is unitarily diagonalizable, using the Schur decomposition. That is, any matrix can be written as $A = U T U^*$, where $U$ is unitary and $T$ is upper triangular. If $A$ is normal, then one sees that $T T^* = T^* T$. Therefore, $T$ must be diagonal, since a normal upper triangular matrix is diagonal (see normal matrix). The converse is obvious.
In other words, $A$ is normal if and only if there exists a unitary matrix $U$ such that $A = U D U^*$, where $D$ is a diagonal matrix. Then, the entries of the diagonal of $D$ are the eigenvalues of $A$. The column vectors of $U$ are the eigenvectors of $A$, and they are orthonormal. Unlike the Hermitian case, the entries of $D$ need not be real.
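A minimal sketch in NumPy, using a plane rotation as the normal (but non-Hermitian) example; the angle is an arbitrary choice.

```python
import numpy as np

# A rotation matrix is normal (A A* = A* A) but not Hermitian; it is
# unitarily diagonalizable with complex eigenvalues exp(+-i*theta).
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(A @ A.conj().T, A.conj().T @ A))  # True: A is normal

w, U = np.linalg.eig(A)                             # A = U D U*
print(w)                                            # complex: exp(+-i*theta)
print(np.allclose(U.conj().T @ U, np.eye(2)))       # True: columns orthonormal
print(np.allclose(U @ np.diag(w) @ U.conj().T, A))  # True: A = U D U*
```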
In the more general setting of Hilbert spaces, which may have an infinite dimension, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case.
As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. One cannot rely on determinants to show existence of eigenvalues, but one can use a maximization argument analogous to the variational characterization of eigenvalues.
If the compactness assumption is removed, then it is not true that every self-adjoint operator has eigenvectors. For example, the multiplication operator $M_x$ on $L^2([0,1])$, which takes each $\psi(x) \in L^2([0,1])$ to $x \psi(x)$, is bounded and self-adjoint but has no eigenvectors. However, its spectrum, suitably defined, is still equal to the interval $[0,1]$.
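The standard argument for the absence of eigenvectors is short enough to sketch here.

```latex
% Suppose M_x \psi = \lambda \psi for some \psi \in L^2([0,1]). Then
x\,\psi(x) = \lambda\,\psi(x) \text{ a.e.}
\;\Longrightarrow\; (x - \lambda)\,\psi(x) = 0 \text{ a.e.}
\;\Longrightarrow\; \psi(x) = 0 \text{ for a.e. } x \neq \lambda
\;\Longrightarrow\; \psi = 0 \text{ in } L^2([0,1]),
% since the single point x = \lambda has measure zero. So no \lambda
% admits a non-zero eigenvector, even though \sigma(M_x) = [0,1].
```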
See also: Eigenfunction.
The next generalization we consider is that of bounded self-adjoint operators on a Hilbert space. Such operators may have no eigenvectors: for instance, let $A$ be the operator of multiplication by $t$ on $L^2([0,1])$, that is,

$[A \varphi](t) = t \varphi(t).$

This operator does not have any eigenvectors in $L^2([0,1])$, though it does have eigenvectors in a larger space. Namely, the distribution $\varphi(t) = \delta(t - t_0)$, where $\delta$ is the Dirac delta function, is an eigenvector when construed in an appropriate sense; the delta functions are "generalized eigenvectors" of $A$, but not eigenvectors in the usual sense.
In the absence of (true) eigenvectors, one can look for a "spectral subspace" consisting of almost eigenvectors, i.e., a closed subspace $V_E$ of $H$ associated with a Borel set $E \subset \sigma(A)$ in the spectrum of $A$. This subspace can be thought of as the closed span of generalized eigenvectors for $A$ with eigenvalues in $E$. In the above example, where $[A \varphi](t) = t \varphi(t)$, we might consider the subspace of functions supported on a small interval $[a, a + \varepsilon]$ inside $[0,1]$. This space is invariant under $A$, and for any $\varphi$ in this subspace, $A \varphi$ is very close to $a \varphi$. Each subspace, in turn, is encoded by the associated projection operator, and the collection of all the subspaces is then expressed by a projection-valued measure.
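A quick numerical sketch of the "almost eigenvector" estimate, assuming NumPy; the grid size, the point $a$, and $\varepsilon$ are arbitrary.

```python
import numpy as np

# Discretize [A phi](t) = t*phi(t) on a grid and take phi to be the
# indicator of [a, a+eps]. Then A phi stays close to a*phi: the relative
# error is at most eps, so phi is an "almost eigenvector" for eigenvalue a.
n, a, eps = 10_000, 0.3, 0.01
t = np.linspace(0.0, 1.0, n)
phi = ((t >= a) & (t <= a + eps)).astype(float)  # supported on [a, a+eps]

A_phi = t * phi                                  # the multiplication operator
err = np.linalg.norm(A_phi - a * phi) / np.linalg.norm(phi)
print(err <= eps)                                # True: ||A phi - a phi|| <= eps ||phi||
```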
One formulation of the spectral theorem expresses the operator $A$ as an integral of the coordinate function over the operator's spectrum $\sigma(A)$ with respect to a projection-valued measure $\pi$:

$A = \int_{\sigma(A)} \lambda \, d\pi(\lambda).$
When the self-adjoint operator in question is compact, this version of the spectral theorem reduces to something similar to the finite-dimensional spectral theorem above, except that the operator is expressed as a finite or countably infinite linear combination of projections, that is, the measure consists only of atoms.
An alternative formulation of the spectral theorem says that every bounded self-adjoint operator is unitarily equivalent to a multiplication operator. The significance of this result is that multiplication operators are in many ways easy to understand.
The spectral theorem is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure.
There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that the multiplying function may now be complex-valued.
There is also a formulation of the spectral theorem in terms of direct integrals. It is similar to the multiplication-operator formulation, but more canonical.
Let $A$ be a bounded self-adjoint operator and let $\sigma(A)$ be the spectrum of $A$. The direct-integral formulation of the spectral theorem associates two quantities to $A$: first, a measure $\mu$ on $\sigma(A)$, and second, a family of Hilbert spaces $\{H_\lambda\},\ \lambda \in \sigma(A)$. We then form the direct integral Hilbert space

$\int_{\sigma(A)}^{\oplus} H_\lambda \, d\mu(\lambda).$

The elements of this space are functions (or "sections") $s(\lambda),\ \lambda \in \sigma(A)$, such that $s(\lambda) \in H_\lambda$ for all $\lambda$. The direct-integral version of the spectral theorem then asserts that $A$ is unitarily equivalent to the "multiplication by $\lambda$" operator on this space; the measure $\mu$ is uniquely determined by $A$ up to measure-theoretic equivalence, and the dimensions of the spaces $H_\lambda$ are uniquely determined by $A$ up to a set of $\mu$-measure zero.
The spaces $H_\lambda$ can be thought of as something like "eigenspaces" for $A$. Note, however, that unless the one-element set $\{\lambda\}$ has positive measure, the space $H_\lambda$ is not actually a subspace of the direct integral. Thus, the $H_\lambda$ should be thought of as "generalized eigenspaces": the elements of $H_\lambda$ are "eigenvectors" that do not actually belong to the Hilbert space.
Although both the multiplication-operator and direct-integral formulations of the spectral theorem express a self-adjoint operator as unitarily equivalent to a multiplication operator, the direct-integral approach is more canonical. First, the set over which the direct integral takes place (the spectrum of the operator) is canonical. Second, the function we are multiplying by is canonical in the direct-integral approach: simply the function $\lambda \mapsto \lambda$.
A vector $\varphi$ is called a cyclic vector for $A$ if the vectors $\varphi, A\varphi, A^2\varphi, \ldots$ span a dense subspace of the Hilbert space. Suppose $A$ is a bounded self-adjoint operator for which a cyclic vector exists. In that case, there is no distinction between the direct-integral and multiplication-operator formulations of the spectral theorem. Indeed, in that case, there is a measure $\mu$ on the spectrum $\sigma(A)$ of $A$ such that $A$ is unitarily equivalent to the "multiplication by $\lambda$" operator on $L^2(\sigma(A), \mu)$. This result represents $A$ simultaneously as a multiplication operator and as a direct integral: $L^2(\sigma(A), \mu)$ is just a direct integral in which each Hilbert space $H_\lambda$ is simply $\mathbb{C}$.
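In finite dimensions the density condition becomes a rank condition on the Krylov vectors. A minimal sketch, assuming NumPy; the helper has_cyclic_vector and the example matrices are ours, purely for illustration.

```python
import numpy as np

# phi is cyclic for A on C^n iff phi, A phi, ..., A^{n-1} phi span C^n,
# i.e. the Krylov matrix has full rank. A self-adjoint matrix with distinct
# eigenvalues admits cyclic vectors; one with a repeated eigenvalue does not.
def has_cyclic_vector(A, phi):
    n = A.shape[0]
    K = np.column_stack([np.linalg.matrix_power(A, k) @ phi for k in range(n)])
    return np.linalg.matrix_rank(K) == n

phi = np.ones(3)
A_distinct = np.diag([1.0, 2.0, 3.0])      # simple spectrum
A_repeated = np.diag([1.0, 1.0, 3.0])      # eigenvalue 1 has multiplicity 2

print(has_cyclic_vector(A_distinct, phi))  # True
print(has_cyclic_vector(A_repeated, phi))  # False: no vector is cyclic here
```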
Not every bounded self-adjoint operator admits a cyclic vector; indeed, by the uniqueness in the direct integral decomposition, this can occur only when all the $H_\lambda$ have dimension one. When this happens, we say that $A$ has "simple spectrum" in the sense of spectral multiplicity theory: a bounded self-adjoint operator that admits a cyclic vector may be thought of as the infinite-dimensional generalization of a self-adjoint matrix with distinct eigenvalues.
Although not every $A$ admits a cyclic vector, it is easy to see that we can decompose the Hilbert space as a direct sum of invariant subspaces on which $A$ has a cyclic vector. This observation is the key to the proofs of the multiplication-operator and direct-integral forms of the spectral theorem.
One important application of the spectral theorem (in whatever form) is the idea of defining a functional calculus. That is, given a function $f$ defined on the spectrum of $A$, we wish to define an operator $f(A)$. If $f$ is simply a positive power, $f(x) = x^n$, then $f(A)$ is just the $n$-th power of $A$, that is, $A^n$. The interesting cases are where $f$ is a nonpolynomial function such as a square root or an exponential. Either of the versions of the spectral theorem provides such a functional calculus. In the direct-integral version, for example, $f(A)$ acts as the "multiplication by $f$" operator in the direct integral:

$[f(A) s](\lambda) = f(\lambda) \, s(\lambda).$

That is to say, each space $H_\lambda$ in the direct integral is an eigenspace for $f(A)$ with eigenvalue $f(\lambda)$.
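For a Hermitian matrix, the functional calculus amounts to applying $f$ to the eigenvalues. A minimal sketch, assuming NumPy and SciPy; the example matrix is an arbitrary positive definite choice so that the square root is real.

```python
import numpy as np
from scipy.linalg import expm

# Functional calculus on a Hermitian matrix: f(A) = U diag(f(w)) U*,
# where A = U diag(w) U* is the eigendecomposition. We try f = exp and
# f = sqrt; scipy's expm serves as an independent cross-check for exp.
A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3 (positive)
w, U = np.linalg.eigh(A)

def f_of_A(f):
    return U @ np.diag(f(w)) @ U.conj().T

exp_A = f_of_A(np.exp)
sqrt_A = f_of_A(np.sqrt)

print(np.allclose(exp_A, expm(A)))      # matches the matrix exponential
print(np.allclose(sqrt_A @ sqrt_A, A))  # sqrt(A) squared recovers A
```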
Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is also a spectral theorem for self-adjoint operators that applies in these cases. To give an example, every constant-coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed, the unitary operator that implements this equivalence is the Fourier transform; the multiplication operator is a type of Fourier multiplier.
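A small numerical sketch of this equivalence for the second-derivative operator on periodic functions, assuming NumPy; the grid size and test function are arbitrary.

```python
import numpy as np

# Under the Fourier transform, the constant-coefficient operator d^2/dx^2
# becomes multiplication by -(2*pi*k)^2. Check on psi(x) = sin(2*pi*x),
# whose second derivative is -(2*pi)^2 * psi.
n = 256
x = np.arange(n) / n                     # periodic grid on [0, 1)
psi = np.sin(2 * np.pi * x)

k = np.fft.fftfreq(n, d=1.0 / n)         # integer frequencies
psi_xx = np.fft.ifft(-(2 * np.pi * k) ** 2 * np.fft.fft(psi)).real

print(np.allclose(psi_xx, -(2 * np.pi) ** 2 * psi))  # True
```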
In general, the spectral theorem for self-adjoint operators may take several equivalent forms.[9] Notably, all of the formulations given in the previous section for bounded self-adjoint operators (the projection-valued measure version, the multiplication-operator version, and the direct-integral version) continue to hold for unbounded self-adjoint operators, with small technical modifications to deal with domain issues. Specifically, the only reason the multiplication operator $A$ on $L^2([0,1])$ considered above is bounded is the choice of the domain $[0,1]$; the same operator on, e.g., $L^2(\mathbb{R})$ would be unbounded.
The notion of "generalized eigenvectors" naturally extends to unbounded self-adjoint operators, as they are characterized as non-normalizable eigenvectors. Contrary to the case of almost eigenvectors, however, the eigenvalues can be real or complex and, even if they are real, do not necessarily belong to the spectrum. For self-adjoint operators, though, there always exists a real set of "generalized eigenvalues" such that the corresponding set of generalized eigenvectors is complete.