Polar decomposition explained
In mathematics, the polar decomposition of a square real or complex matrix $A$ is a factorization of the form $A = UP$, where $U$ is a unitary matrix and $P$ is a positive semi-definite Hermitian matrix ($U$ is an orthogonal matrix and $P$ is a positive semi-definite symmetric matrix in the real case), both square and of the same size.[1]
If a real $n \times n$ matrix $A$ is interpreted as a linear transformation of $n$-dimensional space $\mathbb{R}^n$, the polar decomposition separates it into a rotation or reflection $U$ of $\mathbb{R}^n$, and a scaling of the space along a set of $n$ orthogonal axes.
The polar decomposition of a square matrix $A$ always exists. If $A$ is invertible, the decomposition is unique, and the factor $P$ will be positive-definite. In that case, $A$ can be written uniquely in the form $A = U e^{X}$, where $U$ is unitary and $X$ is the unique self-adjoint logarithm of the matrix $P$.[2] This decomposition is useful in computing the fundamental group of (matrix) Lie groups.[3]
The polar decomposition can also be defined as $A = P'U$, where $P'$ is a symmetric positive-definite matrix with the same eigenvalues as $P$ but different eigenvectors.
It is analogous to the polar form of a nonzero complex number $z$, which may be written as $z = u r$, where $r$ is its absolute value (a non-negative real number), and $u$ is a complex number with unit norm (an element of the circle group).
The definition $A = UP$ may be extended to rectangular matrices $A \in \mathbb{C}^{m \times n}$ by requiring $U$ to be a semi-unitary matrix and $P$ to be a positive-semidefinite Hermitian matrix. The decomposition always exists, and $P$ is always unique. The matrix $U$ is unique if and only if $A$ has full rank.
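As a concrete illustration, here is a minimal sketch using NumPy (the rectangular matrix below is an arbitrary example; SciPy users can obtain the same factors directly from scipy.linalg.polar). Both factors can be read off the reduced SVD:

    import numpy as np

    A = np.array([[1.0,  2.0],
                  [0.0,  1.0],
                  [1.0, -1.0]])                        # 3 x 2 example with full column rank

    W, s, Vh = np.linalg.svd(A, full_matrices=False)   # reduced SVD: A = W diag(s) Vh
    U = W @ Vh                                         # semi-unitary factor
    P = Vh.conj().T @ np.diag(s) @ Vh                  # positive-semidefinite Hermitian factor

    assert np.allclose(A, U @ P)                       # A = U P
    assert np.allclose(U.conj().T @ U, np.eye(2))      # U*U = I (semi-unitary)
    assert np.all(np.linalg.eigvalsh(P) >= -1e-12)     # P is positive semi-definite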
Geometric interpretation
A real square $m \times m$ matrix $A$ can be interpreted as the linear transformation of $\mathbb{R}^m$ that takes a column vector $x$ to $Ax$. Then, in the polar decomposition $A = RP$, the factor $R$ is an $m \times m$ real orthogonal matrix. The polar decomposition can then be seen as expressing the linear transformation defined by $A$ as a scaling of the space $\mathbb{R}^m$ along each eigenvector $e_i$ of $P$ by a scale factor $\sigma_i$ (the action of $P$), followed by a rotation of $\mathbb{R}^m$ (the action of $R$).

Alternatively, the decomposition $A = PR$ expresses the transformation defined by $A$ as a rotation ($R$) followed by a scaling ($P$) along certain orthogonal directions. The scale factors are the same, but the directions are different.
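To make the geometric reading concrete, here is a small sketch using NumPy (the 2×2 matrix is an arbitrary example with positive determinant, so the orthogonal factor is a pure rotation). It extracts the rotation angle and the scale factors along the orthogonal axes:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 1.0]])                  # arbitrary example, det A > 0

    W, s, Vh = np.linalg.svd(A)
    R = W @ Vh                                  # orthogonal factor (det R = +1 here)
    P = Vh.T @ np.diag(s) @ Vh                  # symmetric positive-definite factor

    angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))   # rotation angle of R
    scales, axes = np.linalg.eigh(P)                   # scale factors and orthogonal axes
    print(angle, scales)                        # A scales along the columns of `axes`, then rotates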
Properties
The polar decomposition of the complex conjugate of $A$ is given by $\overline{A} = \overline{U}\,\overline{P}$. Note that
$$\det A = \det U \det P = e^{i\theta} r$$
gives the corresponding polar decomposition of the determinant of $A$, since $\det U = e^{i\theta}$ and $\det P = r = \left|\det A\right|$. In particular, if $A$ has determinant 1 then both $U$ and $P$ have determinant 1.
The positive-semidefinite matrix $P$ is always unique, even if $A$ is singular, and is denoted as
$$P = \left(A^* A\right)^{1/2},$$
where $A^*$ denotes the conjugate transpose of $A$. This expression is well-defined because $A^* A$ is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root.[4] If $A$ is invertible, then $P$ is positive-definite, thus also invertible, and the matrix $U$ is uniquely determined by
$$U = A P^{-1}.$$
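A numerical check of these formulas (a minimal sketch using NumPy and SciPy; the matrix is an arbitrary invertible example): $P$ is formed as the matrix square root of $A^* A$, and $U$ is recovered as $A P^{-1}$.

    import numpy as np
    from scipy.linalg import sqrtm

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])                         # arbitrary invertible example

    P = sqrtm(A.conj().T @ A)                          # unique positive-semidefinite square root of A*A
    U = A @ np.linalg.inv(P)                           # well defined because A (hence P) is invertible

    assert np.allclose(A, U @ P)                       # A = U P
    assert np.allclose(U.conj().T @ U, np.eye(2))      # U is unitary (orthogonal, since A is real)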
Relation to the SVD
In terms of the singular value decomposition (SVD) of $A$, $A = W \Sigma V^*$, one has
$$P = V \Sigma V^*, \qquad U = W V^*,$$
where $U$, $V$, and $W$ are unitary matrices (called orthogonal matrices if the field is the reals $\mathbb{R}$). This confirms that $P$ is positive semi-definite and $U$ is unitary. Thus, the existence of the SVD is equivalent to the existence of the polar decomposition.
One can also decompose $A$ in the form
$$A = P' U.$$
Here $U$ is the same as before and $P'$ is given by
$$P' = U P U^{-1} = \left(A A^*\right)^{1/2} = W \Sigma W^*.$$
This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. The left polar decomposition is also known as the reverse polar decomposition.

The polar decomposition of a square invertible real matrix $A$ is of the form
$$A = |A| R,$$
where $|A| = \left(A A^\mathsf{T}\right)^{1/2}$ is a positive-definite matrix and $R$ is an orthogonal matrix.
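Both forms can be computed from a single SVD, as the following sketch shows (NumPy; the square matrix is an arbitrary example):

    import numpy as np

    A = np.array([[0.0, -2.0],
                  [1.0,  1.0]])                 # arbitrary square example

    W, s, Vh = np.linalg.svd(A)
    U  = W @ Vh                                 # unitary factor shared by both decompositions
    P  = Vh.conj().T @ np.diag(s) @ Vh          # P  = V Sigma V*
    Pl = W @ np.diag(s) @ W.conj().T            # P' = W Sigma W* = U P U^{-1}

    assert np.allclose(A, U @ P)                # right polar decomposition
    assert np.allclose(A, Pl @ U)               # left polar decomposition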
Relation to normal matrices
The matrix $A$ with polar decomposition $A = UP$ is normal if and only if $U$ and $P$ commute: $UP = PU$, or equivalently, they are simultaneously diagonalizable.
Construction and proofs of existence
The core idea behind the construction of the polar decomposition is similar to that used to compute the singular-value decomposition.
Derivation for normal matrices
If $A$ is normal, then it is unitarily equivalent to a diagonal matrix: $A = V \Lambda V^*$ for some unitary matrix $V$ and some diagonal matrix $\Lambda$. This makes the derivation of its polar decomposition particularly straightforward, as we can then write
$$A = V \Phi_\Lambda |\Lambda| V^* = \left(V \Phi_\Lambda V^*\right)\left(V |\Lambda| V^*\right),$$
where $\Phi_\Lambda$ is a diagonal matrix containing the phases of the elements of $\Lambda$, that is, $(\Phi_\Lambda)_{ii} = \Lambda_{ii}/|\Lambda_{ii}|$ when $\Lambda_{ii} \neq 0$, and $(\Phi_\Lambda)_{ii} = 0$ when $\Lambda_{ii} = 0$.

The polar decomposition is thus $A = UP$, with $U = V \Phi_\Lambda V^*$ and $P = V |\Lambda| V^*$ diagonal in the eigenbasis of $A$ and having eigenvalues equal to the phases and absolute values of those of $A$, respectively.
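For a normal matrix this recipe translates directly into code (a sketch using NumPy; the skew-symmetric matrix below is an arbitrary normal example with distinct, nonzero eigenvalues):

    import numpy as np

    N = np.array([[0.0, -3.0],
                  [3.0,  0.0]])                        # skew-symmetric, hence normal

    lam, V = np.linalg.eig(N)                          # N = V diag(lam) V*; V is unitary here since
    phases = lam / np.abs(lam)                         # N is normal with distinct eigenvalues

    U = V @ np.diag(phases) @ V.conj().T               # eigenvalues: the phases of those of N
    P = V @ np.diag(np.abs(lam)) @ V.conj().T          # eigenvalues: their absolute values

    assert np.allclose(N, U @ P)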
Derivation for invertible matrices
From the singular-value decomposition, it can be shown that a matrix $A$ is invertible if and only if $A^* A$ (equivalently, $A A^*$) is. Moreover, this is true if and only if the eigenvalues of $A^* A$ are all nonzero.[5]

In this case, the polar decomposition is directly obtained by writing
$$A = A \left(A^* A\right)^{-1/2} \left(A^* A\right)^{1/2}$$
and observing that $A \left(A^* A\right)^{-1/2}$ is unitary. To see this, we can exploit the spectral decomposition of $A^* A$ to write
$$A \left(A^* A\right)^{-1/2} = A V D^{-1/2} V^*.$$

In this expression, $V^*$ is unitary because $V$ is. To show that also $A V D^{-1/2}$ is unitary, we can use the SVD to write $A = W D^{1/2} V^*$, so that
$$A V D^{-1/2} = W D^{1/2} V^* V D^{-1/2} = W,$$
where again $W$ is unitary by construction.

Yet another way to directly show the unitarity of $A \left(A^* A\right)^{-1/2}$ is to note that, writing the SVD of $A$ in terms of rank-1 matrices as $A = \sum_k s_k v_k w_k^*$, where $s_k$ are the singular values of $A$, we have
$$A \left(A^* A\right)^{-1/2} = \left(\sum_j s_j v_j w_j^*\right)\left(\sum_k s_k^{-1} w_k w_k^*\right) = \sum_k v_k w_k^*,$$
which directly implies the unitarity of $A \left(A^* A\right)^{-1/2}$ because a matrix is unitary if and only if its singular values are all equal to 1.

Note how, from the above construction, it follows that the unitary matrix in the polar decomposition of an invertible matrix is uniquely defined.
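The construction above can be followed step by step numerically (a sketch using NumPy; the matrix is an arbitrary invertible example): the spectral decomposition of $A^* A$ gives $(A^* A)^{-1/2}$, and multiplying by $A$ yields a unitary matrix.

    import numpy as np

    A = np.array([[2.0, -1.0],
                  [1.0,  1.0]])                        # arbitrary invertible example

    d, V = np.linalg.eigh(A.conj().T @ A)              # A*A = V diag(d) V*, with d > 0
    U = A @ V @ np.diag(d ** -0.5) @ V.conj().T        # U = A (A*A)^{-1/2} = A V D^{-1/2} V*
    P = V @ np.diag(d ** 0.5) @ V.conj().T             # P = (A*A)^{1/2}

    assert np.allclose(U.conj().T @ U, np.eye(2))      # U is unitary, as argued above
    assert np.allclose(A, U @ P)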
General derivation
The SVD of a square matrix $A$ reads $A = W D^{1/2} V^*$, with $W, V$ unitary matrices, and $D$ a diagonal, positive semi-definite matrix. By simply inserting an additional pair of $W$s or $V$s, we obtain the two forms of the polar decomposition of $A$:
$$A = W D^{1/2} V^* = \left(W D^{1/2} W^*\right)\left(W V^*\right) = \left(W V^*\right)\left(V D^{1/2} V^*\right).$$

More generally, if $A$ is some rectangular $n \times m$ matrix, its SVD can be written as $A = W D^{1/2} V^*$, where now $W$ and $V$ are isometries with dimensions $n \times r$ and $m \times r$, respectively, where $r \equiv \operatorname{rank}(A)$, and $D$ is again a diagonal positive semi-definite square matrix with dimensions $r \times r$. We can now apply the same reasoning used in the above equation to write $A = UP$, but now $U \equiv W V^*$ is not in general unitary. Nonetheless, $U$ has the same support and range as $A$, and it satisfies $U^* U = V V^*$ and $U U^* = W W^*$. This makes $U$ into an isometry when its action is restricted onto the support of $A$; that is, $U$ is a partial isometry.
As an explicit example of this more general case, consider the SVD of a rectangular matrix with full column rank: the resulting factor $U = W V^*$ is an isometry, but not unitary. If the matrix is instead rank-deficient, the resulting $U$ is only a partial isometry (and not an isometry).
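A concrete numerical stand-in for such an example (a sketch using NumPy; the rank-deficient matrix below is arbitrary) shows that the factor built from the rank-$r$ pieces of the SVD is a partial isometry but not an isometry:

    import numpy as np

    A = np.array([[1.0, 1.0, 0.0],
                  [1.0, 1.0, 0.0]])                    # 2 x 3 matrix of rank 1

    W, s, Vh = np.linalg.svd(A)
    r = int(np.sum(s > 1e-12))                         # numerical rank
    Wr, Vhr = W[:, :r], Vh[:r, :]                      # keep only the rank-r isometric parts

    U = Wr @ Vhr                                       # U = W V* built from the rank-r factors
    P = Vhr.conj().T @ np.diag(s[:r]) @ Vhr

    assert np.allclose(A, U @ P)
    assert np.allclose(U @ U.conj().T @ U, U)          # defining property of a partial isometry
    assert not np.allclose(U.conj().T @ U, np.eye(3))  # ...but U is not an isometry on all of C^3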
Bounded operators on Hilbert space
The polar decomposition of any bounded linear operator A between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.
The polar decomposition for matrices generalizes as follows: if A is a bounded linear operator then there is a unique factorization of A as a product A = UP where U is a partial isometry, P is a non-negative self-adjoint operator and the initial space of U is the closure of the range of P.
The operator U must be weakened to a partial isometry, rather than unitary, because of the following issue. If A is the one-sided shift on l²(N), then A is an isometry, so |A| = (A*A)^{1/2} = I. Thus, if A = U |A|, then U must be A, which is not unitary.
The existence of a polar decomposition is a consequence of Douglas' lemma: if A, B are bounded operators on a Hilbert space H and A*A ≤ B*B, then there exists a contraction C such that A = CB; furthermore, C is unique if ker(B*) ⊂ ker(C).
The operator C can be defined by C(Bh) := Ah for all h in H, extended by continuity to the closure of ran(B), and by zero on its orthogonal complement, so that C is defined on all of H. The lemma then follows since A*A ≤ B*B implies ker(B) ⊂ ker(A).
In particular, if A*A = B*B, then C is a partial isometry, which is unique if ker(B*) ⊂ ker(C). In general, for any bounded operator A,
A*A = (A*A)^{1/2} (A*A)^{1/2},
where (A*A)^{1/2} is the unique positive square root of A*A given by the usual functional calculus. So, by the lemma, we have
A = U (A*A)^{1/2}
for some partial isometry U, which is unique if ker(A) ⊂ ker(U). Take P to be (A*A)^{1/2} and one obtains the polar decomposition A = UP. Notice that an analogous argument can be used to show A = P'U, where P' is positive and U a partial isometry.
When H is finite-dimensional, U can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.
By property of the continuous functional calculus, |A| is in the C*-algebra generated by A. A similar but weaker statement holds for the partial isometry: U is in the von Neumann algebra generated by A. If A is invertible, the polar part U will be in the C*-algebra as well.
Unbounded operators
If A is a closed, densely defined unbounded operator between complex Hilbert spaces, then it still has a (unique) polar decomposition
A = U |A|,
where |A| is a (possibly unbounded) non-negative self-adjoint operator with the same domain as A, and U is a partial isometry vanishing on the orthogonal complement of the range ran(|A|).
The proof uses the same lemma as above, which goes through for unbounded operators in general. If dom(A*A) = dom(B*B) and A*Ah = B*Bh for all h ∈ dom(A*A), then there exists a partial isometry U such that A = UB. U is unique if ran(B)⊥ ⊂ ker(U). The operator A being closed and densely defined ensures that the operator A*A is self-adjoint (with dense domain) and therefore allows one to define (A*A)^{1/2}. Applying the lemma gives the polar decomposition.
If an unbounded operator A is affiliated to a von Neumann algebra M, and A = UP is its polar decomposition, then U is in M and so is the spectral projection of P, 1_B(P), for any Borel set B in [0, ∞).
Quaternion polar decomposition
The polar decomposition of quaternions $\mathbb{H}$, with orthonormal basis quaternions $1, \widehat{i}, \widehat{j}, \widehat{k}$, depends on the unit 2-dimensional sphere
$$\widehat{r} \in \lbrace x\widehat{i} + y\widehat{j} + z\widehat{k} \in \mathbb{H} \smallsetminus \mathbb{R} : x^2 + y^2 + z^2 = 1 \rbrace$$
of square roots of minus one, known as right versors. Given any $\widehat{r}$ on this sphere and an angle $-\pi < a \le \pi$, the versor
$$e^{a\widehat{r}} = \cos(a) + \widehat{r}\sin(a)$$
lies on the unit 3-sphere of $\mathbb{H}$. For $a = 0$ and $a = \pi$, the versor is 1 or −1, regardless of which $\widehat{r}$ is selected. The norm $t$ of a quaternion $q$ is the Euclidean distance from the origin to $q$. When a quaternion is not just a real number, there is a unique polar decomposition:
$$q = t \exp\left(a \widehat{r}\right).$$
Here $\widehat{r}$, $a$, and $t$ are all uniquely determined: $\widehat{r}$ is a right versor ($\widehat{r}^2 = -1$), $a$ satisfies $0 < a < \pi$, and $t > 0$.
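The three quantities are easy to extract numerically (a sketch using NumPy, with the quaternion stored as a length-4 array (w, x, y, z); the sample quaternion is arbitrary and not purely real):

    import numpy as np

    q = np.array([1.0, 2.0, -2.0, 1.0])                # q = w + x i + y j + z k

    t = np.linalg.norm(q)                              # norm: Euclidean distance from the origin
    w, v = q[0], q[1:]                                 # real part and vector (imaginary) part
    a = np.arccos(w / t)                               # angle, 0 < a < pi since q is not real
    r_hat = v / np.linalg.norm(v)                      # right versor: a unit square root of -1

    q_rebuilt = t * np.concatenate(([np.cos(a)], np.sin(a) * r_hat))   # t (cos a + r_hat sin a)
    assert np.allclose(q, q_rebuilt)                   # q = t exp(a r_hat)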
Alternative planar decompositions
In the Cartesian plane, alternative planar ring decompositions arise as follows: for the dual numbers the unit circle is replaced by the line $x = 1$ and the polar angle by the slope $y/x$, while for the split-complex numbers it is replaced by the unit hyperbola, parametrized by the hyperbolic angle.
Numerical determination of the matrix polar decomposition
To compute an approximation of the polar decomposition $A = UP$, usually the unitary factor $U$ is approximated. The iteration is based on Heron's method for the square root of 1 and computes, starting from $U_0 = A$, the sequence
$$U_{k+1} = \tfrac{1}{2}\left(U_k + \left(U_k^*\right)^{-1}\right), \qquad k = 0, 1, 2, \ldots$$
The combination of inversion and Hermitian conjugation is chosen so that in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.
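A minimal implementation of this basic iteration (a sketch using NumPy; the stopping rule and the example matrix are arbitrary choices, and the iteration assumes A is invertible):

    import numpy as np

    def polar_unitary_newton(A, tol=1e-12, max_iter=100):
        """Approximate the unitary polar factor of an invertible matrix A."""
        U = A.astype(complex)
        for _ in range(max_iter):
            U_next = 0.5 * (U + np.linalg.inv(U).conj().T)   # average U with its inverse adjoint
            if np.linalg.norm(U_next - U) <= tol * np.linalg.norm(U_next):
                return U_next
            U = U_next
        return U

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])                         # arbitrary invertible example
    U = polar_unitary_newton(A)
    P = U.conj().T @ A                                 # once U is known, P = U* A

    assert np.allclose(U.conj().T @ U, np.eye(2))      # U is (numerically) unitary
    assert np.allclose(A, U @ P)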
This basic iteration may be refined to speed up the process, for example by rescaling the iterates so that their singular values are centered around 1 (acceleration by scaling).
See also
References
Notes and References
- Section 2.5
- Theorem 2.17
- Section 13.3
- Lemma 2.18
- Note how this implies, by the positivity of $A^* A$, that the eigenvalues are all real and strictly positive.