In mathematics, a complex square matrix is normal if it commutes with its conjugate transpose:
A \text{ normal} \iff A^*A = AA^*.
The concept of normal matrices can be extended to normal operators on infinite-dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis.
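As a quick numerical sketch of the defining equation (NumPy-based; the helper name `is_normal` is ours, not a standard API):

```python
import numpy as np

def is_normal(a, tol=1e-10):
    """Check whether a square complex matrix commutes with its conjugate transpose."""
    ah = a.conj().T
    return np.allclose(ah @ a, a @ ah, atol=tol)

# A rotation (orthogonal, hence unitary) is normal; a shear is not.
rot = np.array([[0.0, -1.0], [1.0, 0.0]])
tri = np.array([[1.0, 1.0], [0.0, 1.0]])
print(is_normal(rot), is_normal(tri))  # True False
```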
The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix satisfying this equation is diagonalizable. (The converse does not hold: diagonalizable matrices may have non-orthogonal eigenspaces.) Thus
A = UDU^*, \qquad A^* = UD^*U^*,
where U is a unitary matrix and D is a diagonal matrix whose entries are, in general, complex.
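A minimal numerical check of this decomposition, using a rotation matrix. Note that `np.linalg.eig` happens to return an orthonormal eigenbasis here because the eigenvalues are distinct; for repeated eigenvalues a Schur decomposition is the safer route:

```python
import numpy as np

# The 90-degree rotation is normal (it is orthogonal, hence unitary).
a = np.array([[0.0, -1.0], [1.0, 0.0]])
evals, u = np.linalg.eig(a)  # eigenvalues are +i and -i
d = np.diag(evals)

print(np.allclose(u.conj().T @ u, np.eye(2)))              # U unitary: True
print(np.allclose(u @ d @ u.conj().T, a))                  # A = U D U*: True
print(np.allclose(u @ d.conj() @ u.conj().T, a.conj().T))  # A* = U D* U*: True
```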
The left and right singular vectors in the singular value decomposition
A = UDV^*
of a normal matrix differ only by complex phase from each other and from the corresponding eigenvectors, since the phase of each eigenvalue must be factored out to obtain a nonnegative singular value.
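In particular, the singular values of a normal matrix are the moduli of its eigenvalues, which is easy to verify numerically:

```python
import numpy as np

# For a normal matrix the singular values equal the moduli of the eigenvalues.
a = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=complex)  # circulant, hence normal
svals = np.linalg.svd(a, compute_uv=False)
emods = np.abs(np.linalg.eigvals(a))
print(np.allclose(np.sort(svals), np.sort(emods)))  # True
```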
Among complex matrices, all unitary, Hermitian, and skew-Hermitian matrices are normal, with eigenvalues that are, respectively, of unit modulus, real, and purely imaginary. Likewise, among real matrices, all orthogonal, symmetric, and skew-symmetric matrices are normal, with eigenvalues that are, respectively, complex conjugate pairs on the unit circle, real, and purely imaginary. However, it is not the case that all normal matrices are either unitary or (skew-)Hermitian, as their eigenvalues can in general be any complex numbers. For example,
A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}
is normal but neither unitary, Hermitian, nor skew-Hermitian, because its eigenvalues are 2 and (1 \pm i\sqrt{3})/2.
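A short NumPy check of these claims about the example matrix:

```python
import numpy as np

# The example matrix: normal, yet neither unitary, Hermitian, nor skew-Hermitian.
a = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=complex)
ah = a.conj().T

print(np.allclose(ah @ a, a @ ah))     # normal: True
print(np.allclose(ah @ a, np.eye(3)))  # unitary: False
print(np.allclose(a, ah))              # Hermitian: False
print(np.allclose(a, -ah))             # skew-Hermitian: False
print(np.linalg.eigvals(a))            # 2 and (1 ± i√3)/2, in some order
```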
The concept of normality is important because normal matrices are precisely those to which the spectral theorem applies: a matrix A is normal if and only if there exist a diagonal matrix D and a unitary matrix U such that A = UDU^*. The diagonal entries of D are the eigenvalues of A, and the columns of U are the eigenvectors of A. The matching eigenvalues in D come in the same order as the eigenvectors are ordered as columns of U.
Another way of stating the spectral theorem is to say that normal matrices are precisely those matrices that can be represented by a diagonal matrix with respect to a properly chosen orthonormal basis of \mathbb{C}^n. Phrased differently: a matrix is normal if and only if its eigenspaces span \mathbb{C}^n and are pairwise orthogonal with respect to the standard inner product of \mathbb{C}^n.
The spectral theorem for normal matrices is a special case of the more general Schur decomposition, which holds for all square matrices. Let A be a square matrix. Then by Schur decomposition it is unitarily similar to an upper-triangular matrix, say B. If A is normal, so is B. But then B must be diagonal, since a normal upper-triangular matrix is diagonal (as can be seen by comparing the diagonal entries of B^*B and BB^* row by row).
The spectral theorem permits the classification of normal matrices in terms of their spectra. For example, a normal matrix is unitary if and only if its spectrum lies on the unit circle of the complex plane, and it is self-adjoint (Hermitian) if and only if its spectrum is real.
In general, the sum or product of two normal matrices need not be normal. However, the following holds: if A and B are normal with AB = BA, then both AB and A + B are also normal. Furthermore, there exists a unitary matrix U such that UAU^* and UBU^* are diagonal matrices.
In this special case, the columns of U^* are eigenvectors of both A and B and form an orthonormal basis in \mathbb{C}^n. This follows by combining the theorems that, over an algebraically closed field, commuting matrices are simultaneously triangularizable and a normal matrix is diagonalizable – the added result is that these can both be done simultaneously.
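A numerical illustration with two commuting normal matrices (a cyclic permutation and its square; the helper `is_normal` is our own name). Since the permutation has distinct eigenvalues, its eigenvector matrix also diagonalizes the square:

```python
import numpy as np

def is_normal(m, tol=1e-10):
    return np.allclose(m.conj().T @ m, m @ m.conj().T, atol=tol)

# Two commuting normal matrices: a cyclic permutation and its square.
c = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=complex)
a, b = c, c @ c
assert np.allclose(a @ b, b @ a)           # they commute
print(is_normal(a + b), is_normal(a @ b))  # True True

# The unitary that diagonalizes a also diagonalizes b.
evals, u = np.linalg.eig(a)                # distinct eigenvalues: cube roots of 1
m = u.conj().T @ b @ u
print(np.allclose(m, np.diag(np.diag(m)))) # True
```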
It is possible to give a fairly long list of equivalent definitions of a normal matrix. Let A be an n \times n complex matrix. Then, for example, A is normal if and only if
\left\|Ax\right\| = \left\|A^*x\right\|
for every vector x.
Some but not all of these characterizations generalize to normal operators on infinite-dimensional Hilbert spaces. For example, a bounded operator A satisfying A(A^*A) = (A^*A)A is only quasinormal.
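The norm characterization is easy to probe numerically. For real matrices A^* is just the transpose; a rotation passes the test while a shear fails it:

```python
import numpy as np

# ||Ax|| = ||A*x|| holds for normal A, fails for non-normal A.
normal = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation: normal
shear = np.array([[1.0, 1.0], [0.0, 1.0]])    # shear: not normal
x = np.array([1.0, 2.0])

print(np.isclose(np.linalg.norm(normal @ x), np.linalg.norm(normal.T @ x)))  # True
print(np.isclose(np.linalg.norm(shear @ x), np.linalg.norm(shear.T @ x)))    # False
```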
It is occasionally useful (but sometimes misleading) to think of the relationships of special kinds of normal matrices as analogous to the relationships of the corresponding types of complex numbers of which their eigenvalues are composed. This is because any function of a non-defective matrix acts directly on each of its eigenvalues, and the conjugate transpose of its spectral decomposition VDV^* is VD^*V^*, where D is the diagonal matrix of eigenvalues. For example, unitary matrices are analogous to complex numbers on the unit circle, Hermitian matrices to real numbers, skew-Hermitian matrices to purely imaginary numbers, and matrices that are both unitary and Hermitian (involutions) to \pm 1.
As a special case, the complex numbers may be embedded in the normal 2×2 real matrices by the mapping
a + bi \mapsto \begin{pmatrix} a & -b \\ b & a \end{pmatrix},
which preserves addition and multiplication. It is easy to check that this embedding respects all of the above analogies.
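A short check that this embedding is a ring homomorphism and that complex conjugation corresponds to the (conjugate) transpose (the function name `embed` is ours):

```python
import numpy as np

def embed(z):
    """Embed the complex number a+bi as the normal 2x2 real matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 1 + 2j, 3 - 1j
print(np.allclose(embed(z) + embed(w), embed(z + w)))  # addition preserved: True
print(np.allclose(embed(z) @ embed(w), embed(z * w)))  # multiplication preserved: True
print(np.allclose(embed(z).T, embed(z.conjugate())))   # conjugate <-> transpose: True
```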
A normal matrix A also has the property that A^* is a polynomial in A. Indeed, when A is normal, Lagrange interpolation yields a polynomial P such that
\overline{λ_j} = P(λ_j)
for every eigenvalue λ_j of A. Writing A = UDU^* with D diagonal, applying P acts on each eigenvalue, so P(A) = UP(D)U^* = UD^*U^* = A^*.
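A sketch of this construction for a normal matrix with distinct eigenvalues, evaluating the Lagrange interpolation polynomial directly on the matrix:

```python
import numpy as np

a = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=complex)  # normal, with distinct eigenvalues
evals = np.linalg.eigvals(a)
n = len(evals)

# Lagrange interpolation applied to the matrix itself:
# P(A) = sum_j conj(λ_j) * prod_{k != j} (A - λ_k I) / (λ_j - λ_k),
# so that P(λ_j) = conj(λ_j) on each eigenvalue.
p_of_a = np.zeros_like(a)
for j in range(n):
    term = np.eye(n, dtype=complex)
    for k in range(n):
        if k != j:
            term = term @ (a - evals[k] * np.eye(n)) / (evals[j] - evals[k])
    p_of_a += evals[j].conjugate() * term

print(np.allclose(p_of_a, a.conj().T))  # P(A) = A*: True
```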