In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is
\left[\begin{smallmatrix} 3&0\\ 0&2\end{smallmatrix}\right],
while an example of a 3×3 diagonal matrix is
\left[\begin{smallmatrix} 6&0&0\\ 0&5&0\\ 0&0&4 \end{smallmatrix}\right].
An identity matrix of any size, or any multiple of it (a scalar matrix), is also a diagonal matrix, for example
\left[\begin{smallmatrix} 0.5&0\\ 0&0.5\end{smallmatrix}\right]
As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with n columns and n rows is diagonal if
d_{i,j}=0 \text{ whenever } i\neq j, \quad \text{for all } i,j\in\{1,2,\ldots,n\}.
However, the main diagonal entries are unrestricted.
The term diagonal matrix may sometimes refer to a rectangular diagonal matrix, which is an m-by-n matrix with all the entries not of the form d_{i,i} being zero. For example:
\left[\begin{smallmatrix} 1&0&0\\ 0&4&0\\ 0&0&-3\\ 0&0&0 \end{smallmatrix}\right] \quad\text{or}\quad \left[\begin{smallmatrix} 1&0&0&0&0\\ 0&4&0&0&0\\ 0&0&-3&0&0 \end{smallmatrix}\right]
More often, however, diagonal matrix refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so it can also be called a symmetric diagonal matrix.
The following matrix is a square diagonal matrix:
\left[\begin{smallmatrix} 1&0&0\\ 0&4&0\\ 0&0&-2 \end{smallmatrix}\right]
If the entries are real numbers or complex numbers, then it is a normal matrix as well.
In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".
A diagonal matrix D can be constructed from a vector a=\begin{bmatrix}a_1&\dotsm&a_n\end{bmatrix}^{\mathsf{T}} using the \operatorname{diag} operator:
D=\operatorname{diag}(a_1,\dots,a_n)
This may be written more compactly as
D=\operatorname{diag}(a)
The same operator is also used to represent block diagonal matrices as
A=\operatorname{diag}(A_1,\dots,A_n)
where each argument A_i is a square matrix.
The \operatorname{diag} operator may also be written as
\operatorname{diag}(a)=\left(a\mathbf{1}^{\mathsf{T}}\right)\circ I
where \circ denotes the Hadamard (entrywise) product and \mathbf{1} is a constant vector with all elements equal to 1.
The inverse matrix-to-vector \operatorname{diag} operator is
\operatorname{diag}(D)=\begin{bmatrix}a_1&\dotsm&a_n\end{bmatrix}^{\mathsf{T}}
where the argument is now a matrix and the result is the vector of its diagonal entries.
The following property holds:
\operatorname{diag}(a)\operatorname{diag}(b)=\operatorname{diag}(a\circ b)
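As an illustration, here is a minimal numerical sketch of the two diag operators and of the property above, using NumPy (numpy.diag builds a diagonal matrix from a 1-D array and extracts the diagonal from a 2-D array); the specific arrays are arbitrary examples, not taken from the text.
import numpy as np

a = np.array([3.0, 2.0])          # vector a
b = np.array([0.5, 4.0])          # vector b

D = np.diag(a)                    # vector-to-matrix: D = diag(a)
print(D)                          # [[3. 0.], [0. 2.]]
print(np.diag(D))                 # matrix-to-vector: recovers a

# The property diag(a) diag(b) = diag(a ∘ b), with ∘ the Hadamard product:
print(np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b)))  # True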
A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple λ of the identity matrix I. Its effect on a vector is scalar multiplication by λ. For example, a 3×3 scalar matrix has the form:
\left[\begin{smallmatrix} \lambda&0&0\\ 0&\lambda&0\\ 0&0&\lambda \end{smallmatrix}\right]
The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix
D=\operatorname{diag}(a_1,\dots,a_n) has a_i \neq a_j, then given a matrix M with m_{ij} \neq 0, the (i,j) terms of the two products are
(DM)_{ij}=a_i m_{ij} \quad \text{and} \quad (MD)_{ij}=m_{ij}a_j,
and a_i m_{ij} \neq m_{ij} a_j (since one can divide by m_{ij}), so D and M do not commute unless the off-diagonal terms of M are zero.
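A quick numerical check of this argument, as a sketch in NumPy with arbitrarily chosen matrices (not from the text): a diagonal matrix with distinct entries fails to commute with a matrix that has a nonzero off-diagonal entry, but does commute with any other diagonal matrix.
import numpy as np

D = np.diag([1.0, 2.0, 3.0])      # distinct diagonal entries
M = np.array([[0.0, 5.0, 0.0],    # nonzero off-diagonal entry m_12
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
E = np.diag([7.0, 8.0, 9.0])      # another diagonal matrix

print(np.allclose(D @ M, M @ D))  # False: (DM)_12 = 1*5 but (MD)_12 = 5*2
print(np.allclose(D @ E, E @ D))  # True: diagonal matrices commute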
For an abstract vector space V (rather than the concrete vector space
Kn
), the analog of scalar matrices are scalar transformations. This holds more generally for a module M over a ring R, with the endomorphism algebra \operatorname{End}(M) (the algebra of linear operators on M) replacing the algebra of matrices: scalar multiplication induces a map
R\to\operatorname{End}(M),
exhibiting \operatorname{End}(M) as an R-algebra, and the scalar transformations are its center. The statement carries over to free modules
M\cong R^n,
for which the endomorphism algebra is isomorphic to a matrix algebra.
Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix
D=\operatorname{diag}(a_1,\dots,a_n) and a vector v=\begin{bmatrix}x_1&\dotsm&x_n\end{bmatrix}^{\mathsf{T}}, the product is
Dv=\operatorname{diag}(a_1,\dots,a_n)\begin{bmatrix}x_1\\ \vdots\\ x_n\end{bmatrix}=\begin{bmatrix}a_1x_1\\ \vdots\\ a_nx_n\end{bmatrix}
This can be expressed more compactly by using a vector instead of a diagonal matrix,
d=\begin{bmatrix}a_1&\dotsm&a_n\end{bmatrix}^{\mathsf{T}}, and taking the Hadamard (entrywise) product of the vectors, denoted d\circ v:
Dv=d\circ v=\begin{bmatrix}a_1x_1&\dotsm&a_nx_n\end{bmatrix}^{\mathsf{T}}
This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF,[2] since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.[3]
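As a sketch of this equivalence in NumPy (the vectors here are arbitrary examples): the elementwise product d * v gives the same result as building the full diagonal matrix and multiplying, while storing only n numbers instead of n².
import numpy as np

d = np.array([2.0, 3.0, 5.0])     # diagonal entries, stored as a vector
v = np.array([1.0, 4.0, 6.0])     # vector to be scaled

dense = np.diag(d) @ v            # materializes an n-by-n matrix of mostly zeros
hadamard = d * v                  # entrywise (Hadamard) product, O(n) storage

print(np.allclose(dense, hadamard))  # True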
The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write \operatorname{diag}(a_1,\dots,a_n) for a diagonal matrix whose diagonal entries starting in the upper left corner are a_1,\dots,a_n. Then, for addition, we have
\operatorname{diag}(a_1,\dots,a_n)+\operatorname{diag}(b_1,\dots,b_n)=\operatorname{diag}(a_1+b_1,\dots,a_n+b_n)
and for matrix multiplication,
\operatorname{diag}(a_1,\dots,a_n)\operatorname{diag}(b_1,\dots,b_n)=\operatorname{diag}(a_1b_1,\dots,a_nb_n).
The diagonal matrix \operatorname{diag}(a_1,\dots,a_n) is invertible if and only if the entries a_1,\dots,a_n are all nonzero. In this case, we have
\operatorname{diag}(a_1,\dots,a_n)^{-1}=\operatorname{diag}(a_1^{-1},\dots,a_n^{-1}).
In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices.
Multiplying an n-by-n matrix A from the left with \operatorname{diag}(a_1,\dots,a_n) amounts to multiplying the i-th row of A by a_i for all i; multiplying A from the right with \operatorname{diag}(a_1,\dots,a_n) amounts to multiplying the i-th column of A by a_i for all i.
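A minimal NumPy sketch of the last point, with an arbitrary 2×2 example matrix: left multiplication by a diagonal matrix scales rows, right multiplication scales columns.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
D = np.diag([10.0, 100.0])

print(D @ A)   # rows scaled: row 1 by 10, row 2 by 100
print(A @ D)   # columns scaled: column 1 by 10, column 2 by 100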
See main article: Eigenvalues and eigenvectors.
As explained in determining coefficients of operator matrix, there is a special basis, e_1, \dots, e_n, for which the matrix A takes the diagonal form. Hence, in the defining equation Ae_j=\sum_i a_{i,j}e_i, all coefficients a_{i,j} with i\neq j are zero, leaving only one term per sum. The surviving diagonal elements, a_{i,i}, are known as eigenvalues and designated \lambda_i in the equation, which reduces to
Ae_i=\lambda_ie_i
In other words, the eigenvalues of \operatorname{diag}(\lambda_1,\dots,\lambda_n) are \lambda_1,\dots,\lambda_n with associated eigenvectors e_1,\dots,e_n.
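As a small NumPy check (with an arbitrary example matrix, not taken from the text): the eigenvalues reported for a diagonal matrix are exactly its diagonal entries, and the standard basis vectors are eigenvectors.
import numpy as np

D = np.diag([2.0, 5.0, 7.0])
eigenvalues, eigenvectors = np.linalg.eig(D)

print(eigenvalues)    # [2. 5. 7.]: the diagonal entries
print(eigenvectors)   # identity matrix: columns are e_1, e_2, e_3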
Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix.
In fact, a given n-by-n matrix A is similar to a diagonal matrix (meaning that there is an invertible matrix P such that P^{-1}AP is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable.
Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if AA^*=A^*A, then there exists a unitary matrix U such that UAU^* is diagonal). Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U^*AV is diagonal with nonnegative entries.
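As a sketch of both decompositions in NumPy (the matrix A below is an arbitrary example): np.linalg.eig returns the eigenvector matrix P used to diagonalize a diagonalizable matrix, and np.linalg.svd returns unitary factors whose combination with A is diagonal with nonnegative entries.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # symmetric, hence normal and diagonalizable

# Diagonalization: P^{-1} A P is diagonal when the columns of P are eigenvectors.
eigenvalues, P = np.linalg.eig(A)
print(np.round(np.linalg.inv(P) @ A @ P, 10)) # diagonal matrix of eigenvalues

# Singular value decomposition: U^* A V is diagonal with nonnegative entries.
U, s, Vh = np.linalg.svd(A)                   # A = U @ diag(s) @ Vh
print(np.round(U.conj().T @ A @ Vh.conj().T, 10))  # equals diag(s)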
In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates—in the language of operators, an integral transform—which changes the basis to an eigenbasis of eigenfunctions: which makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation.
Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.