Square matrix explained
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order $n$. Any two square matrices of the same order can be added and multiplied.
Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if $R$ is a square matrix representing a rotation (rotation matrix) and $v$ is a column vector describing the position of a point in space, the product $Rv$ yields another column vector describing the position of that point after that rotation. If $v$ is a row vector, the same transformation can be obtained using $vR^{\mathsf{T}}$, where $R^{\mathsf{T}}$ is the transpose of $R$.
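As a concrete sketch of this action in Python with NumPy (the angle and the point are arbitrary illustrative choices):

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees (arbitrary example angle)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # 2x2 rotation matrix

v = np.array([[1.0], [0.0]])   # column vector: the point (1, 0)
print(R @ v)                   # rotated point, approximately (0, 1)

w = np.array([[1.0, 0.0]])     # the same point as a row vector
print(w @ R.T)                 # same rotation obtained via the transpose
```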
Main diagonal
See main article: Main diagonal. The entries $a_{ii}$ ($i = 1, \ldots, n$) form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix. For instance, the main diagonal of a 4×4 matrix contains the elements $a_{11}$, $a_{22}$, $a_{33}$, $a_{44}$.
The diagonal of a square matrix from the top right to the bottom left corner is called the antidiagonal or counterdiagonal.
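For instance, both diagonals can be read off in a NumPy sketch (the sample matrix is arbitrary):

```python
import numpy as np

A = np.arange(16).reshape(4, 4)  # an arbitrary 4x4 example matrix
print(np.diag(A))                # main diagonal: the entries A[i, i]
print(np.diag(np.fliplr(A)))     # antidiagonal: top right to bottom left
```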
Special kinds
| Name | Example with n = 3 |
|---|---|
| Diagonal matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}$ |
| Lower triangular matrix | $\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ |
| Upper triangular matrix | $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$ |
Diagonal or triangular matrix
If all entries outside the main diagonal are zero, $A$ is called a diagonal matrix. If all entries below (respectively above) the main diagonal are zero, $A$ is called an upper (respectively lower) triangular matrix.
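These forms can be extracted from any square matrix; a NumPy sketch (the sample matrix is arbitrary):

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)  # an arbitrary 3x3 example matrix
D = np.diag(np.diag(A))             # diagonal matrix: keep only A[i, i]
L = np.tril(A)                      # lower triangular: zeros above the diagonal
U = np.triu(A)                      # upper triangular: zeros below the diagonal
print(D, L, U, sep="\n")
```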
Identity matrix
The identity matrix $I_n$ of size $n$ is the $n \times n$ matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.
$$I_1 = \begin{bmatrix} 1 \end{bmatrix},\quad I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\quad \ldots,\quad I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$
It is a square matrix of order $n$, and also a special kind of diagonal matrix. The term identity matrix refers to the property of matrix multiplication that $I_m A = A I_n = A$ for any $m \times n$ matrix $A$.
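A quick numerical check of this defining property (NumPy sketch; the 2×3 sample matrix is arbitrary):

```python
import numpy as np

A = np.random.rand(2, 3)               # an arbitrary 2x3 matrix
print(np.allclose(np.eye(2) @ A, A))   # I_m A == A
print(np.allclose(A @ np.eye(3), A))   # A I_n == A
```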
Invertible matrix and its inverse
A square matrix $A$ is called invertible or non-singular if there exists a matrix $B$ such that
$$AB = BA = I_n.$$
If $B$ exists, it is unique and is called the inverse matrix of $A$, denoted $A^{-1}$.
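A NumPy sketch of this definition (the sample matrix is an arbitrary invertible choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])             # invertible: det(A) = 1, nonzero
B = np.linalg.inv(A)                   # the inverse matrix A^{-1}
print(np.allclose(A @ B, np.eye(2)))   # A B == I
print(np.allclose(B @ A, np.eye(2)))   # B A == I
```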
Symmetric or skew-symmetric matrix
A square matrix $A$ that is equal to its transpose, i.e., $A^{\mathsf{T}} = A$, is a symmetric matrix. If instead $A^{\mathsf{T}} = -A$, then $A$ is called a skew-symmetric matrix.
For a complex square matrix $A$, often the appropriate analogue of the transpose is the conjugate transpose $A^*$, defined as the transpose of the complex conjugate of $A$. A complex square matrix $A$ satisfying $A^* = A$ is called a Hermitian matrix. If instead $A^* = -A$, then $A$ is called a skew-Hermitian matrix.
By the spectral theorem, real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real.
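A NumPy sketch of the spectral theorem for a symmetric matrix (the sample matrix is arbitrary); `np.linalg.eigh` returns real eigenvalues and an orthonormal eigenbasis:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 3.0]])                  # symmetric: S == S.T
eigenvalues, Q = np.linalg.eigh(S)          # real eigenvalues, orthonormal columns
print(eigenvalues)                          # all eigenvalues are real
print(np.allclose(Q.T @ Q, np.eye(2)))      # eigenvectors form an orthonormal basis
print(np.allclose(Q @ np.diag(eigenvalues) @ Q.T, S))  # S = Q Lambda Q^T
```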
Definite matrix
| Positive definite | Indefinite |
|---|---|
| $\begin{bmatrix} 1/4 & 0 \\ 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}$ |
| $Q(x, y) = \tfrac{1}{4}x^2 + y^2$ | $Q(x, y) = \tfrac{1}{4}x^2 - \tfrac{1}{4}y^2$ |
| Points such that $Q(x, y) = 1$ (ellipse). | Points such that $Q(x, y) = 1$ (hyperbola). |
A symmetric $n \times n$-matrix $A$ is called positive-definite (respectively negative-definite; indefinite), if for all nonzero vectors $x \in \mathbb{R}^n$ the associated quadratic form given by
$$Q(x) = x^{\mathsf{T}} A x$$
takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. The table above shows two possibilities for 2×2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to $A$:
$$B_A(x, y) = x^{\mathsf{T}} A y.$$
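Both characterizations of positive-definiteness can be checked numerically; a NumPy sketch using the positive-definite example above (the test vector is arbitrary):

```python
import numpy as np

A = np.array([[0.25, 0.0],
              [0.0,  1.0]])                # the positive-definite example above
x = np.array([3.0, -2.0])                  # an arbitrary nonzero vector
print(x @ A @ x)                           # quadratic form Q(x) = x^T A x > 0
print(np.all(np.linalg.eigvalsh(A) > 0))   # equivalently: all eigenvalues > 0
```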
Orthogonal matrix
An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix $A$ is orthogonal if its transpose is equal to its inverse:
$$A^{\mathsf{T}} = A^{-1},$$
which entails
$$A^{\mathsf{T}} A = A A^{\mathsf{T}} = I,$$
where $I$ is the identity matrix.
The special orthogonal group $\operatorname{SO}(n)$ consists of the orthogonal matrices with determinant +1.
The complex analogue of an orthogonal matrix is a unitary matrix.
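A NumPy sketch checking these identities for a rotation matrix (the angle is an arbitrary choice):

```python
import numpy as np

theta = 0.7                                 # arbitrary angle; rotations are orthogonal
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(A.T @ A, np.eye(2)))      # A^T A == I
print(np.allclose(A.T, np.linalg.inv(A)))   # transpose equals inverse
print(np.linalg.det(A))                     # determinant +1: A lies in SO(2)
```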
Normal matrix
A real or complex square matrix $A$ is called normal if $A^* A = A A^*$. If a real square matrix is symmetric, skew-symmetric, or orthogonal, then it is normal. If a complex square matrix is Hermitian, skew-Hermitian, or unitary, then it is normal. Normal matrices are of interest mainly because they include the types of matrices just listed and form the broadest class of matrices for which the spectral theorem holds.[1]
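A NumPy sketch contrasting a normal matrix with a non-normal one (both sample matrices are arbitrary choices):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  1.0]])       # normal (a scaled rotation matrix)
print(np.allclose(A.conj().T @ A, A @ A.conj().T))  # A* A == A A*: True

B = np.array([[1.0, 2.0],
              [0.0, 1.0]])        # a shear matrix, not normal
print(np.allclose(B.conj().T @ B, B @ B.conj().T))  # False
```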
Operations
Trace
The trace, tr(A), of a square matrix $A$ is the sum of its diagonal entries. While matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors:
$$\operatorname{tr}(AB) = \operatorname{tr}(BA).$$
This is immediate from the definition of matrix multiplication:
$$\operatorname{tr}(AB) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij} b_{ji} = \operatorname{tr}(BA).$$
Also, the trace of a matrix is equal to that of its transpose, i.e., $\operatorname{tr}(A) = \operatorname{tr}(A^{\mathsf{T}})$.
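A NumPy sketch of these trace identities (random sample matrices):

```python
import numpy as np

A = np.random.rand(3, 3)               # arbitrary square matrices
B = np.random.rand(3, 3)
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # tr(AB) == tr(BA)
print(np.isclose(np.trace(A), np.trace(A.T)))        # tr(A) == tr(A^T)
```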
Determinant
See main article: Determinant.
The determinant $\det(A)$ or $|A|$ of a square matrix $A$ is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in $\mathbb{R}^2$) or volume (in $\mathbb{R}^3$) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.
The determinant of 2×2 matrices is given by
$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$$
The determinant of 3×3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalizes these two formulae to all dimensions.
The determinant of a product of square matrices equals the product of their determinants:
$$\det(AB) = \det(A) \cdot \det(B).$$
Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, i.e., determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1×1 matrix, which is its unique entry, or even the determinant of a 0×0 matrix, which is 1), that can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables.
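A NumPy sketch of the 2×2 formula and the product rule (the sample matrices are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # arbitrary 2x2 example
print(np.linalg.det(A))               # ad - bc = 1*4 - 2*3 = -2

B = np.random.rand(2, 2)
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # det(AB) == det(A) det(B)
```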
Eigenvalues and eigenvectors
See main article: Eigenvalues and eigenvectors. A number $\lambda$ and a non-zero vector $v$ satisfying
$$Av = \lambda v$$
are called an eigenvalue and an eigenvector of $A$, respectively.[2] The number $\lambda$ is an eigenvalue of an $n \times n$-matrix $A$ if and only if $A - \lambda I_n$ is not invertible, which is equivalent to
$$\det(A - \lambda I_n) = 0.$$
The polynomial $p_A$ in an indeterminate $X$ given by evaluation of the determinant $\det(X I_n - A)$ is called the characteristic polynomial of $A$. It is a monic polynomial of degree $n$. Therefore the polynomial equation $p_A(\lambda) = 0$ has at most $n$ different solutions, i.e., eigenvalues of the matrix. They may be complex even if the entries of $A$ are real. According to the Cayley–Hamilton theorem, $p_A(A) = 0$, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
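A NumPy sketch verifying the eigenvalue equation and the Cayley–Hamilton theorem for a 2×2 example (the matrix is arbitrary; for 2×2 matrices the characteristic polynomial is $X^2 - \operatorname{tr}(A)X + \det(A)$):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # arbitrary 2x2 example
eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]
v = eigenvectors[:, 0]
print(np.allclose(A @ v, lam * v))          # A v == lambda v

# Cayley-Hamilton for 2x2: p_A(X) = X^2 - tr(A) X + det(A) I
p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
print(np.allclose(p_of_A, np.zeros((2, 2))))  # substituting A yields the zero matrix
```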
Notes and References
- [1] Artin, Algebra, 2nd edition, Pearson, 2018, section 8.6.
- [2] Eigen means "own" in German and in Dutch.