In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero.
Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix L and an upper triangular matrix U if and only if all its leading principal minors are non-zero.
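As an illustration, the following is a minimal sketch of such a factorization computed by Doolittle's method without pivoting; the function lu_no_pivot and the example matrix are not from the original text, and the code assumes all leading principal minors are non-zero (otherwise a zero pivot occurs and row pivoting would be required).

import numpy as np

def lu_no_pivot(A):
    # Doolittle LU decomposition without pivoting (sketch).
    # Assumes all leading principal minors of A are non-zero.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        # Row i of U: remove contributions of already-computed rows.
        U[i, i:] = A[i, i:] - L[i, :i] @ U[:i, i:]
        # Column i of L below the diagonal.
        L[i+1:, i] = (A[i+1:, i] - L[i+1:, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))   # True: A = LU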
A matrix of the form
L=\begin{bmatrix} \ell_{1,1} & & & & 0 \\ \ell_{2,1} & \ell_{2,2} & & & \\ \ell_{3,1} & \ell_{3,2} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n} \end{bmatrix}
is called a lower triangular matrix or left triangular matrix, and analogously a matrix of the form
U=\begin{bmatrix} u_{1,1} & u_{1,2} & u_{1,3} & \ldots & u_{1,n} \\ & u_{2,2} & u_{2,3} & \ldots & u_{2,n} \\ & & \ddots & \ddots & \vdots \\ & & & \ddots & u_{n-1,n} \\ 0 & & & & u_{n,n} \end{bmatrix}
is called an upper triangular matrix or right triangular matrix. A lower or left triangular matrix is commonly denoted with the variable L, and an upper or right triangular matrix is commonly denoted with the variable U or R.
A matrix that is both upper and lower triangular is diagonal. Matrices that are similar to triangular matrices are called triangularisable.
A non-square (or sometimes any) matrix with zeros above (below) the diagonal is called a lower (upper) trapezoidal matrix. The non-zero entries form the shape of a trapezoid.
The matrix
\begin{bmatrix} 1&0&0\\ 2&96&0\\ 4&9&69\\ \end{bmatrix}
is lower triangular, and
\begin{bmatrix} 1&4&1\\ 0&6&9\\ 0&0&1\\ \end{bmatrix}
is upper triangular.
A matrix equation in the form Lx = b or Ux = b is very easy to solve by an iterative process called forward substitution for lower triangular matrices and analogously back substitution for upper triangular matrices. The process is so called because for lower triangular matrices, one first computes x_1, then substitutes that forward into the next equation to solve for x_2, and repeats through to x_n. For an upper triangular matrix, one works backwards, first computing x_n, then substituting that back into the previous equation to solve for x_{n-1}, and repeating through x_1. Notice that this does not require inverting the matrix.
The matrix equation Lx = b can be written as a system of linear equations
\begin{matrix} \ell_{1,1} x_1 & & & & & & & = & b_1 \\ \ell_{2,1} x_1 & + & \ell_{2,2} x_2 & & & & & = & b_2 \\ \vdots & & \vdots & & \ddots & & & & \vdots \\ \ell_{m,1} x_1 & + & \ell_{m,2} x_2 & + & \dotsb & + & \ell_{m,m} x_m & = & b_m \end{matrix}
Observe that the first equation (\ell_{1,1} x_1 = b_1) only involves x_1, so one can solve for x_1 directly. The second equation only involves x_1 and x_2, and thus can be solved once the already known value of x_1 is substituted in. Continuing in this way, the k-th equation only involves x_1, \ldots, x_k, and one can solve for x_k using the previously solved values x_1, \ldots, x_{k-1}. The resulting formulas are:
\begin{align} x_1 &= \frac{b_1}{\ell_{1,1}}, \\ x_2 &= \frac{b_2 - \ell_{2,1} x_1}{\ell_{2,2}}, \\ &\ \ \vdots \\ x_m &= \frac{b_m - \sum_{i=1}^{m-1} \ell_{m,i} x_i}{\ell_{m,m}}. \end{align}
A matrix equation with an upper triangular matrix U can be solved in an analogous way, only working backwards.
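The following is a minimal Python sketch of both procedures, directly implementing the formulas above; the function names forward_substitution and back_substitution are illustrative rather than from the original text, and the code assumes the diagonal entries are non-zero (i.e. the triangular matrix is invertible).

import numpy as np

def forward_substitution(L, b):
    # Solve Lx = b for lower triangular L with non-zero diagonal.
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # x_i = (b_i - sum_{j<i} L[i,j] x_j) / L[i,i]
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def back_substitution(U, b):
    # Solve Ux = b for upper triangular U with non-zero diagonal.
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # x_i = (b_i - sum_{j>i} U[i,j] x_j) / U[i,i]
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[1.0, 0.0, 0.0], [2.0, 96.0, 0.0], [4.0, 9.0, 69.0]])  # lower triangular example from above
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(L @ forward_substitution(L, b), b))   # True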
Forward substitution is used in financial bootstrapping to construct a yield curve.
The transpose of an upper triangular matrix is a lower triangular matrix and vice versa.
A matrix which is both symmetric and triangular is diagonal. In a similar vein, a matrix which is both normal (meaning A*A = AA*, where A* is the conjugate transpose) and triangular is also diagonal. This can be seen by looking at the diagonal entries of A*A and AA*.
The determinant and permanent of a triangular matrix equal the product of the diagonal entries, as can be checked by direct computation.
In fact more is true: the eigenvalues of a triangular matrix are exactly its diagonal entries, each occurring with its algebraic multiplicity, that is, its multiplicity as a root of the characteristic polynomial p_A(x) = \det(xI - A) of A. In other words, the characteristic polynomial of a triangular n \times n matrix A is exactly p_A(x) = (x - a_{11})(x - a_{22}) \cdots (x - a_{nn}), the unique degree n polynomial whose roots are the diagonal entries of A (with multiplicities). To see this, observe that xI - A is also triangular, and hence its determinant \det(xI - A) is the product of its diagonal entries (x - a_{11})(x - a_{22}) \cdots (x - a_{nn}).
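A quick numerical check of these facts with numpy (an illustrative sketch, not part of the original text), using the upper triangular example matrix from above:

import numpy as np

A = np.array([[1.0, 4.0, 1.0],
              [0.0, 6.0, 9.0],
              [0.0, 0.0, 1.0]])   # upper triangular example from above

# Determinant equals the product of the diagonal entries.
print(np.isclose(np.linalg.det(A), np.prod(np.diag(A))))   # True

# Eigenvalues are exactly the diagonal entries (with multiplicity).
print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.diag(A))))   # True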
If the entries on the main diagonal of an (upper or lower) triangular matrix are all 1, the matrix is called (upper or lower) unitriangular.
Other names used for these matrices are unit (upper or lower) triangular, or very rarely normed (upper or lower) triangular. However, a unit triangular matrix is not the same as the unit matrix, and a normed triangular matrix has nothing to do with the notion of matrix norm.
All finite unitriangular matrices are unipotent.
If all of the entries on the main diagonal of an (upper or lower) triangular matrix are 0, the matrix is called strictly (upper or lower) triangular.
All finite strictly triangular matrices are nilpotent of index at most n as a consequence of the Cayley-Hamilton theorem.
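For example (a small numpy sketch, not from the original text), a strictly upper triangular 3 x 3 matrix satisfies N^3 = 0:

import numpy as np

N = np.array([[0, 4, 1],
              [0, 0, 9],
              [0, 0, 0]])   # strictly upper triangular

print(np.linalg.matrix_power(N, 3))   # the zero matrix, so N is nilpotent of index at most 3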
See main article: Frobenius matrix. An atomic (upper or lower) triangular matrix is a special form of unitriangular matrix, where all of the off-diagonal elements are zero, except for the entries in a single column. Such a matrix is also called a Frobenius matrix, a Gauss matrix, or a Gauss transformation matrix.
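As an illustration (a sketch, not from the original text), the following builds such a Gauss transformation matrix for the first column of a matrix A and uses it to eliminate the entries below the first pivot, assuming that pivot is non-zero:

import numpy as np

A = np.array([[4.0, 3.0, 1.0],
              [6.0, 3.0, 2.0],
              [8.0, 5.0, 7.0]])

# Atomic lower unitriangular (Gauss) matrix: the identity except in column 0.
M = np.eye(3)
M[1:, 0] = -A[1:, 0] / A[0, 0]   # assumes A[0, 0] != 0

print(M @ A)   # first column becomes (4, 0, 0): entries below the pivot are eliminated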
See main article: Block matrix. A block triangular matrix is a block matrix (partitioned matrix) that is a triangular matrix.
A matrix A of the form
A=\begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1k} \\ 0 & A_{22} & \cdots & A_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{kk} \end{bmatrix},
where A_{ij} \in F^{n_i \times n_j} for all i, j = 1, \ldots, k, is called a block upper triangular matrix.
A matrix A of the form
A=\begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ A_{21} & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kk} \end{bmatrix},
where A_{ij} \in F^{n_i \times n_j} for all i, j = 1, \ldots, k, is called a block lower triangular matrix.
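A small sketch (not from the original text) of assembling a block upper triangular matrix with numpy, assuming two diagonal blocks of sizes 2 and 1:

import numpy as np

A11 = np.array([[1.0, 2.0], [0.0, 3.0]])
A12 = np.array([[4.0], [5.0]])
A22 = np.array([[6.0]])

# Block upper triangular matrix: the block below the diagonal is zero.
A = np.block([[A11, A12],
              [np.zeros((1, 2)), A22]])
print(A)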
A matrix that is similar to a triangular matrix is referred to as triangularizable. Abstractly, this is equivalent to stabilizing a flag: upper triangular matrices are precisely those that preserve the standard flag, which is given by the standard ordered basis
(e_1, \ldots, e_n) and the resulting flag
0 < \left\langle e_1 \right\rangle < \left\langle e_1, e_2 \right\rangle < \cdots < \left\langle e_1, \ldots, e_n \right\rangle = K^n.
All flags are conjugate (as the general linear group acts transitively on bases), so any matrix that stabilises a flag is similar to one that stabilises the standard flag.
Any complex square matrix is triangularizable.[1] In fact, a matrix A over a field containing all of the eigenvalues of A (for example, any matrix over an algebraically closed field) is similar to a triangular matrix. This can be proven by induction on the dimension: A has an eigenvector; passing to the quotient space by that eigenvector and inducting shows that A stabilizes a flag, and A is thus triangularizable with respect to a basis for that flag.
A more precise statement is given by the Jordan normal form theorem, which states that in this situation, A is similar to an upper triangular matrix of a very particular form. The simpler triangularization result is often sufficient, however, and it is in any case used in proving the Jordan normal form theorem.[1] [3]
In the case of complex matrices, it is possible to say more about triangularization, namely, that any square matrix A has a Schur decomposition. This means that A is unitarily equivalent (i.e. similar, using a unitary matrix as change of basis) to an upper triangular matrix; this follows by taking an Hermitian basis for the flag.
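As a numerical illustration (a sketch, not from the original text), scipy's Schur decomposition returns an upper triangular factor T and a unitary factor Z with A = Z T Z*:

import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

T, Z = schur(A, output='complex')   # complex Schur form: T upper triangular, Z unitary
print(np.allclose(np.tril(T, -1), 0))       # T is upper triangular
print(np.allclose(Z @ T @ Z.conj().T, A))   # A = Z T Z*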
A set of matrices A_1, \ldots, A_k is said to be simultaneously triangularisable if there is a basis under which they are all upper triangular; equivalently, if they are upper triangularisable by a single similarity matrix P. Such a set of matrices is more easily understood by considering the algebra of matrices it generates, namely all polynomials in the A_i, denoted K[A_1, \ldots, A_k]. Simultaneous triangularizability means that this algebra is conjugate into the Lie subalgebra of upper triangular matrices, and is equivalent to this algebra being a Lie subalgebra of a Borel subalgebra. The basic result is that (over an algebraically closed field) commuting matrices A, B, or more generally A_1, \ldots, A_k, are simultaneously triangularizable. This can be proven by first showing that commuting matrices have a common eigenvector, and then inducting on dimension as before.
The fact that commuting matrices have a common eigenvector can be interpreted as a result of Hilbert's Nullstellensatz: commuting matrices form a commutative algebra K[A_1, \ldots, A_k] over K[x_1, \ldots, x_k], which can be interpreted as a variety in k-dimensional affine space, and the existence of a (common) eigenvalue (and hence a common eigenvector) corresponds to this variety having a point (being non-empty), which is the content of the (weak) Nullstellensatz.
This is generalized by Lie's theorem, which shows that any representation of a solvable Lie algebra is simultaneously upper triangularizable, the case of commuting matrices being the abelian Lie algebra case, abelian being a fortiori solvable.
More generally and precisely, a set of matrices A_1, \ldots, A_k is simultaneously triangularisable if and only if the matrix p(A_1, \ldots, A_k)[A_i, A_j] is nilpotent for all polynomials p in k non-commuting variables, where [A_i, A_j] denotes the commutator; for commuting A_i the commutator vanishes, so this condition holds. One direction is clear: if the matrices are simultaneously triangularisable, then each [A_i, A_j] is strictly upper triangular in the triangularising basis (hence nilpotent), and this is preserved by multiplication by any A_k or any combination thereof, since the product still has zeros on the diagonal in that basis.
Upper triangularity is preserved by many operations: the sum of two upper triangular matrices is upper triangular; the product of two upper triangular matrices is upper triangular; the inverse of an upper triangular matrix, where it exists, is upper triangular; and the product of an upper triangular matrix and a scalar is upper triangular.
Together these facts mean that the upper triangular matrices form a subalgebra of the associative algebra of square matrices of a given size. Additionally, this also shows that the upper triangular matrices can be viewed as a Lie subalgebra of the Lie algebra of square matrices of a fixed size, where the Lie bracket [a, b] is given by the commutator ab - ba. The Lie algebra of all upper triangular matrices is a solvable Lie algebra. It is often referred to as a Borel subalgebra of the Lie algebra of all square matrices.
All these results hold if upper triangular is replaced by lower triangular throughout; in particular the lower triangular matrices also form a Lie algebra. However, operations mixing upper and lower triangular matrices do not in general produce triangular matrices. For instance, the sum of an upper and a lower triangular matrix can be any matrix; the product of a lower triangular with an upper triangular matrix is not necessarily triangular either.
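A quick numerical sanity check of these closure properties (a sketch, not from the original text):

import numpy as np

U1 = np.triu(np.arange(1.0, 10.0).reshape(3, 3))   # upper triangular, invertible
U2 = np.triu(np.ones((3, 3)))                      # upper triangular
L1 = np.tril(np.arange(1.0, 10.0).reshape(3, 3))   # lower triangular

def is_upper(M):
    # True if everything strictly below the diagonal is zero.
    return np.allclose(np.tril(M, -1), 0)

print(is_upper(U1 + U2))             # True: sums stay upper triangular
print(is_upper(U1 @ U2))             # True: products stay upper triangular
print(is_upper(np.linalg.inv(U1)))   # True: inverses (when they exist) stay upper triangular
print(is_upper(L1 @ U1))             # False here: mixing lower and upper need not give a triangular matrix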
The set of unitriangular matrices forms a Lie group.
The set of strictly upper (or lower) triangular matrices forms a nilpotent Lie algebra, denoted \mathfrak{n}. This algebra is the derived Lie algebra of \mathfrak{b}, the Lie algebra of all upper triangular matrices; in symbols, \mathfrak{n} = [\mathfrak{b}, \mathfrak{b}]. In addition, \mathfrak{n} is the Lie algebra of the Lie group of unitriangular matrices.
In fact, by Engel's theorem, any finite-dimensional nilpotent Lie algebra is conjugate to a subalgebra of the strictly upper triangular matrices, that is to say, a finite-dimensional nilpotent Lie algebra is simultaneously strictly upper triangularizable.
Algebras of upper triangular matrices have a natural generalization in functional analysis which yields nest algebras on Hilbert spaces.
See also: Affine group.
See main article: Borel subgroup and Borel subalgebra. The set of invertible triangular matrices of a given kind (upper or lower) forms a group, indeed a Lie group, which is a subgroup of the general linear group of all invertible matrices. A triangular matrix is invertible precisely when its diagonal entries are invertible (non-zero).
Over the real numbers, this group is disconnected, having 2^n components according to whether each diagonal entry is positive or negative. The identity component consists of the invertible triangular matrices with positive entries on the diagonal, and the whole group is a semidirect product of this component with the group of diagonal matrices with \pm 1 on the diagonal. The Lie algebra of the Lie group of invertible upper triangular matrices is the set of all upper triangular matrices, not necessarily invertible, and is a solvable Lie algebra. These are, respectively, the standard Borel subgroup B of the Lie group GL_n and the standard Borel subalgebra \mathfrak{b} of the Lie algebra gl_n.
The upper triangular matrices are precisely those that stabilize the standard flag. The invertible ones among them form a subgroup of the general linear group, whose conjugate subgroups are those defined as the stabilizer of some (other) complete flag. These subgroups are Borel subgroups. The group of invertible lower triangular matrices is such a subgroup, since it is the stabilizer of the standard flag associated to the standard basis in reverse order.
The stabilizer of a partial flag obtained by forgetting some parts of the standard flag can be described as a set of block upper triangular matrices (but its elements are not all triangular matrices). The conjugates of such a group are the subgroups defined as the stabilizer of some partial flag. These subgroups are called parabolic subgroups.
The group of 2×2 upper unitriangular matrices is isomorphic to the additive group of the field of scalars; in the case of complex numbers it corresponds to a group formed of parabolic Möbius transformations; the 3×3 upper unitriangular matrices form the Heisenberg group.
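A tiny numerical illustration of the 2×2 case (a sketch, not from the original text): multiplying two upper unitriangular matrices simply adds their top-right entries, which is exactly the isomorphism with the additive group of the scalars.

import numpy as np

def unitri(a):
    # 2x2 upper unitriangular matrix with top-right entry a.
    return np.array([[1.0, a], [0.0, 1.0]])

a, b = 2.0, 5.0
print(unitri(a) @ unitri(b))                                # same as unitri(a + b)
print(np.allclose(unitri(a) @ unitri(b), unitri(a + b)))    # True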