In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices. The requirement that the square matrix be smaller than the original matrix is often omitted in the definition.
If A is a square matrix, then the minor of the entry in the ith row and jth column (also called the (i, j) minor, or a first minor[1]) is the determinant of the submatrix formed by deleting the ith row and jth column. This number is often denoted $M_{i,j}$. The (i, j) cofactor is obtained by multiplying the minor by $(-1)^{i+j}$.
To illustrate these definitions, consider the following 3 by 3 matrix,
\begin{bmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{bmatrix}
To compute the minor M2,3 and the cofactor C2,3, we find the determinant of the above matrix with row 2 and column 3 removed.
M_{2,3} = \det\begin{bmatrix} 1 & 4 & \Box \\ \Box & \Box & \Box \\ -1 & 9 & \Box \end{bmatrix} = \det\begin{bmatrix} 1 & 4 \\ -1 & 9 \end{bmatrix} = 9 - (-4) = 13
So the cofactor of the (2,3) entry is
C_{2,3} = (-1)^{2+3}(M_{2,3}) = -13.
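As a quick numerical check of this example, here is a minimal sketch assuming NumPy; the helper names `minor` and `cofactor` are illustrative rather than any library API, and indices in code are 0-based:

```python
import numpy as np

def minor(A, i, j):
    """Determinant of A with row i and column j removed (0-indexed)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    """(i, j) cofactor: the minor scaled by (-1)**(i + j)."""
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1, 4, 7],
              [3, 0, 5],
              [-1, 9, 11]])

print(minor(A, 1, 2))     # M_{2,3} in the 1-based notation above: 13.0
print(cofactor(A, 1, 2))  # C_{2,3}: -13.0
```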
Let A be an m × n matrix and k an integer with 0 < k ≤ m and k ≤ n. A k × k minor of A, also called a minor determinant of order k of A or, if m = n, the (n−k)th minor determinant of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a k × k matrix obtained from A by deleting m−k rows and n−k columns. Sometimes the term is used to refer to the k × k matrix obtained from A as above (by deleting m−k rows and n−k columns), but this matrix should be referred to as a (square) submatrix of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of $\binom{m}{k} \cdot \binom{n}{k}$ minors of size k × k. The minor of order zero is often defined to be 1. For a square matrix, the zeroth minor is just the determinant of the matrix.[2]
Let
$1 \le i_1 < i_2 < \cdots < i_k \le m$ and $1 \le j_1 < j_2 < \cdots < j_k \le n$
be ordered sequences of row and column indices, and write $I = (i_1, \ldots, i_k)$ and $J = (j_1, \ldots, j_k)$. The minor of A corresponding to these choices of indices is denoted, depending on the source, by $\det_{I,J} A$, $\det A_{I,J}$, $[A]_{I,J}$, $M_{I,J}$, $M_{i_1, i_2, \ldots, i_k,\, j_1, j_2, \ldots, j_k}$, or $M_{(i),(j)}$, where $(i)$ denotes the sequence of row indices and $(j)$ the sequence of column indices.
The complement, $B_{ijk\ldots,pqr\ldots}$, of a minor, $M_{ijk\ldots,pqr\ldots}$, of a square matrix, A, is formed by the determinant of the matrix A from which all the rows ($ijk\ldots$) and columns ($pqr\ldots$) associated with $M_{ijk\ldots,pqr\ldots}$ have been removed. The complement of the first minor of an element $a_{ij}$ is merely that element.[4]
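Under the same 0-based NumPy conventions as above (the helper name `minor_complement` is hypothetical), the complement can be computed by deleting the rows and columns that form the minor:

```python
import numpy as np

def minor_complement(A, rows, cols):
    """Determinant of A with the given rows and columns removed."""
    sub = np.delete(np.delete(A, rows, axis=0), cols, axis=1)
    return np.linalg.det(sub)

A = np.array([[1, 4, 7],
              [3, 0, 5],
              [-1, 9, 11]])

# The first minor of a_{11} is formed from rows {2, 3} and columns {2, 3}
# (1-based); its complement removes those, leaving only a_{11} = 1:
print(minor_complement(A, [1, 2], [1, 2]))  # 1.0
```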
See main article: Laplace expansion.
The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an n × n matrix $A = (a_{ij})$, the determinant of A can be written as the sum of the cofactors of any single row or column, each multiplied by the entry that generated it. In other words, defining $C_{ij} = (-1)^{i+j} M_{ij}$, the cofactor expansion along the jth column gives:
\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} = \sum_{i=1}^{n} a_{ij}C_{ij} = \sum_{i=1}^{n} a_{ij}(-1)^{i+j}M_{ij}
The cofactor expansion along the ith row gives:
\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij}C_{ij} = \sum_{j=1}^{n} a_{ij}(-1)^{i+j}M_{ij}
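To make the recursion concrete, here is a short sketch of cofactor expansion along the first row in plain Python; it runs in exponential time and is meant only to illustrate the formula, not to compete with standard determinant algorithms:

```python
def det(A):
    """Determinant by Laplace expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Submatrix with row 0 and column j removed.
        sub = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)
    return total

print(det([[1, 4, 7],
           [3, 0, 5],
           [-1, 9, 11]]))  # -8
```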
See main article: Invertible matrix.
One can write down the inverse of an invertible matrix by computing its cofactors, using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, the comatrix):
C = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}
Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:
A^{-1} = \frac{1}{\det(A)} C^{\mathsf{T}}.
The transpose of the cofactor matrix is called the adjugate matrix (also called the classical adjoint) of A.
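A minimal sketch of this route to the inverse, assuming NumPy: build the cofactor matrix entry by entry, then transpose and divide by the determinant (the function name is illustrative):

```python
import numpy as np

def inverse_via_adjugate(A):
    """Inverse of A as adj(A) / det(A), with adj(A) = C^T."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C.T / np.linalg.det(A)

A = np.array([[1., 4., 7.],
              [3., 0., 5.],
              [-1., 9., 11.]])
print(np.allclose(inverse_via_adjugate(A), np.linalg.inv(A)))  # True
```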
The above formula can be generalized as follows: Let
$1 \le i_1 < i_2 < \ldots < i_k \le n$ and $1 \le j_1 < j_2 < \ldots < j_k \le n$
be ordered sequences of indices. Then
[A^{-1}]_{I,J} = \pm\frac{[A]_{J',I'}}{\det A},
where I′, J′ denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to I, J, so that every index 1, ..., n appears exactly once in either I or I′, but not in both (similarly for the J and J′) and
$[A]_{I,J}$ denotes the k × k minor of A corresponding to the row indices in I and the column indices in J:
[A]_{I,J} = \det\left( (A_{i_p, j_q})_{p,q = 1, \ldots, k} \right).
A simple proof can be given using the wedge product. Indeed,
[A^{-1}]_{I,J}\,(e_1 \wedge \ldots \wedge e_n) = \pm (A^{-1} e_{j_1}) \wedge \ldots \wedge (A^{-1} e_{j_k}) \wedge e_{i'_1} \wedge \ldots \wedge e_{i'_{n-k}},
where $e_1, \ldots, e_n$ are the basis vectors. Acting by A on both sides, one gets
[A^{-1}]_{I,J} \det A\,(e_1 \wedge \ldots \wedge e_n) = \pm\, e_{j_1} \wedge \ldots \wedge e_{j_k} \wedge (A e_{i'_1}) \wedge \ldots \wedge (A e_{i'_{n-k}}) = \pm [A]_{J',I'}\,(e_1 \wedge \ldots \wedge e_n).
The sign can be worked out to be
(-1)^{\sum_{s=1}^{k} i_s + \sum_{s=1}^{k} j_s},
so the sign is determined by the sums of the elements of I and J.
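The identity can be spot-checked numerically; in the sketch below (assuming NumPy, with 0-based index sets, which shifts the sign exponent by the even amount 2k and so leaves its parity unchanged), the helper `minor_of` is illustrative:

```python
import numpy as np

def minor_of(M, rows, cols):
    """Minor of M on the given row and column index sets."""
    return np.linalg.det(M[np.ix_(rows, cols)])

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
I, J = [0, 2], [1, 4]                      # k = 2, 0-indexed
Ic = [i for i in range(5) if i not in I]   # complementary rows I'
Jc = [j for j in range(5) if j not in J]   # complementary columns J'

lhs = minor_of(np.linalg.inv(A), I, J)
rhs = (-1) ** (sum(I) + sum(J)) * minor_of(A, Jc, Ic) / np.linalg.det(A)
print(np.allclose(lhs, rhs))  # True
```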
Given an m × n matrix with real entries (or entries from any other field) and rank r, there exists at least one non-zero r × r minor, while all larger minors are zero.
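This characterization of rank can be checked by brute force; the sketch below, assuming NumPy, searches for the largest non-vanishing minor and is exponential-time, for illustration only:

```python
import numpy as np
from itertools import combinations

def rank_via_minors(A, tol=1e-10):
    """Rank as the size of the largest non-zero minor."""
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[1., 2., 3.],
              [2., 4., 6.],   # twice the first row
              [0., 1., 1.]])
print(rank_via_minors(A), np.linalg.matrix_rank(A))  # 2 2
```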
We will use the following notation for minors: if A is an m × n matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., n} with k elements, then we write $[A]_{I,J}$ for the minor of A that corresponds to the rows with index in I and the columns with index in J.
Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an m × n matrix, B is an n × p matrix, I is a subset of {1, ..., m} with k elements, and J is a subset of {1, ..., p} with k elements. Then
[AB]_{I,J} = \sum_{K} [A]_{I,K} [B]_{K,J},
where the sum extends over all subsets K of {1, ..., n} with k elements.
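A numerical spot-check of this identity, assuming NumPy, with 0-based index sets and the same hypothetical helper `minor_of` as before:

```python
import numpy as np
from itertools import combinations

def minor_of(M, rows, cols):
    return np.linalg.det(M[np.ix_(rows, cols)])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # m x n
B = rng.standard_normal((4, 2))   # n x p
I, J, k = [0, 2], [0, 1], 2

lhs = minor_of(A @ B, I, J)
rhs = sum(minor_of(A, I, K) * minor_of(B, K, J)
          for K in combinations(range(4), k))  # K over k-subsets of columns of A
print(np.allclose(lhs, rhs))  # True
```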
A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the k-minors of a matrix are the entries in the kth exterior power map.
If the columns of a matrix are wedged together k at a time, the k × k minors appear as the components of the resulting k-vectors. For example, the 2 × 2 minors of the matrix
\begin{pmatrix} 1 & 4 \\ 3 & -1 \\ 2 & 1 \end{pmatrix}
are −13 (from the first two rows), −7 (from the first and last rows), and 5 (from the last two rows). Consider the wedge product
(e_1 + 3e_2 + 2e_3) \wedge (4e_1 - e_2 + e_3),
where the two expressions correspond to the two columns of the matrix. Using the properties of the wedge product, namely that it is bilinear, that $e_i \wedge e_i = 0$, and that $e_i \wedge e_j = -e_j \wedge e_i$, this expression simplifies to
-13\, e_1 \wedge e_2 - 7\, e_1 \wedge e_3 + 5\, e_2 \wedge e_3,
where the coefficients agree with the minors computed earlier.
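The agreement between the 2 × 2 minors and the wedge coefficients can be confirmed numerically; a small sketch assuming NumPy:

```python
import numpy as np
from itertools import combinations

M = np.array([[1, 4],
              [3, -1],
              [2, 1]])

# Each pair of rows (i, j) with i < j gives the coefficient of e_i ^ e_j.
for i, j in combinations(range(3), 2):
    print((i + 1, j + 1), np.linalg.det(M[[i, j], :]))
# (1, 2) -13.0   -> coefficient of e1 ^ e2
# (1, 3) -7.0    -> coefficient of e1 ^ e3
# (2, 3) 5.0     -> coefficient of e2 ^ e3
```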
In some books, instead of "cofactor" the term adjunct is used.[7] Moreover, it is denoted as $A_{ij}$ and defined in the same way as the cofactor:
A_{ij} = (-1)^{i+j} M_{ij}
Using this notation the inverse matrix is written this way:
M^{-1} = \frac{1}{\det(M)} \begin{bmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{bmatrix}
Keep in mind that adjunct is not adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.