In linear algebra, the adjugate of a square matrix A is the transpose of its cofactor matrix and is denoted by \operatorname{adj}(A).[1] [2] It is also occasionally known as the adjunct matrix,[3] [4] or "adjoint",[5] though the latter term today normally refers to a different concept, the adjoint operator, which for a matrix is the conjugate transpose.
The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:
A\operatorname{adj}(A)=\det(A)I,
where I is the identity matrix of the same size as A.
The adjugate of A is the transpose of the cofactor matrix C of A:
\operatorname{adj}(A)=C^{\mathsf T}.
In more detail, suppose R is a unital commutative ring and A is an n × n matrix with entries from R. The (i, j)-minor of A, denoted M_{ij}, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The cofactor matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A, which is the (i, j)-minor times a sign factor:
C=\left((-1)^{i+j}M_{ij}\right)_{1\le i,j\le n}.
The adjugate of A is the transpose of C, that is,
\operatorname{adj}(A)=C^{\mathsf T}=\left((-1)^{i+j}M_{ji}\right)_{1\le i,j\le n}.
The adjugate is defined so that the product of A with its adjugate yields a diagonal matrix whose diagonal entries are the determinant \det(A). That is,
A\operatorname{adj}(A)=\operatorname{adj}(A)A=\det(A)I,
where I is the n × n identity matrix.
The above formula implies one of the fundamental results in matrix algebra, that A is invertible if and only if \det(A) is an invertible element of R. When this holds, the equation above yields
\begin{align} \operatorname{adj}(A)&=\det(A)A^{-1},\\ A^{-1}&=\det(A)^{-1}\operatorname{adj}(A). \end{align}
Since the determinant of a 0 × 0 matrix is 1, the adjugate of any 1 × 1 matrix (complex scalar) is
I=\begin{bmatrix}1\end{bmatrix}.
Observe that
A\operatorname{adj}(A)=AI=(\det A)I.
The adjugate of the 2 × 2 matrix
A=\begin{bmatrix}a&b\\ c&d\end{bmatrix}
is
\operatorname{adj}(A)=\begin{bmatrix}d&-b\\ -c&a\end{bmatrix}.
Direct computation shows that
A\operatorname{adj}(A)=\begin{bmatrix}ad-bc&0\\ 0&ad-bc\end{bmatrix}=(\det A)I.
Consider a 3 × 3 matrix
A=\begin{bmatrix} a_1&a_2&a_3\\ b_1&b_2&b_3\\ c_1&c_2&c_3\end{bmatrix}.
Its cofactor matrix is
C=\begin{bmatrix} +\begin{vmatrix}b_2&b_3\\ c_2&c_3\end{vmatrix}& -\begin{vmatrix}b_1&b_3\\ c_1&c_3\end{vmatrix}& +\begin{vmatrix}b_1&b_2\\ c_1&c_2\end{vmatrix}\\ \\ -\begin{vmatrix}a_2&a_3\\ c_2&c_3\end{vmatrix}& +\begin{vmatrix}a_1&a_3\\ c_1&c_3\end{vmatrix}& -\begin{vmatrix}a_1&a_2\\ c_1&c_2\end{vmatrix}\\ \\ +\begin{vmatrix}a_2&a_3\\ b_2&b_3\end{vmatrix}& -\begin{vmatrix}a_1&a_3\\ b_1&b_3\end{vmatrix}& +\begin{vmatrix}a_1&a_2\\ b_1&b_2\end{vmatrix} \end{bmatrix},
where
\begin{vmatrix}a&b\\ c&d\end{vmatrix} =\det\begin{bmatrix}a&b\\ c&d\end{bmatrix}.
Its adjugate is the transpose of its cofactor matrix,
\operatorname{adj}(A)=C^{\mathsf T}=\begin{bmatrix} +\begin{vmatrix}b_2&b_3\\ c_2&c_3\end{vmatrix}& -\begin{vmatrix}a_2&a_3\\ c_2&c_3\end{vmatrix}& +\begin{vmatrix}a_2&a_3\\ b_2&b_3\end{vmatrix}\\ &&\\ -\begin{vmatrix}b_1&b_3\\ c_1&c_3\end{vmatrix}& +\begin{vmatrix}a_1&a_3\\ c_1&c_3\end{vmatrix}& -\begin{vmatrix}a_1&a_3\\ b_1&b_3\end{vmatrix}\\ &&\\ +\begin{vmatrix}b_1&b_2\\ c_1&c_2\end{vmatrix}& -\begin{vmatrix}a_1&a_2\\ c_1&c_2\end{vmatrix}& +\begin{vmatrix}a_1&a_2\\ b_1&b_2\end{vmatrix} \end{bmatrix}.
As a specific example, we have
\operatorname{adj}\begin{bmatrix} -3&2&-5\\ -1&0&-2\\ 3&-4&1 \end{bmatrix}=\begin{bmatrix} -8&18&-4\\ -5&12&-1\\ 4&-6&2 \end{bmatrix}.
The entry −1 in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of A. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix A,
\begin{bmatrix}-3&-5\\ -1&-2\end{bmatrix}.
The (3,2) cofactor is a sign times the determinant of this submatrix:
(-1)^{3+2}\det\begin{bmatrix}-3&-5\\-1&-2\end{bmatrix}=-((-3)\cdot(-2)-(-5)\cdot(-1))=-1,
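The cofactor definition translates directly into code. The following sketch (not part of the original derivation) assumes NumPy is available and computes the adjugate entry by entry:

```python
import numpy as np

def adjugate(A):
    """Adjugate via the definition: adj(A)[i, j] = (-1)**(i + j) * M_ji,
    where M_ji is the minor obtained by deleting row j and column i of A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Delete row j and column i, then take the determinant.
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[-3, 2, -5], [-1, 0, -2], [3, -4, 1]])
print(adjugate(A).round().astype(int))
```

Running this on the matrix above reproduces the displayed adjugate, and A @ adjugate(A) equals det(A) times the identity.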
For any n × n matrix A, elementary computations show that the adjugate has the following properties:
* \operatorname{adj}(I)=I, where I is the identity matrix.
* \operatorname{adj}(0)=0, except that when n=1, \operatorname{adj}(0)=I.
* \operatorname{adj}(cA)=c^{n-1}\operatorname{adj}(A) for any scalar c.
* \operatorname{adj}(A^{\mathsf T})=\operatorname{adj}(A)^{\mathsf T}.
* \det(\operatorname{adj}(A))=(\det A)^{n-1}.
* If A is invertible, then \operatorname{adj}(A)=(\det A)A^{-1}.
Over the complex numbers:
* \operatorname{adj}(\overline{A})=\overline{\operatorname{adj}(A)}, where the bar denotes complex conjugation.
* \operatorname{adj}(A^{*})=\operatorname{adj}(A)^{*}, where * denotes the conjugate transpose.
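These identities are easy to sanity-check numerically. A minimal sketch, assuming NumPy and using the relation adj(A) = (det A)A^{-1} for an invertible test matrix of my own choosing:

```python
import numpy as np

def adj(M):
    # For invertible M, the adjugate equals det(M) * M^{-1}.
    return np.linalg.det(M) * np.linalg.inv(M)

n = 3
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
c = 2.5

assert np.allclose(adj(c * A), c ** (n - 1) * adj(A))   # adj(cA) = c^{n-1} adj(A)
assert np.allclose(adj(A.T), adj(A).T)                  # adj(A^T) = adj(A)^T
assert np.allclose(np.linalg.det(adj(A)), np.linalg.det(A) ** (n - 1))
assert np.allclose(A @ adj(A), np.linalg.det(A) * np.eye(n))
```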
Suppose that B is another n × n matrix. Then
\operatorname{adj}(AB)=\operatorname{adj}(B)\operatorname{adj}(A).
When A and B are invertible, this follows from the formula for the adjugate of an invertible matrix:
\operatorname{adj}(B)\operatorname{adj}(A)=(\det B)B^{-1}(\det A)A^{-1}=(\det AB)(AB)^{-1}=\operatorname{adj}(AB).
A corollary of the previous formula is that, for any non-negative integer k,
\operatorname{adj}(A^{k})=\operatorname{adj}(A)^{k}.
From the identity
(A+B)\operatorname{adj}(A+B)B=\det(A+B)B=B\operatorname{adj}(A+B)(A+B),
we deduce
A\operatorname{adj}(A+B)B=B\operatorname{adj}(A+B)A.
Suppose that B commutes with A. Multiplying the identity AB=BA on the left and right by \operatorname{adj}(A) proves that
\det(A)\operatorname{adj}(A)B=\det(A)B\operatorname{adj}(A).
Finally, there is a more general proof than the second proof, which only requires that an n × n matrix has entries over a field with at least 2n + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). \det(A+tI) is a polynomial in t with degree at most n, so it has at most n roots. Note that the ij-th entry of \operatorname{adj}((A+tI)B) is a polynomial of at most order n, and likewise for \operatorname{adj}(A+tI)\operatorname{adj}(B). These two polynomials at the ij-th entry agree on at least n + 1 points, as we have at least n + 1 elements of the field where A+tI is invertible, and we have proven the identity for invertible matrices. Polynomials of degree n which agree on n + 1 points must be identical (subtract them from each other and you have n + 1 roots for a polynomial of degree at most n, a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of t. Thus, they take the same value when t = 0.
Using the above properties and other elementary computations, it is straightforward to show that if A is triangular, diagonal, symmetric, Hermitian, orthogonal, unitary, or normal, then \operatorname{adj}(A) has the same property.
If A is skew-symmetric, then \operatorname{adj}(A) is skew-symmetric for even n and symmetric for odd n. Similarly, if A is skew-Hermitian, then \operatorname{adj}(A) is skew-Hermitian for even n and Hermitian for odd n.
If A is invertible, then, as noted above, there is a formula for \operatorname{adj}(A) in terms of the determinant and inverse of A. When A is not invertible, the adjugate satisfies different but closely related formulas.
See also: Cramer's rule.
Partition A into column vectors:
A=\begin{bmatrix}a_1&\cdots&a_n\end{bmatrix}.
Let b be a column vector of size n, and fix 1 ≤ i ≤ n. Denote by A\stackrel{i}{\leftarrow}b the matrix formed by replacing column i of A by b:
(A\stackrel{i}{\leftarrow}b)\stackrel{\text{def}}{=}\begin{bmatrix}a_1&\cdots&a_{i-1}&b&a_{i+1}&\cdots&a_n\end{bmatrix}.
Laplace expanding the determinant of this matrix along column i, and collecting the results for i = 1, \ldots, n into a column vector, gives
\left(\det(A\stackrel{i}{\leftarrow}b)\right)_{i=1}^{n}=\operatorname{adj}(A)\,b.
This formula has the following concrete consequence. Consider the linear system of equations
Ax=b.
Assume that A is nonsingular. Multiplying this system on the left by \operatorname{adj}(A) and dividing by the determinant gives
x=\frac{\operatorname{adj}(A)\,b}{\det A}.
Applying the previous formula to this situation yields Cramer's rule,
x_i=\frac{\det(A\stackrel{i}{\leftarrow}b)}{\det A},
where x_i is the ith entry of x.
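In code, Cramer's rule amounts to replacing one column of A at a time. A short sketch, assuming NumPy; the 3 × 3 system below is an illustrative example of my own:

```python
import numpy as np

# Solve Ax = b by Cramer's rule: x_i = det(A with column i replaced by b) / det(A).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
b = np.array([3.0, 5.0, 2.0])

detA = np.linalg.det(A)       # must be nonzero for the rule to apply
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b              # replace column i with b
    x[i] = np.linalg.det(Ai) / detA

assert np.allclose(A @ x, b)  # x solves the system
```

For large systems this is far slower than Gaussian elimination; it is useful mainly as a closed-form expression.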
Let the characteristic polynomial of A be
p(s)=\det(sI-A)=\sum_{i=0}^{n}p_i s^{i}\in R[s].
Define the first divided difference of p,
\Delta p(s,t)=\frac{p(s)-p(t)}{s-t}=\sum_{j+k<n}p_{j+k+1}s^{j}t^{k}\in R[s,t].
Then
\operatorname{adj}(sI-A)=\Delta p(sI,A).
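This identity can be verified numerically. The sketch below assumes NumPy, obtains the coefficients of p from `numpy.poly` (which returns the characteristic polynomial of a square matrix, highest degree first), and evaluates \Delta p(sI, A) at an arbitrary scalar s; the matrix and the value of s are illustrative choices of my own:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
n = A.shape[0]

# Coefficients of p(s) = det(sI - A); reversed so p[i] multiplies s**i.
p = np.poly(A)[::-1]

s = 2.7
M = s * np.eye(n) - A
adj_M = np.linalg.det(M) * np.linalg.inv(M)   # adj = det * inverse (M invertible here)

# Delta p(sI, A) = sum over j, k with j + k < n of p[j+k+1] * s**j * A**k.
D = sum(p[j + k + 1] * s**j * np.linalg.matrix_power(A, k)
        for j in range(n) for k in range(n) if j + k < n)

assert np.allclose(adj_M, D)
```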
In particular, the resolvent of A is defined to be
R(z;A)=(zI-A)^{-1},
and by the above formula,
R(z;A)=\frac{\Delta p(zI,A)}{p(z)}.
See main article: Jacobi's formula. The adjugate also appears in Jacobi's formula for the derivative of the determinant. If A(t) is continuously differentiable, then
\frac{d(\det A)}{dt}(t)=\operatorname{tr}\left(\operatorname{adj}(A(t))A'(t)\right).
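Jacobi's formula is easy to check against a finite difference. A minimal sketch, assuming NumPy, along the path A(t) = A_0 + tB (both matrices are illustrative):

```python
import numpy as np

A0 = np.array([[4.0, 1.0],
               [2.0, 3.0]])
B = np.array([[0.5, -1.0],
              [1.5, 0.25]])   # A'(t) along the path A(t) = A0 + t*B

def adj(M):
    return np.linalg.det(M) * np.linalg.inv(M)   # valid when M is invertible

def det_at(t):
    return np.linalg.det(A0 + t * B)

t, h = 0.3, 1e-6
numeric = (det_at(t + h) - det_at(t - h)) / (2 * h)   # central difference
jacobi = np.trace(adj(A0 + t * B) @ B)                # tr(adj(A(t)) A'(t))
assert abs(numeric - jacobi) < 1e-5
```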
It follows that the total derivative of the determinant at A_0 is the transpose of the adjugate:
d(\det A)_{A_0}=\operatorname{adj}(A_0)^{\mathsf T}.
See main article: Cayley–Hamilton theorem. Let p_A(t) be the characteristic polynomial of A. The Cayley–Hamilton theorem states that
p_A(A)=0.
Using this theorem, the adjugate can be expressed in terms of traces of powers of A:
\operatorname{adj}(A)=\sum_{s=0}^{n-1}A^{s}\sum_{k_1,k_2,\ldots,k_{n-1}}\prod_{\ell=1}^{n-1}\frac{(-1)^{k_\ell+1}}{\ell^{k_\ell}\,k_\ell!}\operatorname{tr}(A^{\ell})^{k_\ell},
where the sum is taken over s and all sequences of k_\ell\ge 0 satisfying the linear Diophantine equation
s+\sum_{\ell=1}^{n-1}\ell k_\ell=n-1.
For the 2 × 2 case, this gives
\operatorname{adj}(A)=I_2(\operatorname{tr}A)-A.
For the 3 × 3 case, this gives
\operatorname{adj}(A)=\frac{1}{2}I_3\left((\operatorname{tr}A)^{2}-\operatorname{tr}A^{2}\right)-A(\operatorname{tr}A)+A^{2}.
For the 4 × 4 case, this gives
\operatorname{adj}(A)=\frac{1}{6}I_4\left((\operatorname{tr}A)^{3}-3\operatorname{tr}A\operatorname{tr}A^{2}+2\operatorname{tr}A^{3}\right)-\frac{1}{2}A\left((\operatorname{tr}A)^{2}-\operatorname{tr}A^{2}\right)+A^{2}(\operatorname{tr}A)-A^{3}.
The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of A.
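The 3 × 3 trace formula can be checked against the relation adj(A) = (det A)A^{-1}. A short sketch assuming NumPy (the test matrix is an illustrative choice of my own):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
I3 = np.eye(3)

t1 = np.trace(A)        # tr A
t2 = np.trace(A @ A)    # tr A^2

# adj(A) = (1/2) I ((tr A)^2 - tr A^2) - A (tr A) + A^2
adj_trace = 0.5 * I3 * (t1**2 - t2) - A * t1 + A @ A
adj_inv = np.linalg.det(A) * np.linalg.inv(A)   # adj(A) for invertible A

assert np.allclose(adj_trace, adj_inv)
```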
More generally, the adjugate of a matrix of arbitrary dimension N can be computed using the Levi-Civita symbol and Einstein's summation convention:
(\operatorname{adj}(A))^{j_N}{}_{i_N}=\frac{1}{(N-1)!}\epsilon_{i_1 i_2\ldots i_N}\,\epsilon^{j_1 j_2\ldots j_N}A^{i_1}{}_{j_1}A^{i_2}{}_{j_2}\cdots A^{i_{N-1}}{}_{j_{N-1}}.
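This component formula can be implemented literally. The sketch below assumes NumPy; the function name is mine, and the index placement (free upper index as the row) is read off the formula. Summing over permutations suffices, since the Levi-Civita symbols vanish on repeated indices:

```python
import numpy as np
from itertools import permutations
from math import factorial

def perm_sign(p):
    # Sign of a permutation, computed from its inversion count.
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def adjugate_eps(A):
    """adj(A) via the Levi-Civita formula. O((N!)^2) work, so this is
    for illustration rather than practical use."""
    A = np.asarray(A, dtype=float)
    N = A.shape[0]
    adj = np.zeros((N, N))
    for I in permutations(range(N)):
        for J in permutations(range(N)):
            term = perm_sign(I) * perm_sign(J)   # epsilon_{i_1..i_N} epsilon^{j_1..j_N}
            for k in range(N - 1):
                term *= A[I[k], J[k]]            # product of N-1 matrix entries
            adj[J[-1], I[-1]] += term            # free indices j_N (row), i_N (column)
    return adj / factorial(N - 1)
```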
The adjugate can be viewed in abstract terms using exterior algebras. Let V be an n-dimensional vector space. The exterior product defines a bilinear pairing
V\times\wedge^{n-1}V\to\wedge^{n}V.
The vector space \wedge^{n}V is one-dimensional, so this pairing is perfect and determines an isomorphism
\phi\colon V\xrightarrow{\cong}\operatorname{Hom}(\wedge^{n-1}V,\wedge^{n}V).
Explicitly, \phi sends v\in V to the homomorphism \phi_v defined by
\phi_v(\alpha)=v\wedge\alpha.
Suppose that T\colon V\to V is a linear transformation. Pullback by the (n-1)st exterior power of T induces a morphism of \operatorname{Hom} spaces, and the adjugate of T is the composite
V\xrightarrow{\phi}\operatorname{Hom}(\wedge^{n-1}V,\wedge^{n}V)\xrightarrow{(\wedge^{n-1}T)^{*}}\operatorname{Hom}(\wedge^{n-1}V,\wedge^{n}V)\xrightarrow{\phi^{-1}}V.
If V=R^n is endowed with its canonical basis e_1,\ldots,e_n, and if the matrix of T in this basis is A, then the adjugate of T is the adjugate of A. To see why, give \wedge^{n-1}R^n the basis
\{e_1\wedge\ldots\wedge\hat{e}_k\wedge\ldots\wedge e_n\}_{k=1}^{n}.
Fix a basis vector e_i of R^n. The image of e_i under \phi is determined by where it sends basis vectors:
\phi_{e_i}(e_1\wedge\ldots\wedge\hat{e}_k\wedge\ldots\wedge e_n)=\begin{cases}(-1)^{i-1}e_1\wedge\ldots\wedge e_n,&\text{if }k=i,\\ 0&\text{otherwise.}\end{cases}
On basis vectors, the (n-1)st exterior power of T acts by
e_1\wedge\ldots\wedge\hat{e}_j\wedge\ldots\wedge e_n\mapsto\sum_{k=1}^{n}(\det A_{jk})\,e_1\wedge\ldots\wedge\hat{e}_k\wedge\ldots\wedge e_n.
Each of these terms maps to zero under \phi_{e_i} except the k=i term. Therefore the pullback of \phi_{e_i} is the linear transformation for which
e_1\wedge\ldots\wedge\hat{e}_j\wedge\ldots\wedge e_n\mapsto(-1)^{i-1}(\det A_{ji})\,e_1\wedge\ldots\wedge e_n,
that is, it equals
\sum_{j=1}^{n}(-1)^{i+j}(\det A_{ji})\,\phi_{e_j}.
Applying \phi^{-1} to each of these pullbacks shows that the adjugate of T acts on basis vectors by
e_i\mapsto\sum_{j=1}^{n}(-1)^{i+j}(\det A_{ji})\,e_j,
so its matrix is the adjugate of A.
If V is endowed with an inner product and a volume form, then the map \phi can be decomposed further. In this case, \phi can be understood as the composite of the Hodge star operator and dualization. Specifically, if \omega is the volume form, then it, together with the inner product, determines an isomorphism
\omega^{\vee}\colon\wedge^{n}V\to R.
This induces an isomorphism
\operatorname{Hom}(\wedge^{n-1}R^n,\wedge^{n}R^n)\cong\wedge^{n-1}(R^n)^{\vee}.
Under this identification, \phi sends v\in R^n to the linear functional
(\alpha\mapsto\omega^{\vee}(v\wedge\alpha))\in\wedge^{n-1}(R^n)^{\vee}.
Let A be an n × n matrix, and fix r ≥ 0. The rth higher adjugate of A is an \binom{n}{r}\times\binom{n}{r} matrix, denoted \operatorname{adj}_r A, whose entries are indexed by size-r subsets I and J of \{1,\ldots,n\}.