In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself.[1][2] That is, the matrix $A$ is idempotent if and only if $A^2 = A$. For this product $A^2$ to be defined, $A$ must necessarily be a square matrix.
Examples of $2 \times 2$ idempotent matrices are:

$\begin{pmatrix}1&0\\0&1\end{pmatrix} \qquad \begin{pmatrix}3&-6\\1&-2\end{pmatrix}$

Examples of $3 \times 3$ idempotent matrices are:

$\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix} \qquad \begin{pmatrix}2&-2&-4\\-1&3&4\\1&-2&-3\end{pmatrix}$
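Idempotency of the non-identity examples above can be checked numerically; a minimal sketch using NumPy:

```python
import numpy as np

# The non-identity example matrices; np.allclose guards against
# floating-point round-off when testing A @ A == A.
A2 = np.array([[3., -6.],
               [1., -2.]])
A3 = np.array([[ 2., -2., -4.],
               [-1.,  3.,  4.],
               [ 1., -2., -3.]])

for A in (A2, A3):
    print(np.allclose(A @ A, A))  # True for both
```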
If a matrix $\begin{pmatrix}a&b\\c&d\end{pmatrix}$ is idempotent, then

$a = a^2 + bc,$

$b = ab + bd,$ implying $b(1 - a - d) = 0$, so $b = 0$ or $d = 1 - a$,

$c = ca + cd,$ implying $c(1 - a - d) = 0$, so $c = 0$ or $d = 1 - a$,

$d = bc + d^2.$
Thus, a necessary condition for a $2 \times 2$ matrix to be idempotent is that either it is diagonal or its trace equals 1. For idempotent diagonal matrices, $a$ and $d$ must each be either 1 or 0.
If $b = c$, the matrix $\begin{pmatrix}a&b\\b&1-a\end{pmatrix}$ will be idempotent provided $a^2 + b^2 = a$, so $a$ satisfies the quadratic equation

$a^2 - a + b^2 = 0,$

or

$\left(a - \frac{1}{2}\right)^2 + b^2 = \frac{1}{4},$

which is a circle with center (1/2, 0) and radius 1/2. In terms of an angle θ,

$A = \frac{1}{2}\begin{pmatrix}1 - \cos\theta & \sin\theta \\ \sin\theta & 1 + \cos\theta\end{pmatrix}$

is idempotent.
However, $b = c$ is not a necessary condition: any matrix

$\begin{pmatrix}a&b\\c&1-a\end{pmatrix}$

with $a^2 + bc = a$ is idempotent.
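Both conditions can be verified numerically; a quick sanity check of the angle parametrization and of the more general condition $a^2 + bc = a$ (the sample values of $a$ and $b$ are arbitrary choices):

```python
import numpy as np

# Every A(theta) on the parametrized circle should satisfy A @ A == A.
for theta in np.linspace(0.0, 2.0 * np.pi, 9):
    A = 0.5 * np.array([[1 - np.cos(theta), np.sin(theta)],
                        [np.sin(theta),     1 + np.cos(theta)]])
    assert np.allclose(A @ A, A)

# The condition a**2 + b*c == a also works without b == c:
a, b = 0.3, 2.0
c = (a - a**2) / b          # choose c so that a**2 + b*c == a
A = np.array([[a, b],
              [c, 1 - a]])
print(np.allclose(A @ A, A))  # True
```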
The only non-singular idempotent matrix is the identity matrix; that is, if a non-identity matrix is idempotent, its number of independent rows (and columns) is less than its number of rows (and columns).
This can be seen from writing $A^2 = A$: if $A$ were non-singular, pre-multiplying both sides by $A^{-1}$ would give

$A = IA = A^{-1}A^2 = A^{-1}A = I.$
When an idempotent matrix is subtracted from the identity matrix, the result is also idempotent. This holds since
$(I - A)(I - A) = I - A - A + A^2 = I - A - A + A = I - A.$
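This complement property is easy to confirm numerically; a small sketch reusing the idempotent $2 \times 2$ example from above:

```python
import numpy as np

A = np.array([[3., -6.],
              [1., -2.]])   # an idempotent matrix from the examples above
I = np.eye(2)

# I - A is again idempotent.
print(np.allclose((I - A) @ (I - A), I - A))  # True
```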
If a matrix $A$ is idempotent, then $A^n = A$ for all positive integers $n$. This can be shown by induction: the base case $n = 1$ gives $A^1 = A$ trivially, and if $A^{k-1} = A$, then

$A^k = A^{k-1}A = AA = A.$
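A numerical illustration of this power property (using the same example matrix as before):

```python
import numpy as np

A = np.array([[3., -6.],
              [1., -2.]])   # idempotent, so every positive power equals A

for n in range(1, 6):
    assert np.allclose(np.linalg.matrix_power(A, n), A)
print("A**n == A for n = 1..5")
```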
An idempotent matrix is always diagonalizable.[3] Its eigenvalues are either 0 or 1: if $x$ is a nonzero eigenvector of an idempotent matrix $A$ with eigenvalue $\lambda$, then

$\lambda x = Ax = A^2x = A(\lambda x) = \lambda^2 x,$

which implies $\lambda^2 = \lambda$ and hence $\lambda \in \{0, 1\}.$
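The eigenvalue property can be observed directly; a sketch using the $3 \times 3$ example matrix from above:

```python
import numpy as np

A = np.array([[ 2., -2., -4.],
              [-1.,  3.,  4.],
              [ 1., -2., -3.]])          # idempotent example from above

# Every eigenvalue should be (numerically) 0 or 1.
eigvals = np.sort(np.linalg.eigvals(A).real)
print(np.round(eigvals, 6))
```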
The trace of an idempotent matrix — the sum of the elements on its main diagonal — equals the rank of the matrix and thus is always an integer. This provides an easy way of computing the rank, or alternatively an easy way of determining the trace of a matrix whose elements are not specifically known (which is helpful in statistics, for example, in establishing the degree of bias in using a sample variance as an estimate of a population variance).
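The trace–rank identity is easy to check for the example above: the diagonal entries sum to $2 + 3 - 3 = 2$, and the matrix indeed has rank 2.

```python
import numpy as np

A = np.array([[ 2., -2., -4.],
              [-1.,  3.,  4.],
              [ 1., -2., -3.]])          # idempotent

print(np.trace(A))                       # 2.0
print(np.linalg.matrix_rank(A))          # 2
```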
In regression analysis, the matrix $M = I - X(X^{\mathsf T}X)^{-1}X^{\mathsf T}$ is known to produce the residuals $e$ from the regression of the vector of dependent variables $y$ on the matrix of covariates $X$. (See the Applications section below.) Now, let $X_1$ be a matrix formed from a subset of the columns of $X$, and let

$M_1 = I - X_1(X_1^{\mathsf T}X_1)^{-1}X_1^{\mathsf T}.$

The matrices $M$ and $M_1$ are both idempotent, and moreover $MM_1 = M$. This follows because $MX_1 = 0$: the columns of $X_1$ are a subset of the columns of $X$, and $MX = 0$. Since $M$ and $M_1$ are symmetric, $M_1M = (MM_1)^{\mathsf T} = M$ as well. Consequently, the difference $(M_1 - M)$ is itself idempotent, since $(M_1 - M)^2 = M_1 - M_1M - MM_1 + M = M_1 - M$, and $(M_1 - M)M = M_1M - M^2 = 0$; that is, $(M_1 - M)$ is orthogonal to $M$.
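These identities can be confirmed numerically; a sketch with an arbitrary randomly generated design matrix (the data here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 4))         # illustrative design, 4 covariates
X1 = X[:, :2]                            # a subset of the columns of X

def annihilator(Z):
    """M = I - Z (Z^T Z)^{-1} Z^T, the residual-maker for design Z."""
    n = Z.shape[0]
    return np.eye(n) - Z @ np.linalg.inv(Z.T @ Z) @ Z.T

M, M1 = annihilator(X), annihilator(X1)

assert np.allclose(M @ M, M) and np.allclose(M1 @ M1, M1)  # both idempotent
assert np.allclose(M @ X1, 0)                              # M X1 = 0
assert np.allclose(M @ M1, M)                              # M M1 = M
assert np.allclose((M1 - M) @ (M1 - M), M1 - M)            # M1 - M idempotent
assert np.allclose((M1 - M) @ M, 0)                        # orthogonal to M
```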
Any matrix similar to an idempotent matrix is itself idempotent; that is, idempotency is preserved under a change of basis. This can be shown by squaring the transformed matrix $SAS^{-1}$, where $A$ is idempotent and $S$ is invertible:

$(SAS^{-1})^2 = (SAS^{-1})(SAS^{-1}) = SA(S^{-1}S)AS^{-1} = SA^2S^{-1} = SAS^{-1}.$
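A short numerical check of this similarity property (the change-of-basis matrix $S$ here is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[3., -6.],
              [1., -2.]])               # idempotent
S = np.array([[2., 1.],
              [1., 1.]])               # any invertible change-of-basis matrix

B = S @ A @ np.linalg.inv(S)
print(np.allclose(B @ B, B))           # True: similarity preserves idempotency
```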
Idempotent matrices arise frequently in regression analysis and econometrics. For example, in ordinary least squares, the regression problem is to choose a vector $\beta$ of coefficient estimates so as to minimize the sum of squared residuals (mispredictions) $e_i$: in matrix form,

Minimize $(y - X\beta)^{\mathsf T}(y - X\beta)$

where $y$ is a vector of dependent variable observations and $X$ is a matrix each of whose columns is a column of observations on one of the independent variables. The resulting estimator is

$\hat\beta = \left(X^{\mathsf T}X\right)^{-1}X^{\mathsf T}y,$

where superscript T indicates a transpose, and the vector of residuals is[2]

$\hat{e} = y - X\hat\beta = y - X\left(X^{\mathsf T}X\right)^{-1}X^{\mathsf T}y = \left[I - X\left(X^{\mathsf T}X\right)^{-1}X^{\mathsf T}\right]y = My.$

Here both $M$ and $X\left(X^{\mathsf T}X\right)^{-1}X^{\mathsf T}$ are idempotent and symmetric matrices, a fact which allows simplification when the sum of squared residuals is computed:

$\hat{e}^{\mathsf T}\hat{e} = (My)^{\mathsf T}(My) = y^{\mathsf T}M^{\mathsf T}My = y^{\mathsf T}MMy = y^{\mathsf T}My.$
The idempotency of $M$ plays a role in other calculations as well, such as in determining the variance of the estimator $\hat{\beta}$.
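The OLS identities above can be sketched numerically; the synthetic data and the true coefficient vector below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 3))                    # illustrative design matrix
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(30)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)        # OLS estimate
e_hat = y - X @ beta_hat                            # residual vector

M = np.eye(30) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual-maker matrix
assert np.allclose(M @ y, e_hat)                    # e-hat = M y
assert np.allclose(e_hat @ e_hat, y @ M @ y)        # e'e = y'My by idempotency
```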
An idempotent linear operator $P$ is a projection operator on the range space $R(P)$ along its null space $N(P)$. $P$ is an orthogonal projection operator if and only if it is idempotent and symmetric.
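A small NumPy illustration of the distinction between oblique and orthogonal projections (the specific matrices are just convenient examples):

```python
import numpy as np

# Oblique projection: idempotent but not symmetric.
# It maps (x, y) to (x + y, 0): range = span{(1,0)}, null space = span{(1,-1)}.
P_oblique = np.array([[1., 1.],
                      [0., 0.]])

# Orthogonal projection onto span{(1,1)}: idempotent and symmetric.
P_orth = 0.5 * np.array([[1., 1.],
                         [1., 1.]])

for P in (P_oblique, P_orth):
    assert np.allclose(P @ P, P)            # both are projections
print(np.allclose(P_oblique, P_oblique.T))  # False
print(np.allclose(P_orth, P_orth.T))        # True
```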