In mathematics, given a field F, nonnegative integers m, n, and a matrix A ∈ F^{m×n}, a rank decomposition or rank factorization of A is a factorization of A of the form A = CF, where C ∈ F^{m×r} and F ∈ F^{r×n}, with r = rank(A) the rank of A.
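To make the shapes concrete, here is a minimal sketch in NumPy (an illustration added here, with an arbitrarily chosen example; it is not part of the original article):

```python
import numpy as np

# A hypothetical rank-2 example: C has full column rank, F has full row rank,
# and their product A = CF is a 4x3 matrix of rank r = 2.
C = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [2., 1.]])          # shape (m, r) = (4, 2)
F = np.array([[1., 2., 3.],
              [0., 1., 1.]])      # shape (r, n) = (2, 3)
A = C @ F                         # shape (m, n) = (4, 3)

print(np.linalg.matrix_rank(A))   # 2, i.e. r = rank(A)
```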
Every finite-dimensional matrix has a rank decomposition: Let A be an m × n matrix whose column rank is r. Therefore, there are r linearly independent columns in A; equivalently, the dimension of the column space of A is r. Let c_1, c_2, …, c_r be any basis for the column space of A and place them as column vectors to form the m × r matrix C = [c_1, c_2, …, c_r]. Therefore, every column vector of A is a linear combination of the columns of C. To be precise, if A = [a_1, a_2, …, a_n] is an m × n matrix with a_j as the j-th column, then
a_j = f_{1j} c_1 + f_{2j} c_2 + \cdots + f_{rj} c_r,
where the f_{ij} are the scalar coefficients of a_j in terms of the basis c_1, c_2, …, c_r. This implies that A = CF, where f_{ij} is the (i, j)-th element of F.
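Numerically, the same construction can be sketched as follows (an illustration only; column-pivoted QR is one of several ways to pick r linearly independent columns, and least squares recovers the coefficients f_{ij}):

```python
import numpy as np
from scipy.linalg import qr

A = np.array([[1., 3., 1., 4.],
              [2., 7., 3., 9.],
              [1., 5., 3., 1.],
              [1., 2., 0., 8.]])

r = np.linalg.matrix_rank(A)                # column rank of A
_, _, piv = qr(A, pivoting=True)            # column-pivoted QR orders columns by independence
cols = np.sort(piv[:r])                     # indices of r linearly independent columns of A
C = A[:, cols]                              # a basis of the column space, as columns
F, *_ = np.linalg.lstsq(C, A, rcond=None)   # coordinates of every a_j in that basis
assert np.allclose(C @ F, A)                # A = CF
```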
If A = C_1 F_1 is a rank factorization, taking C_2 = C_1 R and F_2 = R^{-1} F_1 gives another rank factorization for any invertible matrix R of compatible dimensions.
Conversely, if A = C_1 F_1 = C_2 F_2 are two rank factorizations of A, then there exists an invertible matrix R such that C_1 = C_2 R and F_1 = R^{-1} F_2.[1]
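This non-uniqueness is easy to demonstrate numerically; in the sketch below (illustrative, with an arbitrarily chosen invertible R), both factor pairs multiply back to the same matrix:

```python
import numpy as np

C1 = np.array([[1., 0.],
               [0., 1.],
               [1., 1.]])                  # 3x2, full column rank
F1 = np.array([[1., 2., 0., 1.],
               [0., 1., 1., 1.]])          # 2x4, full row rank
A = C1 @ F1

R = np.array([[2., 1.],
              [1., 1.]])                   # an invertible 2x2 matrix (det = 1)
C2, F2 = C1 @ R, np.linalg.inv(R) @ F1     # a second rank factorization of A
assert np.allclose(C2 @ F2, A)
```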
In practice, we can construct one specific rank factorization as follows: we can compute B, the reduced row echelon form of A. Then C is obtained by removing from A all non-pivot columns (which can be determined by looking for columns in B which do not contain a pivot), and F is obtained by eliminating any all-zero rows of B.
Note: For a full-rank square matrix (i.e. when n = m = r), this procedure will yield the trivial result C = A and F = B = I_n (the n × n identity matrix).
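In code, the procedure might look like the following sketch (one possible implementation, written with SymPy because its exact-arithmetic rref also reports the pivot columns):

```python
import sympy as sp

def rank_factorization(A: sp.Matrix):
    """Return (C, F) with A = C*F, built from the reduced row echelon form."""
    B, pivots = A.rref()                              # B: RREF of A; pivots: pivot column indices
    C = A.extract(list(range(A.rows)), list(pivots))  # keep only the pivot columns of A
    F = B[:len(pivots), :]                            # drop the all-zero rows of B
    return C, F
```

Applied to the matrix A of the example below, this returns exactly the C and F shown there.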
Consider the matrix
A=\begin{bmatrix}1&3&1&4\\ 2&7&3&9\\ 1&5&3&1\\ 1&2&0&8\end{bmatrix} \sim\begin{bmatrix}1&0&-2&0\\ 0&1&1&0\\ 0&0&0&1\\ 0&0&0&0\end{bmatrix}=B.
B is in reduced row echelon form.
Then C is obtained by removing the third column of A, the only one which is not a pivot column, and F by getting rid of the last row of zeroes from B, so
C=\begin{bmatrix}1&3&4\\ 2&7&9\\ 1&5&1\\ 1&2&8\end{bmatrix},\qquad F=\begin{bmatrix}1&0&-2&0\\ 0&1&1&0\\ 0&0&0&1\end{bmatrix}.
It is straightforward to check that
A=\begin{bmatrix}1&3&1&4\\ 2&7&3&9\\ 1&5&3&1\\ 1&2&0&8\end{bmatrix} =\begin{bmatrix}1&3&4\\ 2&7&9\\ 1&5&1\\ 1&2&8\end{bmatrix} \begin{bmatrix}1&0&-2&0\\ 0&1&1&0\\ 0&0&0&1\end{bmatrix} =CF.
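The check takes one line in NumPy (the matrices below are exactly those of the example, so the comparison is exact in integer arithmetic):

```python
import numpy as np

A = np.array([[1, 3, 1, 4], [2, 7, 3, 9], [1, 5, 3, 1], [1, 2, 0, 8]])
C = np.array([[1, 3, 4], [2, 7, 9], [1, 5, 1], [1, 2, 8]])
F = np.array([[1, 0, -2, 0], [0, 1, 1, 0], [0, 0, 0, 1]])
assert (C @ F == A).all()   # A = CF holds entrywise
```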
Let P be an n × n permutation matrix such that AP = (C, D) in block partitioned form, where the columns of C are the r pivot columns of A. Every column of D is a linear combination of the columns of C, so there is a matrix G such that D = CG, where the columns of G contain the coefficients of each of those linear combinations. So AP = (C, CG) = C(I_r, G), I_r being the r × r identity matrix. We will show now that (I_r, G) = FP.
Transforming A into its reduced row echelon form B amounts to left-multiplying by a matrix E which is a product of elementary matrices, so EAP = BP = EC(I_r, G), where EC = \begin{pmatrix}I_r\\ 0\end{pmatrix}. We then can write BP = \begin{pmatrix}I_r&G\\ 0&0\end{pmatrix}, which allows us to identify (I_r, G) = FP, i.e. the nonzero rows of the reduced echelon form, with the same permutation on the columns as we did for A. We thus have AP = C(I_r, G) = CFP, and since P is invertible this implies A = CF, and the proof is complete.
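For the example matrix above, the block identity AP = C(I_r, G) can be checked directly (an illustrative verification; the permutation and G are written out by hand):

```python
import numpy as np

A = np.array([[1., 3., 1., 4.],
              [2., 7., 3., 9.],
              [1., 5., 3., 1.],
              [1., 2., 0., 8.]])
P = np.eye(4)[:, [0, 1, 3, 2]]      # permutation moving the three pivot columns first
C = A[:, [0, 1, 3]]                 # the pivot columns of A
G = np.array([[-2.], [1.], [0.]])   # the non-pivot column equals C @ G
assert np.allclose(A @ P, np.hstack([C, C @ G]))   # AP = (C, CG) = C (I_r, G)
```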
If F ∈ {ℝ, ℂ}, then a rank factorization of A can also be obtained from a singular value decomposition of A:

A=U\Sigma V^* =\begin{bmatrix}U_1&U_2\end{bmatrix} \begin{bmatrix}\Sigma_r&0\\ 0&0\end{bmatrix} \begin{bmatrix}V_1^*\\ V_2^*\end{bmatrix} =U_1\left(\Sigma_r V_1^*\right).
Since U_1 is a full-column-rank matrix and \Sigma_r V_1^* is a full-row-rank matrix, we can take C = U_1 and F = \Sigma_r V_1^*.
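In floating-point arithmetic this is the standard numerically stable route; a minimal NumPy sketch follows (the tolerance used to decide the numerical rank is a choice made here, not part of the construction):

```python
import numpy as np

def rank_factorization_svd(A, tol=1e-12):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    r = int((s > tol * s[0]).sum())   # numerical rank: singular values above a cutoff
    C = U[:, :r]                      # U_1: full column rank
    F = s[:r, None] * Vh[:r, :]       # Sigma_r V_1^*: full row rank
    return C, F

A = np.array([[1., 3., 1., 4.],
              [2., 7., 3., 9.],
              [1., 5., 3., 1.],
              [1., 2., 0., 8.]])
C, F = rank_factorization_svd(A)
assert np.allclose(C @ F, A)
```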
An immediate consequence of rank factorization is that the rank of A is equal to the rank of its transpose A^T. Since the columns of A are the rows of A^T, the column rank of A equals its row rank.
Proof: To see why this is true, let us first define rank to mean column rank. Since A = CF, it follows that A^T = F^T C^T. From the definition of matrix multiplication, this means that each column of A^T is a linear combination of the columns of F^T. Therefore, the column space of A^T is contained within the column space of F^T and, hence, rank(A^T) ≤ rank(F^T).
Now, F^T is n × r, so there are r columns in F^T and, hence, rank(A^T) ≤ r = rank(A). This proves that rank(A^T) ≤ rank(A).
Now apply the result to A^T to obtain the reverse inequality: since (A^T)^T = A, we can write rank(A) = rank((A^T)^T) ≤ rank(A^T). This proves rank(A) ≤ rank(A^T).
We have, therefore, proved rank(A^T) ≤ rank(A) and rank(A) ≤ rank(A^T), so rank(A) = rank(A^T).
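A quick numerical spot-check of this consequence (illustrative only; numpy.linalg.matrix_rank computes the rank from the singular values, which A and A^T share):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))   # a 5x7 matrix of rank 3 (almost surely)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)   # rank(A) == rank(A^T)
```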