In linear algebra, the column space (also called the range or image) of a matrix A is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
Let F be a field. The column space of an m × n matrix with entries from F is a linear subspace of the m-space F^m. The dimension of the column space is called the rank of the matrix and is at most min(m, n). A definition for matrices over a ring is also possible.
The row space is defined similarly.
The row space and the column space of a matrix A are sometimes denoted as C(A^T) and C(A) respectively.[2]
This article considers matrices of real numbers, so the row and column spaces are subspaces of \R^n and \R^m respectively.
Let A be an m-by-n matrix. Then:
1. rank(A) = dim(rowsp(A)) = dim(colsp(A)),
2. rank(A) equals the number of pivots in any echelon form of A,
3. rank(A) equals the maximum number of linearly independent rows or columns of A.
If the matrix represents a linear transformation, the column space of the matrix equals the image of this linear transformation.
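These equalities can be checked numerically. The following is a small SymPy sketch (illustrative, not part of the original text), using the 3 × 3 matrix that appears in a later example:

```python
import sympy as sp

# The 3 x 3 matrix used in a later example; any matrix would do.
A = sp.Matrix([[1, 3, 2],
               [2, 7, 4],
               [1, 5, 2]])

rank = A.rank()                    # number of pivots in an echelon form
dim_row = len(A.rowspace())        # dimension of the row space
dim_col = len(A.columnspace())     # dimension of the column space
assert rank == dim_row == dim_col  # all three coincide
print(rank)  # 2
```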
The column space of a matrix A is the set of all linear combinations of the columns in A. If A = [a_1, …, a_n], then colsp(A) = span{a_1, …, a_n}.
Given a matrix A, the action of A on a vector x returns a linear combination of the columns of A with the coordinates of x as coefficients; that is, the columns of the matrix generate the column space.
Given a matrix J:
J= \begin{bmatrix} 2&4&1&3&2\\ -1&-2&1&0&5\\ 1&6&2&2&2\\ 3&6&2&5&1 \end{bmatrix}
the rows are
r_1=\begin{bmatrix}2&4&1&3&2\end{bmatrix},
r_2=\begin{bmatrix}-1&-2&1&0&5\end{bmatrix},
r_3=\begin{bmatrix}1&6&2&2&2\end{bmatrix},
r_4=\begin{bmatrix}3&6&2&5&1\end{bmatrix}.
Consequently, the row space of J is the subspace of \R^5 spanned by \{r_1, r_2, r_3, r_4\}. Since these four row vectors are linearly independent, the row space is 4-dimensional.
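As a quick numerical check (a NumPy sketch, illustrative only):

```python
import numpy as np

J = np.array([[ 2,  4, 1, 3, 2],
              [-1, -2, 1, 0, 5],
              [ 1,  6, 2, 2, 2],
              [ 3,  6, 2, 5, 1]])

# Rank 4 means the four rows are linearly independent, so the
# row space of J is a 4-dimensional subspace of R^5.
print(np.linalg.matrix_rank(J))  # 4
```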
Let F be a field of scalars. Let A be an m × n matrix, with column vectors v_1, v_2, …, v_n. A linear combination of these vectors is any vector of the form
c_1 v_1 + c_2 v_2 + \cdots + c_n v_n,
where c_1, c_2, …, c_n are scalars. The set of all possible linear combinations of v_1, …, v_n is called the column space of A.
Any linear combination of the column vectors of a matrix A can be written as the product of A with a column vector:
\begin{array}{rcl} A\begin{bmatrix}c_1\\ \vdots\\ c_n\end{bmatrix} &=& \begin{bmatrix}a_{11}& \cdots &a_{1n}\\ \vdots&\ddots&\vdots\\ a_{m1}& \cdots &a_{mn}\end{bmatrix}\begin{bmatrix}c_1\\ \vdots\\ c_n\end{bmatrix} =\begin{bmatrix}c_1 a_{11}+ \cdots +c_n a_{1n}\\ \vdots\\ c_1 a_{m1}+ \cdots +c_n a_{mn}\end{bmatrix} =c_1\begin{bmatrix}a_{11}\\ \vdots\\ a_{m1}\end{bmatrix}+ \cdots +c_n\begin{bmatrix}a_{1n}\\ \vdots\\ a_{mn}\end{bmatrix}\\ &=& c_1 v_1 + \cdots + c_n v_n \end{array}
Therefore, the column space of A consists of all possible products Ax, for x ∈ F^n. This is the same as the image (or range) of the corresponding matrix transformation.
If
A=\begin{bmatrix}1&0\\ 0&1\\ 2&0\end{bmatrix},
then the column vectors are v_1 = (1, 0, 2)^T and v_2 = (0, 1, 0)^T. A linear combination of v_1 and v_2 is any vector of the form c_1 v_1 + c_2 v_2 = (c_1, c_2, 2c_1)^T. The set of all such vectors is the column space of A; in this case, it is precisely the set of vectors (x, y, z) ∈ \R^3 satisfying z = 2x (a plane through the origin in three-dimensional space).
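To make this concrete, here is a small NumPy sketch (illustrative only, with arbitrary coefficients): each product Ac is a combination of the columns, and the result lies on the plane z = 2x:

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [2., 0.]])
c = np.array([3., -2.])  # arbitrary coefficients c1, c2

# A @ c is exactly c1 * (first column) + c2 * (second column) ...
assert np.allclose(A @ c, c[0] * A[:, 0] + c[1] * A[:, 1])

# ... and the resulting vector (x, y, z) lies on the plane z = 2x.
x, y, z = A @ c
assert np.isclose(z, 2 * x)
```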
The columns of span the column space, but they may not form a basis if the column vectors are not linearly independent. Fortunately, elementary row operations do not affect the dependence relations between the column vectors. This makes it possible to use row reduction to find a basis for the column space.
For example, consider the matrix
A=\begin{bmatrix}1&3&1&4\\ 2&7&3&9\\ 1&5&3&1\\ 1&2&0&8\end{bmatrix}.
To find a basis, reduce A to reduced row echelon form:
\begin{bmatrix}1&3&1&4\\ 2&7&3&9\\ 1&5&3&1\\ 1&2&0&8\end{bmatrix} \sim\begin{bmatrix}1&3&1&4\\ 0&1&1&1\\ 0&2&2&-3\\ 0&-1&-1&4\end{bmatrix} \sim\begin{bmatrix}1&0&-2&1\\ 0&1&1&1\\ 0&0&0&-5\\ 0&0&0&5\end{bmatrix} \sim\begin{bmatrix}1&0&-2&0\\ 0&1&1&0\\ 0&0&0&1\\ 0&0&0&0\end{bmatrix}.
The first, second, and fourth columns of the reduced matrix contain the pivots, so the corresponding columns of the original matrix form a basis for the column space:
\begin{bmatrix}1\\ 2\\ 1\\ 1\end{bmatrix},\quad \begin{bmatrix}3\\ 7\\ 5\\ 2\end{bmatrix},\quad \begin{bmatrix}4\\ 9\\ 1\\ 8\end{bmatrix}.
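A short SymPy sketch of this procedure (illustrative; SymPy's rref reports the pivot columns directly):

```python
import sympy as sp

A = sp.Matrix([[1, 3, 1, 4],
               [2, 7, 3, 9],
               [1, 5, 3, 1],
               [1, 2, 0, 8]])

# rref() returns the reduced row echelon form and the pivot column indices.
R, pivots = A.rref()
print(pivots)  # (0, 1, 3) -- columns 1, 2, and 4 in 1-based terms

# The pivot columns of the ORIGINAL matrix form a basis for its column space.
basis = [A[:, j] for j in pivots]
```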
This row-reduction algorithm can be used in general to find the dependence relations between any set of vectors, and to pick out a basis from any spanning set. Note that finding a basis for the column space of A is equivalent to finding a basis for the row space of the transpose matrix A^T.
To find the basis in a practical setting (e.g., for large matrices), the singular-value decomposition is typically used.
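For instance, a minimal sketch of the SVD approach in NumPy (the function name and tolerance are illustrative choices, not a standard API):

```python
import numpy as np

def column_space_basis(A, tol=1e-10):
    """Return an orthonormal basis for the column space of A.

    The left singular vectors whose singular values exceed the
    tolerance span the column space; this is numerically more robust
    than row reduction for large or ill-conditioned matrices.
    """
    U, s, Vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return U[:, :rank]
```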
See main article: Rank (linear algebra). The dimension of the column space is called the rank of the matrix. The rank is equal to the number of pivots in the reduced row echelon form, and is the maximum number of linearly independent columns that can be chosen from the matrix. For example, the 4 × 4 matrix in the example above has rank three.
Because the column space is the image of the corresponding matrix transformation, the rank of a matrix is the same as the dimension of the image. For example, the transformation \R^4 \to \R^4 described by the matrix above maps all of \R^4 to some three-dimensional subspace.
The nullity of a matrix is the dimension of the null space, and is equal to the number of columns in the reduced row echelon form that do not have pivots.[4] The rank and nullity of a matrix A with n columns are related by the equation:
\operatorname{rank}(A)+\operatorname{nullity}(A)=n.
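A quick check of this identity on the 4 × 4 example matrix above (a SymPy sketch, illustrative only):

```python
import sympy as sp

A = sp.Matrix([[1, 3, 1, 4],
               [2, 7, 3, 9],
               [1, 5, 3, 1],
               [1, 2, 0, 8]])

rank = A.rank()                  # 3 pivot columns
nullity = len(A.nullspace())     # 1 basis vector for the null space
assert rank + nullity == A.cols  # n = 4 columns in total
```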
The left null space of A is the set of all vectors x such that x^T A = 0^T. It is the same as the null space of the transpose of A. The product of the matrix A^T and the vector x can be written in terms of the dot product of vectors:
A^\mathsf{T}x=\begin{bmatrix}v_1 \cdot x\\ v_2 \cdot x\\ \vdots\\ v_n \cdot x\end{bmatrix},
because the row vectors of A^T are transposes of the column vectors v_k of A. Thus A^T x = 0 if and only if x is orthogonal (perpendicular) to each of the column vectors of A.
It follows that the left null space (the null space of A^T) is the orthogonal complement to the column space of A.
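The same relationship can be illustrated computationally (a SymPy sketch; the matrix is the 4 × 4 example above):

```python
import sympy as sp

A = sp.Matrix([[1, 3, 1, 4],
               [2, 7, 3, 9],
               [1, 5, 3, 1],
               [1, 2, 0, 8]])

# The left null space of A is the null space of its transpose.
left_null = A.T.nullspace()

# Each left-null vector x satisfies x^T A = 0, i.e. x is orthogonal
# to every column of A, as claimed.
for x in left_null:
    assert x.T * A == sp.zeros(1, A.cols)
```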
For a matrix, the column space, row space, null space, and left null space are sometimes referred to as the four fundamental subspaces.
Similarly, the column space (sometimes disambiguated as right column space) can be defined for matrices over a ring K as
\sum_{k=1}^{n} v_k c_k
for any c_1, …, c_n, with replacement of the vector m-space with a "right free module", which changes the order of scalar multiplication of the vector v_k and the scalar c_k so that it is written in the unusual order vector–scalar.
Let F be a field of scalars. Let A be an m × n matrix, with row vectors r_1, r_2, …, r_m. A linear combination of these vectors is any vector of the form
c_1 r_1 + c_2 r_2 + \cdots + c_m r_m,
where c_1, c_2, …, c_m are scalars. The set of all possible linear combinations of r_1, …, r_m is called the row space of A.
For example, if
A=\begin{bmatrix}1&0&2\\ 0&1&0\end{bmatrix},
then the row vectors are r_1 = (1, 0, 2) and r_2 = (0, 1, 0). A linear combination of r_1 and r_2 is any vector of the form
c_1\begin{bmatrix}1&0&2\end{bmatrix}+c_2\begin{bmatrix}0&1&0\end{bmatrix}=\begin{bmatrix}c_1&c_2&2c_1\end{bmatrix}.
The set of all such vectors is the row space of A; in this case, it is precisely the set of vectors (x, y, z) ∈ \R^3 satisfying z = 2x.
For a matrix that represents a homogeneous system of linear equations, the row space consists of all linear equations that follow from those in the system.
The column space of A is equal to the row space of A^T.
The row space is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the row space.
For example, consider the matrix
A=\begin{bmatrix}1&3&2\\ 2&7&4\\ 1&5&2\end{bmatrix}.
where r_1 = (1, 3, 2), r_2 = (2, 7, 4), and r_3 = (1, 5, 2) denote the rows. To find a basis, reduce A to row echelon form:
\begin{align} \begin{bmatrix}1&3&2\\ 2&7&4\\ 1&5&2\end{bmatrix} &\xrightarrow{r_2 - 2r_1 \,\to\, r_2} \begin{bmatrix}1&3&2\\ 0&1&0\\ 1&5&2\end{bmatrix} \xrightarrow{r_3 - r_1 \,\to\, r_3} \begin{bmatrix}1&3&2\\ 0&1&0\\ 0&2&0\end{bmatrix}\\ &\xrightarrow{r_3 - 2r_2 \,\to\, r_3} \begin{bmatrix}1&3&2\\ 0&1&0\\ 0&0&0\end{bmatrix} \xrightarrow{r_1 - 3r_2 \,\to\, r_1} \begin{bmatrix}1&0&2\\ 0&1&0\\ 0&0&0\end{bmatrix}. \end{align}
The nonzero rows of the echelon form, (1, 0, 2) and (0, 1, 0), are a basis for the row space.
This algorithm can be used in general to find a basis for the span of a set of vectors. If the matrix is further simplified to reduced row echelon form, then the resulting basis is uniquely determined by the row space.
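In SymPy, for example, the nonzero rows of the reduced row echelon form give this canonical basis (an illustrative sketch):

```python
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 7, 4],
               [1, 5, 2]])

R, pivots = A.rref()
# The nonzero rows of the RREF come first; there are rank(A) of them,
# and they form the canonical basis of the row space.
basis = [R.row(i) for i in range(len(pivots))]
print(basis)  # [Matrix([[1, 0, 2]]), Matrix([[0, 1, 0]])]
```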
It is sometimes convenient to find a basis for the row space from among the rows of the original matrix instead (for example, this result is useful in giving an elementary proof that the determinantal rank of a matrix is equal to its rank). Since row operations can affect linear dependence relations of the row vectors, such a basis is instead found indirectly using the fact that the column space of A^T is equal to the row space of A. Using the example matrix A above, find A^T and reduce it to row echelon form:
A^\mathsf{T}=\begin{bmatrix}1&2&1\\ 3&7&5\\ 2&4&2\end{bmatrix}\sim\begin{bmatrix}1&2&1\\ 0&1&2\\ 0&0&0\end{bmatrix}.
The pivots indicate that the first two columns of A^T form a basis of the column space of A^T. Therefore, the first two rows of A (before any row reductions) also form a basis of the row space of A.
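A sketch of this indirect approach in SymPy (illustrative only):

```python
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 7, 4],
               [1, 5, 2]])

# Pivot columns of A^T mark linearly independent columns of A^T,
# i.e. linearly independent rows of the original A.
_, pivots = A.T.rref()
row_basis = [A.row(i) for i in pivots]  # the first two rows of A
```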
See main article: Rank (linear algebra). The dimension of the row space is called the rank of the matrix. This is the same as the maximum number of linearly independent rows that can be chosen from the matrix, or equivalently the number of pivots. For example, the 3 × 3 matrix in the example above has rank two.[6]
The rank of a matrix is also equal to the dimension of the column space. The dimension of the null space is called the nullity of the matrix, and is related to the rank by the following equation:
\operatorname{rank}(A)+\operatorname{nullity}(A)=n,
where n is the number of columns of the matrix A. The equation above is known as the rank–nullity theorem.
The null space of the matrix A is the set of all vectors x for which Ax = 0. The product of the matrix A and the vector x can be written in terms of the dot product of vectors:
Ax=\begin{bmatrix}r_1 \cdot x\\ r_2 \cdot x\\ \vdots\\ r_m \cdot x\end{bmatrix},
where r_1, …, r_m are the row vectors of A. Thus Ax = 0 if and only if x is orthogonal (perpendicular) to each of the row vectors of A.
It follows that the null space of A is the orthogonal complement to the row space. For example, if the row space is a plane through the origin in three dimensions, then the null space will be the perpendicular line through the origin. This provides a proof of the rank–nullity theorem (see the discussion of dimension above).
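This orthogonality can be checked directly (a SymPy sketch, illustrative only):

```python
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 7, 4],
               [1, 5, 2]])

# Every null-space vector x satisfies Ax = 0, so x is orthogonal to
# each row of A and hence to the entire row space.
for x in A.nullspace():
    assert A * x == sp.zeros(A.rows, 1)
```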
The row space and null space are two of the four fundamental subspaces associated with a matrix (the other two being the column space and left null space).
If V and W are vector spaces, then the kernel of a linear transformation T: V → W is the set of vectors v ∈ V for which T(v) = 0. The kernel of a linear transformation is analogous to the null space of a matrix.
If V is an inner product space, then the orthogonal complement to the kernel can be thought of as a generalization of the row space. This is sometimes called the coimage of T. The transformation T is one-to-one on its coimage, and the coimage maps isomorphically onto the image of T.
When V is not an inner product space, the coimage of T can be defined as the quotient space V / ker(T).