In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the part of the domain which is mapped to the zero vector of the co-domain; the kernel is always a linear subspace of the domain.[1] That is, given a linear map L : V → W between two vector spaces V and W, the kernel of L is the vector space of all elements v of V such that L(v) = 0, where 0 denotes the zero vector in W,[2] or more symbolically:

\ker(L) = \{ v \in V : L(v) = \mathbf{0} \} = L^{-1}(\mathbf{0}).
The kernel of L is a linear subspace of the domain V.[3] In the linear map

L : V \to W,

two elements of V have the same image in W if and only if their difference lies in the kernel of L, that is,

L(v_1) = L(v_2) \iff L(v_1 - v_2) = \mathbf{0}.
From this, it follows that the image of L is isomorphic to the quotient of V by the kernel:

\operatorname{im}(L) \cong V / \ker(L).

In the case where V is finite-dimensional, this implies the rank–nullity theorem:

\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V),

where the term rank refers to the dimension of the image of L, \dim(\operatorname{im} L), and the term nullity refers to the dimension of the kernel of L, \dim(\ker L).
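As a simple illustration of the theorem (an example added here for concreteness, not taken from the surrounding text), consider the projection L : R^3 → R^2 defined by L(x, y, z) = (x, y). Its kernel is the z-axis, the set of vectors (0, 0, z), which has dimension 1; its image is all of R^2, which has dimension 2; and indeed 1 + 2 = 3 = dim(R^3).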
When V is an inner product space, the quotient

V/\ker(L)

can be identified with the orthogonal complement in V of

\ker(L).

This is the generalization to linear operators of the row space, or coimage, of a matrix.
See main article: Module (mathematics). The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply.
See main article: Topological vector space. If V and W are topological vector spaces such that W is finite-dimensional, then a linear operator L : V → W is continuous if and only if the kernel of L is a closed subspace of V.
Consider a linear map represented as an m × n matrix A with coefficients in a field K (typically the real numbers R or the complex numbers C), that is operating on column vectors x with n components over K. The kernel of this linear map is the set of solutions to the equation Ax = 0, where 0 is understood as the zero vector.
The kernel of a matrix A over a field K is a linear subspace of K^n. That is, the kernel of A, the set Null(A), has the following three properties:

1. Null(A) always contains the zero vector, since A0 = 0.
2. If x ∈ Null(A) and y ∈ Null(A), then x + y ∈ Null(A), since A(x + y) = Ax + Ay = 0 + 0 = 0.
3. If x ∈ Null(A) and c is a scalar in K, then cx ∈ Null(A), since A(cx) = c(Ax) = c0 = 0.
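These three properties can be checked numerically. A minimal sketch (Python with NumPy; the matrix and the kernel vector are chosen here purely for illustration):

```python
import numpy as np

# A small example matrix; u is a vector in its kernel (A @ u == 0).
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
u = np.array([1.0, 1.0, 1.0])
v = 2.5 * u                               # another element of the kernel

assert np.allclose(A @ np.zeros(3), 0)    # 1. the kernel contains the zero vector
assert np.allclose(A @ (u + v), 0)        # 2. closed under addition
assert np.allclose(A @ (7.0 * u), 0)      # 3. closed under scalar multiplication
```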
See main article: Rank–nullity theorem. The product Ax can be written in terms of the dot product of vectors as follows:

Ax = \begin{bmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{bmatrix}.

Here, a_1, \ldots, a_m denote the rows of the matrix A. It follows that x is in the kernel of A if and only if x is orthogonal (or perpendicular) to each of the row vectors of A (since orthogonality is defined as having a dot product of 0).
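A brief sketch of this equivalence (NumPy, with an example matrix and vector assumed here only for illustration): the entries of Ax are exactly the dot products of x with the rows of A, so x lies in the kernel precisely when every such dot product is zero.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0]])
x = np.array([5.0, -4.0, 1.0])   # chosen so that every row of A is orthogonal to x

# A @ x computes the dot product of each row of A with x ...
assert np.allclose(A @ x, [row @ x for row in A])
# ... so x is in the kernel exactly when all of those dot products vanish.
assert np.allclose(A @ x, 0)
```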
The row space, or coimage, of a matrix A is the span of the row vectors of A. By the above reasoning, the kernel of A is the orthogonal complement to the row space. That is, a vector x lies in the kernel of A if and only if it is perpendicular to every vector in the row space of A.
The dimension of the row space of A is called the rank of A, and the dimension of the kernel of A is called the nullity of A. These quantities are related by the rank–nullity theorem

\operatorname{rank}(A) + \operatorname{nullity}(A) = n.
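The relation can be checked on a small example. A minimal sketch (Python with NumPy and SciPy, both assumed available; the matrix is hypothetical):

```python
import numpy as np
from scipy.linalg import null_space

# A maps R^4 to R^3; rank(A) + nullity(A) must equal 4, the number of columns.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # third row = first + second, so the rank is 2

rank = np.linalg.matrix_rank(A)        # dimension of the row space
nullity = null_space(A).shape[1]       # dimension of the kernel
assert rank + nullity == A.shape[1]    # 2 + 2 == 4
```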
The left null space, or cokernel, of a matrix A consists of all column vectors x such that x^T A = 0^T, where T denotes the transpose of a matrix. The left null space of A is the same as the kernel of A^T. The left null space of A is the orthogonal complement to the column space of A, and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of A are the four fundamental subspaces associated with the matrix A.
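As an illustration (SciPy assumed available; the matrix is an arbitrary example), the kernel and the left null space can be obtained by applying the same null-space routine to A and to its transpose:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])   # rank 2, so kernel and left null space are both one-dimensional

kernel = null_space(A)        # vectors x with A x = 0
left_null = null_space(A.T)   # vectors y with A^T y = 0, i.e. y^T A = 0^T

assert np.allclose(A @ kernel, 0)            # kernel vectors are orthogonal to the rows of A
assert np.allclose(left_null.T @ A, 0)       # left null space is orthogonal to the columns of A
print(kernel.shape[1], left_null.shape[1])   # nullity of A and of A^T (both 1 here)
```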
The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:

A\mathbf{x} = \mathbf{b}.

If u and v are two possible solutions to the above equation, then

A(\mathbf{u} - \mathbf{v}) = A\mathbf{u} - A\mathbf{v} = \mathbf{b} - \mathbf{b} = \mathbf{0}.

Thus, the difference of any two solutions to the equation Ax = b lies in the kernel of A.
It follows that any solution to the equation Ax = b can be expressed as the sum of a fixed solution v and an arbitrary element of the kernel. That is, the solution set to the equation Ax = b is

\{ \mathbf{v} + \mathbf{x} : A\mathbf{v} = \mathbf{b} \land A\mathbf{x} = \mathbf{0} \}.

Geometrically, this says that the solution set to Ax = b is the translation of the kernel of A by the vector v. See also Fredholm alternative and flat (geometry).
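A short sketch of this decomposition (NumPy/SciPy, with a made-up consistent system): once one particular solution of Ax = b is known, adding any kernel vector produces another solution.

```python
import numpy as np
from scipy.linalg import lstsq, null_space

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([6.0, 2.0])

x_particular, *_ = lstsq(A, b)   # one solution of A x = b (the minimum-norm one here)
K = null_space(A)                # basis of ker(A); a single column for this A

# A particular solution plus any kernel element solves A x = b as well.
for t in (-3.0, 0.5, 2.0):
    assert np.allclose(A @ (x_particular + t * K[:, 0]), b)
```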
The following is a simple illustration of the computation of the kernel of a matrix (see the section on computation by Gaussian elimination, below, for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel.
Consider the matrix

A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}.

The kernel of this matrix consists of all vectors (x_1, x_2, x_3) \in \mathbb{R}^3 for which

\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},

which can be expressed as a homogeneous system of linear equations involving x_1, x_2, and x_3:

\begin{align} 2x_1 + 3x_2 + 5x_3 &= 0 \\ -4x_1 + 2x_2 + 3x_3 &= 0 \end{align}
The same linear equations can also be written in matrix form as:

\left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right]
Through Gauss–Jordan elimination, the matrix can be reduced to:

\left[\begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array}\right]
Rewriting the matrix in equation form yields:

\begin{align} x_1 &= -\tfrac{1}{16} x_3 \\ x_2 &= -\tfrac{13}{8} x_3 \end{align}
The elements of the kernel can be further expressed in parametric vector form, as follows:

\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1 \end{bmatrix} \quad (\text{where } c = x_3)
Since c is a free variable ranging over all real numbers, this can be expressed equally well as:

\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = c \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix}.

The kernel of A is precisely the solution set to these equations (in this case, a line through the origin in R^3). Here, the vector (−1, −26, 16)^T constitutes a basis of the kernel of A. The nullity of A is 1.
The following dot products are zero:

\begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0 \quad\text{and}\quad \begin{bmatrix} -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0,

which illustrates that vectors in the kernel of A are orthogonal to each of the row vectors of A.
These two (linearly independent) row vectors span the row space of A, a plane orthogonal to the vector (−1, −26, 16)^T.
With the rank 2 of A, the nullity 1 of A, and the dimension 3 of the domain of A, we have an illustration of the rank–nullity theorem.
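The same example can be reproduced by an exact computer algebra computation. A minimal sketch (SymPy assumed available); the returned basis vector is a rational multiple of (−1, −26, 16):

```python
from sympy import Matrix

A = Matrix([[2, 3, 5],
            [-4, 2, 3]])

basis = A.nullspace()                  # exact kernel basis over the rationals
print(basis)                           # [Matrix([[-1/16], [-13/8], [1]])]
assert len(basis) == 1                 # nullity 1
assert A * basis[0] == Matrix([0, 0])  # the basis vector is mapped to zero
```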
A basis of the kernel of a matrix may be computed by Gaussian elimination.
For this purpose, given an m × n matrix A, we construct first the row augmented matrix

\begin{bmatrix} A \\ \hline I \end{bmatrix},

where I is the n × n identity matrix.
Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix

\begin{bmatrix} B \\ \hline C \end{bmatrix}.

A basis of the kernel of A consists of the nonzero columns of C such that the corresponding column of B is a zero column.
In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero.
For example, suppose that

Then
Putting the upper part in column echelon form by column operations on the whole matrix gives
The last three columns of B are zero columns. Therefore, the three last vectors of C are a basis of the kernel of A.
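A minimal sketch of the procedure (Python with SymPy, assumed available): the column operations on the stacked matrix are performed as row operations on its transpose, which is equivalent, and full Gauss–Jordan reduction is used for simplicity even though, as noted above, any column echelon form suffices.

```python
from sympy import Matrix, eye, zeros

def kernel_basis(A: Matrix):
    """Kernel basis of A via column reduction of the stacked matrix [A; I].

    Column operations on [A; I] are carried out as row operations on the
    transpose [A^T | I]; the identity-block rows that end up beside zero
    rows of the reduced A^T block are (transposed) kernel vectors.
    """
    m, n = A.shape
    stacked_T = A.T.row_join(eye(n))        # n x (m + n)
    reduced, _ = stacked_T.rref()           # Gauss-Jordan elimination
    basis = []
    for i in range(n):
        if reduced[i, :m] == zeros(1, m):   # zero column of B <=> zero row here
            basis.append(reduced[i, m:].T)  # the matching column of C
    return basis

A = Matrix([[2, 3, 5],
            [-4, 2, 3]])
for v in kernel_basis(A):
    assert A * v == zeros(2, 1)
    print(v.T)   # a rational multiple of (-1, -26, 16)
```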
Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that

\begin{bmatrix} A \\ \hline I \end{bmatrix}

reduces to

\begin{bmatrix} B \\ \hline C \end{bmatrix}

means that there exists an invertible matrix P such that

\begin{bmatrix} A \\ \hline I \end{bmatrix} P = \begin{bmatrix} B \\ \hline C \end{bmatrix},

with B in column echelon form. Thus AP = B, IP = C, and therefore AC = B. A column vector v belongs to the kernel of A (that is, Av = 0) if and only if Bw = 0, where w = P^{-1}v = C^{-1}v. As B is in column echelon form, Bw = 0 if and only if the nonzero entries of w correspond to the zero columns of B. By multiplying by C, one may deduce that this is the case if and only if v = Cw is a linear combination of the corresponding columns of C.
The problem of computing the kernel on a computer depends on the nature of the coefficients.
If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication).
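For a sense of the exact-arithmetic setting, the sketch below (SymPy, assumed available; it uses plain exact rational elimination rather than the Bareiss or modular methods mentioned above) computes a kernel vector with no rounding at any step and then clears denominators to obtain an integer kernel vector:

```python
from math import lcm
from sympy import Matrix

A = Matrix([[2, 0, 1],
            [0, 3, 1]])                  # exactly known integer coefficients

v = A.nullspace()[0]                     # exact rational kernel vector: (-1/2, -1/3, 1)
v_int = v * lcm(*(int(t.q) for t in v))  # clear denominators: (-3, -2, 6)
assert A * v_int == Matrix([0, 0])
print(v_int.T)
```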
For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware.
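Over a finite field, plain Gaussian elimination fits in a few lines. The following sketch (pure Python, working modulo a prime p; a toy illustration rather than one of the specialized large-scale algorithms mentioned above) computes a kernel basis over GF(p):

```python
def kernel_mod_p(A, p):
    """Basis of the kernel of A over GF(p), by Gaussian elimination modulo a prime p."""
    rows = [[x % p for x in row] for row in A]
    m, n = len(rows), len(rows[0])
    pivot_cols, r = [], 0                    # r is the next pivot row
    for c in range(n):
        pivot = next((i for i in range(r, m) if rows[i][c]), None)
        if pivot is None:
            continue                         # no pivot in this column: free variable
        rows[r], rows[pivot] = rows[pivot], rows[r]
        inv = pow(rows[r][c], -1, p)         # modular inverse of the pivot
        rows[r] = [(x * inv) % p for x in rows[r]]
        for i in range(m):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivot_cols):
        v = [0] * n
        v[free] = 1
        for i, c in enumerate(pivot_cols):
            v[c] = (-rows[i][free]) % p      # back-substitute from the reduced rows
        basis.append(v)
    return basis

A = [[2, 3, 5],
     [4, 2, 3]]
for v in kernel_mod_p(A, 7):                 # kernel of A over GF(7)
    assert all(sum(a * x for a, x in zip(row, v)) % 7 == 0 for row in A)
```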
For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. it has a low condition number.[5]
Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large to obtain a meaningful result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is provided by the LAPACK library.
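In floating point, the usual approach is to compute a numerical null space from the singular value decomposition and to treat singular values below a tolerance as zero. A brief sketch (SciPy, whose null_space routine is built on LAPACK SVD drivers; the nearly rank-deficient matrix and the tolerance are chosen here only for illustration):

```python
import numpy as np
from scipy.linalg import null_space

# A is "numerically" of rank 1: its second row is almost a multiple of the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0 + 1e-12]])

print(null_space(A).shape[1])               # 1: with the default tolerance, A looks full rank
print(null_space(A, rcond=1e-9).shape[1])   # 2: the tiny singular value is now treated as zero
```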