Cover's theorem is a statement in computational learning theory and one of the primary theoretical motivations for the use of non-linear kernel methods in machine learning applications. It is named after the information theorist Thomas M. Cover, who stated it in 1965, referring to it as a counting function theorem.
The theorem expresses the number of homogeneously linearly separable sets of N points in D dimensions as an explicit counting function C(N,D) of N and D.
It requires, as a necessary and sufficient condition, that the points are in general position. Simply put, this means that the points should be as linearly independent (non-aligned) as possible. This condition is satisfied almost surely ("with probability 1") for random point sets, but it may easily be violated for real data, which are often structured along lower-dimensional manifolds within the data space.
The function C(N,D) counts the homogeneously linearly separable sets (equivalently, dichotomies) of N points in general position in D dimensions. For N ≤ D it attains its maximum possible value 2^N; that is, every two-class labeling of the N points is homogeneously linearly separable as long as N ≤ D. For N > D, only a fraction C(N,D)/2^N < 1 of the labelings is separable, and this fraction shrinks rapidly as N grows relative to D.
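The behaviour of the counting function can be checked directly; the following is a minimal sketch in Python using the standard closed form C(N,D) = 2 Σ_{k=0}^{D−1} (N−1 choose k) (the function name cover_count is ours):

```python
from math import comb

def cover_count(n: int, d: int) -> int:
    """Cover's counting function C(N, D): the number of homogeneously
    linearly separable dichotomies of N points in general position
    in D dimensions, via the closed form 2 * sum_{k<D} comb(N-1, k)."""
    return 2 * sum(comb(n - 1, k) for k in range(d))

# While N <= D, every one of the 2^N dichotomies is separable:
for n in range(1, 6):
    assert cover_count(n, 5) == 2 ** n

# For N > D, the separable fraction C(N, D) / 2^N drops below 1
# and keeps shrinking as N grows relative to D:
fractions = [cover_count(n, 5) / 2 ** n for n in (5, 10, 20, 40)]
assert all(a > b for a, b in zip(fractions, fractions[1:]))
```

A notable intermediate value is N = 2D, where exactly half of all dichotomies are separable (cover_count(10, 5) / 2**10 equals 0.5).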
A consequence of the theorem is that given a set of training data that is not linearly separable, one can with high probability transform it into a training set that is linearly separable by projecting it into a higher-dimensional space via some non-linear transformation, or, in Cover's words: "A complex pattern-classification problem, cast in a high-dimensional space nonlinearly, is more likely to be linearly separable than in a low-dimensional space, provided that the space is not densely populated."
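The classic illustration is the XOR configuration: four points in the plane whose labeling is not linearly separable, but becomes separable after a simple non-linear lift. A minimal sketch (the particular product feature used here is one illustrative choice of non-linear map, not the only one):

```python
# XOR-like labeling of four points in the plane: not linearly separable,
# since each class occupies a pair of opposite corners of the unit square.
points = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
labels = [+1, +1, -1, -1]

# Non-linear map into 3 dimensions: (x, y) -> (x, y, (2x - 1) * (2y - 1)).
# The added product coordinate is positive on one class, negative on the other.
lifted = [(x, y, (2 * x - 1) * (2 * y - 1)) for x, y in points]

# In the lifted space, the plane z = 0 already separates the two classes:
assert all(z > 0 for (_, _, z), lab in zip(lifted, labels) if lab == +1)
assert all(z < 0 for (_, _, z), lab in zip(lifted, labels) if lab == -1)
```

No hyperplane in the original two dimensions achieves this separation, which is exactly the situation the theorem says a higher-dimensional non-linear embedding can repair.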
The proof of Cover's counting function theorem can be obtained from the recursive relation

C(N+1,D) = C(N,D) + C(N,D-1).

To show that, with D fixed, consider N points in general position together with one additional point. Every separable dichotomy of the N+1 points restricts to a separable dichotomy of the first N points. Conversely, a separable dichotomy of the N points extends to both possible labels of the new point exactly when it can be realized by a hyperplane passing through the new point; constraining the hyperplane to contain a fixed point reduces the counting problem to one in dimension D-1, which contributes the extra term C(N,D-1). Solving the recursion with the boundary values C(1,D) = 2 for D ≥ 1 (a single point admits both labels) and C(N,0) = 0 yields the closed form

C(N,D) = 2 Σ_{k=0}^{D−1} (N−1 choose k),

a sum of binomial coefficients over the N−1 points other than the added one.
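The recursion with these boundary values can be cross-checked against the closed form numerically; a small sketch (the function name c_rec is ours):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def c_rec(n: int, d: int) -> int:
    """C(N, D) computed directly from the recursion
    C(N+1, D) = C(N, D) + C(N, D-1) with boundary values
    C(1, D) = 2 for D >= 1 and C(N, 0) = 0."""
    if d == 0:
        return 0
    if n == 1:
        return 2
    return c_rec(n - 1, d) + c_rec(n - 1, d - 1)

# The recursion reproduces the closed form 2 * sum_{k<D} comb(N-1, k):
for n in range(1, 25):
    for d in range(0, 12):
        assert c_rec(n, d) == 2 * sum(comb(n - 1, k) for k in range(d))
```

Memoization (lru_cache) keeps the recursion linear in N·D rather than exponential, mirroring how the Pascal-style relation builds each value from the row before it.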