In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix.[1] It works by applying a hash function to the features and using their hash values as indices directly (after a modulo operation), rather than looking the indices up in an associative array. In addition to its use for encoding non-numeric values, feature hashing can also be used for dimensionality reduction.
This trick is often attributed to Weinberger et al. (2009), but there exists a much earlier description of this method published by John Moody in 1989.
In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag of words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets.
Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document and each column is a single feature/word; the entry at position $(i, j)$ in such a matrix captures the frequency (or weight) of the $j$-th term of the vocabulary in document $i$. (An alternative convention swaps the rows and columns of the matrix, but this difference is immaterial.) Typically, these vectors are extremely sparse, in accordance with Zipf's law.
The common approach is to construct, at learning time or prior to that, a dictionary representation of the vocabulary of the training set, and use that to map words to indices. Hash tables and tries are common candidates for dictionary implementation. For example, the three documents

- John likes to watch movies.
- Mary likes movies too.
- John also likes football.
can be converted, using the dictionary
Term | Index
---|---
John | 1
likes | 2
to | 3
watch | 4
movies | 5
Mary | 6
too | 7
also | 8
football | 9
to the term-document matrix
\begin{pmatrix}
\textrm{John} & \textrm{likes} & \textrm{to} & \textrm{watch} & \textrm{movies} & \textrm{Mary} & \textrm{too} & \textrm{also} & \textrm{football} \\
1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1
\end{pmatrix}
(Punctuation was removed, as is usual in document classification and clustering.)
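As a rough illustration, a dictionary-based vectorizer along these lines might look like the following Python sketch (the function names are ours, not from any particular library; indices here are 0-based, unlike the 1-based indices in the table above):

```python
from collections import Counter

def build_dictionary(documents):
    """Assign each distinct token in the training documents a fixed index."""
    dictionary = {}
    for doc in documents:
        for token in doc.split():
            dictionary.setdefault(token, len(dictionary))
    return dictionary

def term_document_matrix(documents, dictionary):
    """One row per document, one column per dictionary term (raw counts)."""
    rows = []
    for doc in documents:
        counts = Counter(doc.split())
        rows.append([counts.get(term, 0) for term in dictionary])
    return rows

# The three example documents, with punctuation already removed.
docs = ["John likes to watch movies",
        "Mary likes movies too",
        "John also likes football"]
vocabulary = build_dictionary(docs)              # {'John': 0, 'likes': 1, ...}
matrix = term_document_matrix(docs, vocabulary)
# matrix == [[1, 1, 1, 1, 1, 0, 0, 0, 0],
#            [0, 1, 0, 0, 1, 1, 1, 0, 0],
#            [1, 1, 0, 0, 0, 0, 0, 1, 1]]
```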
The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows.[2] Conversely, if the vocabulary is kept fixed and not increased with a growing training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine-learned filter. To address this challenge, Yahoo! Research attempted to use feature hashing for their spam filters.[3]
Note that the hashing trick isn't limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features.
Mathematically, a token is an element $t \in T$ of a finite (or countably infinite) set $T$.
Most neural networks can only operate on real vector inputs, so we must construct a "dictionary" function $\phi : T \to \mathbb{R}^n$.

When $T$ is finite, of size $|T| = m \leq n$, we can simply use one-hot encoding: fix an enumeration $T = \{t_1, t_2, \ldots, t_m\}$ and define $\phi(t_i) = e_i$, where $e_i$ is the $i$-th standard basis vector of $\mathbb{R}^n$, i.e. the vector whose $i$-th entry is 1 and all other entries are 0.
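A minimal Python sketch of one-hot encoding under such an enumeration (names are illustrative; indices are 0-based here):

```python
import numpy as np

def one_hot_encoder(tokens, n):
    """Fix an enumeration of the tokens and map the i-th token to e_i in R^n."""
    index = {t: i for i, t in enumerate(tokens)}   # the arbitrary enumeration of T

    def phi(t):
        e = np.zeros(n)
        e[index[t]] = 1.0                          # standard basis vector for token t
        return e

    return phi

phi = one_hot_encoder(["John", "likes", "to", "watch", "movies"], n=8)
phi("likes")   # array([0., 1., 0., 0., 0., 0., 0., 0.])
```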
One-hot encoding is easy to interpret, but it requires one to maintain the arbitrary enumeration of $T$: given a token $t \in T$, computing $\phi(t)$ means looking up the index $i$ assigned to $t$. Thus, to compute $\phi$ efficiently, we need a fast-to-compute bijection $h : T \to \{1, \ldots, m\}$, and then we can define $\phi(t) = e_{h(t)}$.
In fact, we can relax the requirement slightly: it suffices to have a fast-to-compute injection $h : T \to \{1, \ldots, n\}$ and then use $\phi(t) = e_{h(t)}$.
In practice, there is no simple way to construct an efficient injection $h : T \to \{1, \ldots, n\}$. Fortunately, a strict injection is not needed, only an approximate one: when $t \neq t'$, we want $h(t) \neq h(t')$ with high probability, so that $\phi(t) \neq \phi(t')$ with high probability.

At this point, we have just specified that $h$ should be a hashing function.
The basic feature hashing algorithm presented in (Weinberger et al. 2009) is defined as follows.
First, one specifies two hash functions: the kernel hash $h : T \to \{1, 2, \ldots, n\}$ and the sign hash $\zeta : T \to \{-1, +1\}$. The feature hashing function is then defined as

\phi : T \to \mathbb{R}^n, \qquad \phi(t) = \zeta(t)\, e_{h(t)} .

Finally, this is extended to strings of tokens $T^*$ (the set of all finite sequences of tokens from $T$) by

\phi : T^* \to \mathbb{R}^n, \qquad \phi(t_1, \ldots, t_k) = \sum_{j=1}^{k} \phi(t_j) .

Equivalently, $\phi(t_1, \ldots, t_k) = \sum_{j=1}^{k} \zeta(t_j)\, e_{h(t_j)}$.
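A minimal Python sketch of this construction (names are ours; Python's built-in hash is used purely for illustration and is not stable across processes, whereas practical implementations typically use a dedicated hash such as MurmurHash):

```python
import numpy as np

def kernel_hash(t, n):
    """h : T -> {0, ..., n-1} (0-based here instead of {1, ..., n})."""
    return hash(t) % n

def sign_hash(t):
    """zeta : T -> {-1, +1}, derived from a second hash of the token."""
    return 1 if hash(("sign", t)) % 2 == 0 else -1

def phi_token(t, n):
    """phi(t) = zeta(t) * e_{h(t)}."""
    e = np.zeros(n)
    e[kernel_hash(t, n)] = sign_hash(t)
    return e

def phi_string(tokens, n):
    """Extension to strings of tokens: phi(t_1, ..., t_k) = sum_j phi(t_j)."""
    x = np.zeros(n)
    for t in tokens:
        x[kernel_hash(t, n)] += sign_hash(t)
    return x

x = phi_string("John likes to watch movies".split(), n=16)
```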
We want to say something about the geometric properties of $\phi$, but $T$ by itself is just a set of tokens, with no geometry of its own. To fix this, we lift $T$ into $\mathbb{R}^T$ (identifying each token $t$ with its indicator vector $\delta_t$) and lift $\phi$ from $\phi : T \to \mathbb{R}^n$ to $\phi : \mathbb{R}^T \to \mathbb{R}^n$ by linear extension:

\phi\big((x_t)_{t \in T}\big) = \sum_{t \in T} x_t\, \zeta(t)\, e_{h(t)} .

To avoid genuinely infinite sums when $T$ is infinite, $\mathbb{R}^T$ is restricted to vectors of finite support: for every $(x_t)_{t \in T} \in \mathbb{R}^T$, only finitely many entries of $(x_t)_{t \in T}$ are nonzero.
Define an inner product on $\mathbb{R}^T$ in the obvious way: $\langle \delta_t, \delta_{t'} \rangle = 1$ if $t = t'$ and $0$ otherwise, extended bilinearly to all of $\mathbb{R}^T$. (If $T$ is infinite, this inner product space is not complete; taking its completion would yield a Hilbert space, which allows well-behaved infinite sums.)
Now we have an inner product space, with enough structure to describe the geometry of the feature hashing function $\phi : \mathbb{R}^T \to \mathbb{R}^n$.
First, we can see why $h$ is called a "kernel hash": it allows us to define a kernel $K : T \times T \to \mathbb{R}$ by $K(t, t') = \langle e_{h(t)}, e_{h(t')} \rangle$. In the language of the kernel trick, $K$ is the kernel generated by the feature map $t \mapsto e_{h(t)}$. Note that this is not the feature map we are actually using, which is $\phi(t) = \zeta(t)\, e_{h(t)}$; the corresponding kernel is $K_\zeta : T \times T \to \mathbb{R}$ with $K_\zeta(t, t') = \langle \zeta(t)\, e_{h(t)}, \zeta(t')\, e_{h(t')} \rangle$.

The benefit of augmenting the kernel hash $h$ with the binary hash $\zeta$ is the following theorem, which states that $\phi$ is an isometry "on average".
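One way to state this (a sketch reconstructed from the definitions above, with the entries of $\zeta$ treated as independent unbiased random signs): for all $x, x' \in \mathbb{R}^T$ with finite support,

\mathbb{E}_\zeta\big[\langle \phi(x), \phi(x') \rangle\big] = \langle x, x' \rangle .

Indeed, expanding the definitions gives

\langle \phi(x), \phi(x') \rangle = \sum_{t,\, t'} x_t\, x'_{t'}\, \zeta(t)\, \zeta(t')\, \langle e_{h(t)}, e_{h(t')} \rangle ,

and since $\mathbb{E}[\zeta(t)\zeta(t')]$ equals $1$ when $t = t'$ and $0$ otherwise, only the diagonal terms survive in expectation, leaving $\sum_{t} x_t x'_t = \langle x, x' \rangle$.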
The above statement and proof interpret the binary hash function $\zeta$ not as a deterministic function of type $T \to \{-1, +1\}$, but as a random binary vector in $\{-1, +1\}^T$ with unbiased entries, meaning that $\Pr(\zeta(t) = +1) = \Pr(\zeta(t) = -1) = \frac{1}{2}$ for any $t \in T$.
This is a good intuitive picture, though not rigorous. For a rigorous statement and proof, see
Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function to the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices. Here, we assume that feature actually means feature vector.
For example, suppose the input features are ["cat", "dog", "cat"], the hash function gives $h(x_f) = 1$ when $x_f$ is "cat" and $h(x_f) = 2$ when $x_f$ is "dog", and the output vector has dimension $N = 4$. Then the resulting vector is $[0, 2, 1, 0]$ (with 0-based indexing): index 1 is updated twice for "cat" and index 2 once for "dog".
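A minimal Python sketch of such a vectorizer, including the optional signed update using $\zeta$ (the names are ours, and Python's built-in hash stands in for a proper hash function):

```python
import numpy as np

def hashing_vectorizer(features, N, signed=False):
    """Map a list of raw features (e.g. tokens) to a length-N vector."""
    x = np.zeros(N)
    for f in features:
        idx = hash(f) % N                      # hash value used directly as the index
        if signed:
            # Optional signed update with a second hash, as in the (h, zeta) scheme above.
            sign = 1 if hash(("sign", f)) % 2 == 0 else -1
            x[idx] += sign
        else:
            x[idx] += 1                        # plain count update
    return x

x = hashing_vectorizer(["cat", "dog", "cat"], N=4)
```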
A vectorizer like the one sketched above converts each sample into a full vector at once. An optimized version would instead generate only a stream of $(h, \zeta)$ pairs and let the learning and prediction algorithms consume such streams.
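A possible sketch of that optimization, under the same illustrative naming:

```python
def hashed_stream(features, N):
    """Yield (index, sign) pairs instead of materializing the full vector."""
    for f in features:
        idx = hash(f) % N
        sign = 1 if hash(("sign", f)) % 2 == 0 else -1
        yield idx, sign

def linear_score(features, weights, N):
    """Prediction of a linear model that consumes the stream directly."""
    return sum(sign * weights[idx] for idx, sign in hashed_stream(features, N))
```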
Feature hashing generally suffers from hash collisions, which means that there exist pairs of different tokens mapped to the same vector: $t \neq t'$ but $\phi(t) = \phi(t') = v$. A machine learning model trained on feature-hashed data then cannot distinguish $t$ from $t'$, since both are represented by the same vector $v$.

If $t'$ is rare, the resulting performance degradation is small, as the model can simply treat every occurrence of $v$ in the dataset as standing for $t$.
To handle this, one can train supervised hashing functions that avoid mapping common tokens to the same feature vectors.[5]
Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function.[2]
Weinberger et al. (2009) applied their version of feature hashing to multi-task learning, in particular spam filtering, where the input features are (user, feature) pairs, so that a single parameter vector captures per-user spam filters as well as a global filter for several hundred thousand users. They found that the accuracy of the filter improved.
Chen et al. (2015) combined the idea of feature hashing with sparse matrices to construct "virtual matrices": large matrices with small storage requirements. The idea is to treat a matrix $M \in \mathbb{R}^{n \times n}$ as a dictionary whose keys are index pairs in $n \times n$ and whose values are in $\mathbb{R}$. Then, as usual with hashed dictionaries, one can use a hash function $h : \mathbb{N} \times \mathbb{N} \to \{1, \ldots, m\}$ and represent $M$ as a vector in $\mathbb{R}^m$, no matter how large $n$ is.
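A minimal sketch of the idea (an illustrative class of our own, not the authors' code): entries are read through a hash of the index pair into a fixed-size parameter vector.

```python
import numpy as np

class VirtualMatrix:
    """An n-by-n matrix backed by only m real parameters, addressed by hashing (i, j)."""
    def __init__(self, n, m, seed=0):
        self.n = n
        self.m = m
        self.params = np.random.default_rng(seed).standard_normal(m)

    def __getitem__(self, ij):
        i, j = ij
        return self.params[hash((i, j)) % self.m]   # h : N x N -> {0, ..., m-1}

M = VirtualMatrix(n=10_000, m=256)
M[123, 4567]   # an entry of a 10,000 x 10,000 matrix stored in 256 parameters
```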
Implementations of the hashing trick are present in: