In information retrieval, tf–idf (also TF*IDF, TFIDF, TF–IDF, or Tf–idf), short for term frequency–inverse document frequency, is a measure of importance of a word to a document in a collection or corpus, adjusted for the fact that some words appear more frequently in general.[1] Like the bag-of-words model, it models a document as a multiset of words, without word order. It is a refinement over the simple bag-of-words model, by allowing the weight of words to depend on the rest of the corpus.
It is often used as a weighting factor in information retrieval searches, text mining, and user modeling. A survey conducted in 2015 showed that 83% of text-based recommender systems in digital libraries used tf–idf.[2] Variations of the tf–idf weighting scheme were often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.
One of the simplest ranking functions is computed by summing the tf–idf for each query term; many more sophisticated ranking functions are variants of this simple model.
Karen Spärck Jones (1972) conceived a statistical interpretation of term-specificity called Inverse Document Frequency (idf), which became a cornerstone of term weighting:[3] For example, the df (document frequency) and idf for some words in Shakespeare's 37 plays are as follows:[4]
Word | df | idf
Romeo | 1 | 1.57
salad | 2 | 1.27
Falstaff | 4 | 0.967
forest | 12 | 0.489
battle | 21 | 0.246
wit | 34 | 0.037
fool | 36 | 0.012
good | 37 | 0
sweet | 37 | 0
We see that "Romeo", "Falstaff", and "salad" appear in very few plays, so seeing one of these words gives a good idea of which play it might be. In contrast, "good" and "sweet" appear in every play and are completely uninformative as to which play it is.
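These idf values can be reproduced directly from the document frequencies; a minimal sketch, assuming base-10 logarithms and N = 37 plays:

```python
import math

N = 37  # total number of plays in the collection
df = {"Romeo": 1, "salad": 2, "Falstaff": 4, "forest": 12,
      "battle": 21, "wit": 34, "fool": 36, "good": 37, "sweet": 37}

for word, n_t in df.items():
    # idf(t) = log10(N / df(t)); rarer words receive higher weights
    print(f"{word:10s} df={n_t:2d} idf={math.log10(N / n_t):.3f}")
```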
Variants of term frequency (tf) weight:

weighting scheme | tf weight
binary | $0, 1$
raw count | $f_{t,d}$
term frequency | $f_{t,d} \big/ \sum_{t' \in d} f_{t',d}$
log normalization | $\log(1 + f_{t,d})$
double normalization 0.5 | $0.5 + 0.5 \cdot \dfrac{f_{t,d}}{\max_{t' \in d} f_{t',d}}$
double normalization K | $K + (1 - K) \cdot \dfrac{f_{t,d}}{\max_{t' \in d} f_{t',d}}$
Term frequency, $\mathrm{tf}(t,d)$, is the relative frequency of term $t$ within document $d$,

$$\mathrm{tf}(t,d) = \frac{f_{t,d}}{\sum_{t' \in d} f_{t',d}},$$

where $f_{t,d}$ is the raw count of term $t$ in document $d$, and the denominator is simply the total number of terms in $d$. An augmented ("double normalized") variant divides the raw frequency by that of the most frequent term in the document, to prevent a bias towards longer documents:

$$\mathrm{tf}(t,d) = 0.5 + 0.5 \cdot \frac{f_{t,d}}{\max\{f_{t',d} : t' \in d\}}$$
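A minimal sketch of these tf variants in Python, assuming a document is given as a list of tokens (the function and parameter names are illustrative, not a standard API):

```python
import math
from collections import Counter

def tf_variants(tokens, K=0.5):
    """Several term-frequency weightings for one tokenized document."""
    counts = Counter(tokens)          # raw counts f_{t,d}
    total = sum(counts.values())      # total number of terms in d
    max_count = max(counts.values())  # count of the most frequent term
    return {
        t: {
            "binary": 1,
            "raw_count": f,
            "term_frequency": f / total,
            "log_normalization": math.log(1 + f),
            "double_normalization_K": K + (1 - K) * f / max_count,
        }
        for t, f in counts.items()
    }

print(tf_variants("this is a sample this".split())["this"])
```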
Variants of inverse document frequency (idf) weight, with $n_t = |\{d \in D : t \in d\}|$ the number of documents containing term $t$ and $N$ the total number of documents:

weighting scheme | idf weight
unary | $1$
inverse document frequency | $\log \dfrac{N}{n_t} = -\log \dfrac{n_t}{N}$
inverse document frequency smooth | $\log\left(\dfrac{N}{1 + n_t}\right) + 1$
inverse document frequency max | $\log\left(\dfrac{\max_{t' \in d} n_{t'}}{1 + n_t}\right)$
probabilistic inverse document frequency | $\log \dfrac{N - n_t}{n_t}$
The inverse document frequency is a measure of how much information the word provides, i.e., how common or rare it is across all documents. It is the logarithmically scaled inverse fraction of the documents that contain the word (obtained by dividing the total number of documents by the number of documents containing the term, and then taking the logarithm of that quotient):
$$\mathrm{idf}(t,D) = \log \frac{N}{|\{d \in D : t \in d\}|}$$

with
- $N$: total number of documents in the corpus, $N = |D|$
- $|\{d \in D : t \in d\}|$: number of documents where the term $t$ appears (i.e., $\mathrm{tf}(t,d) \neq 0$). If the term is not in the corpus, this leads to a division by zero; it is therefore common to adjust the numerator to $1 + N$ and the denominator to $1 + |\{d \in D : t \in d\}|$.
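A minimal sketch of the idf computation, including the add-one smoothing just mentioned (the corpus is represented simply as a list of token sets; names are illustrative):

```python
import math

def idf(term, corpus, smooth=False):
    """Inverse document frequency of `term` over `corpus`, a list of token sets."""
    N = len(corpus)
    n_t = sum(1 for doc in corpus if term in doc)  # document frequency
    if smooth:
        # add-one smoothing keeps the value defined for unseen terms
        return math.log((1 + N) / (1 + n_t))
    return math.log(N / n_t)  # natural log; the base only rescales the weights

corpus = [{"this", "is", "a", "sample"}, {"this", "is", "another", "example"}]
print(idf("example", corpus))              # log(2/1)
print(idf("unseen", corpus, smooth=True))  # defined even for absent terms
```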
Recommended tf–idf weighting schemes:

weighting scheme | tf–idf
count-idf | $f_{t,d} \cdot \log \dfrac{N}{n_t}$
double normalization-idf | $\left(0.5 + 0.5 \dfrac{f_{t,d}}{\max_{t'} f_{t',d}}\right) \cdot \log \dfrac{N}{n_t}$
log normalization-idf | $(1 + \log f_{t,d}) \cdot \log \dfrac{N}{n_t}$
Then tf–idf is calculated as
$$\mathrm{tfidf}(t,d,D) = \mathrm{tf}(t,d) \cdot \mathrm{idf}(t,D)$$
A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms. Since the ratio inside the idf's log function is always greater than or equal to 1, the value of idf (and tf–idf) is greater than or equal to 0. As a term appears in more documents, the ratio inside the logarithm approaches 1, bringing the idf and tf–idf closer to 0.
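As a concrete illustration, here is a minimal sketch that combines the two factors and uses them in the simple ranking function mentioned earlier (summing tf–idf over the query terms). The tokenization and function names are illustrative assumptions, not a standard library API:

```python
import math
from collections import Counter

def tf(term, doc):
    """Relative frequency of `term` in `doc`, a list of tokens."""
    return Counter(doc)[term] / len(doc)

def idf(term, corpus):
    """Logarithmically scaled inverse fraction of documents containing `term`."""
    n_t = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / n_t) if n_t else 0.0  # absent terms contribute nothing

def tfidf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)

def score(query, doc, corpus):
    """Simple ranking function: sum of tf-idf over the query terms."""
    return sum(tfidf(t, doc, corpus) for t in query)

corpus = ["the cat sat on the mat".split(),
          "the dog chased the cat".split(),
          "quantum field theory lecture notes".split()]
query = "cat mat".split()
# Rank document indices by descending tf-idf score for the query
print(sorted(range(len(corpus)), key=lambda i: -score(query, corpus[i], corpus)))
```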
Idf was introduced as "term specificity" by Karen Spärck Jones in a 1972 paper. Although it has worked well as a heuristic, its theoretical foundations have been troublesome for at least three decades afterward, with many researchers trying to find information theoretic justifications for it.[7]
Spärck Jones's own explanation did not propose much theory, aside from a connection to Zipf's law. Attempts have been made to put idf on a probabilistic footing,[8] by estimating the probability that a given document contains a term as the relative document frequency,
$$P(t \mid D) = \frac{|\{d \in D : t \in d\}|}{N},$$
so that we can define idf as
$$\begin{aligned} \mathrm{idf} &= -\log P(t \mid D) \\ &= \log \frac{1}{P(t \mid D)} \\ &= \log \frac{N}{|\{d \in D : t \in d\}|} \end{aligned}$$
Namely, the inverse document frequency is the logarithm of "inverse" relative document frequency.
This probabilistic interpretation in turn takes the same form as that of self-information. However, applying such information-theoretic notions to problems in information retrieval leads to problems when trying to define the appropriate event spaces for the required probability distributions: not only documents need to be taken into account, but also queries and terms.
Both term frequency and inverse document frequency can be formulated in terms of information theory; it helps to understand why their product has a meaning in terms of the joint informational content of a document. A characteristic assumption about the distribution $p(d,t)$ is that

$$p(d \mid t) = \frac{1}{|\{d \in D : t \in d\}|},$$

i.e., a term points with equal probability to each of the documents that contain it.
This assumption and its implications, according to Aizawa: "represent the heuristic that tf–idf employs."[9]
The conditional entropy of a "randomly chosen" document in the corpus $D$, conditional on the fact that it contains a specific term $t$ (and assuming that all documents have equal probability of being chosen), is:

$$H({\cal D} \mid {\cal T} = t) = -\sum_d p_{d|t} \log p_{d|t} = -\log \frac{1}{|\{d \in D : t \in d\}|} = \log \frac{|\{d \in D : t \in d\}|}{|D|} + \log |D| = -\mathrm{idf}(t) + \log |D|$$
In terms of notation, ${\cal D}$ and ${\cal T}$ are "random variables" corresponding to drawing a document or a term, respectively. The mutual information can be expressed as

$$M({\cal T}; {\cal D}) = H({\cal D}) - H({\cal D} \mid {\cal T}) = \sum_t p_t \cdot \left(H({\cal D}) - H({\cal D} \mid {\cal T} = t)\right) = \sum_t p_t \cdot \mathrm{idf}(t)$$
The last step is to expand $p_t$, the unconditional probability of drawing a term, with respect to the (random) choice of a document, to obtain:

$$M({\cal T}; {\cal D}) = \sum_{t,d} p_{t|d} \cdot p_d \cdot \mathrm{idf}(t) = \sum_{t,d} \mathrm{tf}(t,d) \cdot \frac{1}{|D|} \cdot \mathrm{idf}(t) = \frac{1}{|D|} \sum_{t,d} \mathrm{tf}(t,d) \cdot \mathrm{idf}(t).$$
This expression shows that summing the tf–idf of all possible terms and documents recovers the mutual information between documents and terms, taking into account all the specificities of their joint distribution.[9] Each tf–idf hence carries the "bit of information" attached to a term–document pair.
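The expansion in the last step is straightforward to check numerically. A small sketch, assuming $p_{t|d} = \mathrm{tf}(t,d)$ (relative frequencies) and $p_d = 1/|D|$ as in the derivation above:

```python
import math
from collections import Counter

corpus = ["this is a a sample".split(),
          "this is another another example example example".split()]
D = len(corpus)
vocab = {t for doc in corpus for t in doc}

def tf(t, doc):
    return Counter(doc)[t] / len(doc)

def idf(t):
    return math.log(D / sum(1 for doc in corpus if t in doc))

# sum_t p_t * idf(t), where p_t = (1/|D|) * sum_d tf(t,d)
lhs = sum((sum(tf(t, doc) for doc in corpus) / D) * idf(t) for t in vocab)
# (1/|D|) * sum over all term-document pairs of tf(t,d) * idf(t)
rhs = sum(tf(t, doc) * idf(t) for t in vocab for doc in corpus) / D
print(lhs, rhs)  # equal up to floating-point rounding
```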
Suppose that we have term count tables of a corpus consisting of only two documents, as listed below.
Document 1
Term | Term Count
this | 1
is | 1
a | 2
sample | 1

Document 2
Term | Term Count
this | 1
is | 1
another | 2
example | 3
The calculation of tf–idf for the term "this" is performed as follows:
In its raw count form, tf is just the frequency of "this" in each document; the word "this" appears once in each. But because document 2 has more words, its relative frequency is smaller.
$$\mathrm{tf}(\text{"this"}, d_1) = \frac{1}{5} = 0.2$$

$$\mathrm{tf}(\text{"this"}, d_2) = \frac{1}{7} \approx 0.14$$
An idf is constant per corpus and accounts for the ratio of documents that include the word "this". In this case, we have a corpus of two documents, both of which include the word "this".
$$\mathrm{idf}(\text{"this"}, D) = \log\left(\frac{2}{2}\right) = 0$$
So tf–idf is zero for the word "this", which implies that the word is not very informative as it appears in all documents.
$$\mathrm{tfidf}(\text{"this"}, d_1, D) = 0.2 \times 0 = 0$$

$$\mathrm{tfidf}(\text{"this"}, d_2, D) = 0.14 \times 0 = 0$$
The word "example" is more interesting - it occurs three times, but only in the second document:
$$\mathrm{tf}(\text{"example"}, d_1) = \frac{0}{5} = 0$$

$$\mathrm{tf}(\text{"example"}, d_2) = \frac{3}{7} \approx 0.429$$
$$\mathrm{idf}(\text{"example"}, D) = \log\left(\frac{2}{1}\right) = 0.301$$
Finally,
$$\mathrm{tfidf}(\text{"example"}, d_1, D) = \mathrm{tf}(\text{"example"}, d_1) \times \mathrm{idf}(\text{"example"}, D) = 0 \times 0.301 = 0$$

$$\mathrm{tfidf}(\text{"example"}, d_2, D) = \mathrm{tf}(\text{"example"}, d_2) \times \mathrm{idf}(\text{"example"}, D) = 0.429 \times 0.301 \approx 0.129$$
(using the base 10 logarithm).
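These numbers can be reproduced with a few lines of code; a sketch using base-10 logarithms, as in the text:

```python
import math
from collections import Counter

d1 = "this is a a sample".split()                                # document 1 (5 terms)
d2 = "this is another another example example example".split()   # document 2 (7 terms)
corpus = [d1, d2]

def tfidf(term, doc):
    tf = Counter(doc)[term] / len(doc)            # relative term frequency
    n_t = sum(1 for d in corpus if term in d)     # document frequency
    return tf * math.log10(len(corpus) / n_t)     # base-10 idf

print(tfidf("this", d1), tfidf("this", d2))   # both 0.0
print(round(tfidf("example", d2), 3))         # approximately 0.129
```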
The idea behind tf–idf also applies to entities other than terms. In 1998, the concept of idf was applied to citations.[10] The authors argued that "if a very uncommon citation is shared by two documents, this should be weighted more highly than a citation made by a large number of documents". In addition, tf–idf was applied to "visual words" for the purpose of object matching in videos,[11] and to entire sentences.[12] However, the concept of tf–idf did not prove to be more effective in all cases than a plain tf scheme (without idf). When tf–idf was applied to citations, researchers could find no improvement over a simple citation-count weight that had no idf component.[13]
A number of term-weighting schemes have been derived from tf–idf. One of them is TF–PDF (term frequency * proportional document frequency).[14] TF–PDF was introduced in 2001 in the context of identifying emerging topics in the media. The PDF component measures the difference of how often a term occurs in different domains. Another derivative is TF–IDuF.[15] In TF–IDuF, idf is not calculated based on the document corpus that is to be searched or recommended; instead, it is calculated on users' personal document collections. The authors report that TF–IDuF was equally effective as tf–idf but could also be applied in situations when, e.g., a user modeling system has no access to a global document corpus.