Kneser–Ney smoothing explained

Kneser–Ney smoothing, also known as Kneser–Essen–Ney smoothing, is a method primarily used to calculate the probability distribution of n-grams in a document based on their histories.[1] It is widely considered the most effective method of smoothing due to its use of absolute discounting: a fixed value is subtracted from the count of each observed n-gram, and the probability mass freed in this way is redistributed to lower-order terms, which handle n-grams with lower frequencies. This approach has been considered equally effective for both higher- and lower-order n-grams. The method was proposed in a 1994 paper by Reinhard Kneser, Ute Essen and Hermann Ney.[2]

A common example that illustrates the concept behind this method is the frequency of the bigram "San Francisco". If it appears several times in a training corpus, the frequency of the unigram "Francisco" will also be high. Relying on only the unigram frequency to predict the frequencies of n-grams leads to skewed results;[3] since "Francisco" occurs almost exclusively after "San", its high unigram count says little about how likely it is to appear in other contexts. Kneser–Ney smoothing corrects this by considering the frequency of the unigram in relation to the possible words preceding it.

Method

Let c(w, w') be the number of occurrences of the word w followed by the word w' in the corpus.

The equation for bigram probabilities is as follows:

p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}} \, p_{KN}(w_i)   [4]

where the unigram probability p_{KN}(w_i) depends on how likely it is to see the word w_i in an unfamiliar context, which is estimated as the number of distinct words it appears after, divided by the number of distinct pairs of consecutive words in the corpus:

p_{KN}(w_i) = \frac{|\{ w' : 0 < c(w', w_i) \}|}{|\{ (w', w'') : 0 < c(w', w'') \}|}

Note that p_{KN} is a proper distribution, as the values defined in the above way are non-negative and sum to one.
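To see that these values sum to one, note that summing the numerator over all w_i counts every distinct pair of consecutive words in the corpus exactly once:

\sum_{w_i} p_{KN}(w_i) = \frac{\sum_{w_i} |\{ w' : 0 < c(w', w_i) \}|}{|\{ (w', w'') : 0 < c(w', w'') \}|} = \frac{|\{ (w', w'') : 0 < c(w', w'') \}|}{|\{ (w', w'') : 0 < c(w', w'') \}|} = 1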

The parameter \delta is a constant which denotes the discount value subtracted from the count of each n-gram, usually between 0 and 1.

The value of the normalizing constant \lambda_{w_{i-1}} is calculated to make the sum of conditional probabilities p_{KN}(w_i \mid w_{i-1}) over all w_i equal to one. Observe that (provided \delta < 1) for each w_i which occurs at least once in the context of w_{i-1} in the corpus we discount the probability by exactly the same constant amount \delta / \left( \sum_{w'} c(w_{i-1}, w') \right), so the total discount depends linearly on the number of unique words w_i that can occur after w_{i-1}. This total discount is a budget we can spread over all p_{KN}(w_i \mid w_{i-1}) proportionally to p_{KN}(w_i). As the values of p_{KN}(w_i) sum to one, we can simply define \lambda_{w_{i-1}} to be equal to this total discount:

\lambda_{w_{i-1}} = \frac{\delta}{\sum_{w'} c(w_{i-1}, w')} \, |\{ w' : 0 < c(w_{i-1}, w') \}|
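To make the bigram case concrete, the following is a minimal Python sketch of the estimator defined above; the toy corpus, the discount value \delta = 0.75 and the function names are illustrative assumptions rather than anything prescribed by the cited sources.

from collections import Counter

# Toy corpus; in practice the counts would come from a large training corpus.
corpus = "san francisco is in california san francisco is foggy".split()
bigrams = list(zip(corpus, corpus[1:]))

bigram_counts = Counter(bigrams)                           # c(w_{i-1}, w_i)
context_totals = Counter(w_prev for w_prev, _ in bigrams)  # sum_{w'} c(w_{i-1}, w')
continuation = Counter(w for _, w in set(bigrams))         # |{w' : 0 < c(w', w_i)}|
followers = Counter(w_prev for w_prev, _ in set(bigrams))  # |{w' : 0 < c(w_{i-1}, w')}|
distinct_bigrams = len(set(bigrams))                       # |{(w', w'') : 0 < c(w', w'')}|

DELTA = 0.75  # absolute discount, between 0 and 1

def p_kn_unigram(w):
    # Continuation probability: distinct left contexts of w over distinct bigram types.
    return continuation[w] / distinct_bigrams

def p_kn_bigram(w, w_prev):
    total = context_totals[w_prev]
    discounted = max(bigram_counts[(w_prev, w)] - DELTA, 0) / total
    lam = DELTA * followers[w_prev] / total  # normalizing constant lambda_{w_{i-1}}
    return discounted + lam * p_kn_unigram(w)

print(p_kn_bigram("francisco", "san"))  # high: "francisco" nearly always follows "san"
print(p_kn_bigram("francisco", "is"))   # low: only the lambda-weighted continuation term remains

In the second call the bigram count is zero, so only the term \lambda_{w_{i-1}} p_{KN}(w_i) contributes, and the result stays small because "francisco" is preceded by only one distinct word in the toy corpus.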

This equation can be extended to n-grams. Let w_{i-n+1}^{i-1} be the n-1 words before w_i:

p_{KN}(w_i \mid w_{i-n+1}^{i-1}) = \frac{\max(c(w_{i-n+1}^{i-1}, w_i) - \delta, 0)}{\sum_{w'} c(w_{i-n+1}^{i-1}, w')} + \delta \frac{|\{ w' : 0 < c(w_{i-n+1}^{i-1}, w') \}|}{\sum_{w'} c(w_{i-n+1}^{i-1}, w')} \, p_{KN}(w_i \mid w_{i-n+2}^{i-1})   [5]

This model uses the concept of absolute-discounting interpolation, which incorporates information from higher- and lower-order language models. The addition of the term for lower-order n-grams adds more weight to the overall probability when the count for the higher-order n-gram is zero.[6] Similarly, the weight of the lower-order model decreases when the count of the n-gram is non-zero.
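A compact recursive sketch of this interpolation, in Python, is given below. The toy corpus, the fixed discount \delta = 0.75 and the simple fallback for unseen contexts are assumptions made for illustration; production implementations such as KenLM use different data structures and estimate the discount from the training counts rather than fixing it.

from collections import Counter

corpus = "san francisco is in california san francisco is foggy".split()
DELTA = 0.75  # absolute discount

# Bigram and trigram counts for a small trigram model.
counts = {n: Counter(zip(*(corpus[i:] for i in range(n)))) for n in (2, 3)}
continuation = Counter(w for _, w in counts[2])  # distinct left contexts per word
distinct_bigrams = len(counts[2])

def p_kn(word, context):
    # p_KN(word | context); `context` is a tuple of up to n-1 preceding words.
    if not context:  # base case: continuation unigram probability
        return continuation[word] / distinct_bigrams
    n = len(context) + 1
    total = sum(c for gram, c in counts[n].items() if gram[:-1] == context)
    if total == 0:   # context never seen: fall back to the shorter context
        return p_kn(word, context[1:])
    followers = sum(1 for gram in counts[n] if gram[:-1] == context)
    lam = DELTA * followers / total  # normalizing constant for this context
    discounted = max(counts[n].get(context + (word,), 0) - DELTA, 0) / total
    return discounted + lam * p_kn(word, context[1:])

print(p_kn("is", ("san", "francisco")))  # observed trigram: most mass from the top order
print(p_kn("is", ("in", "francisco")))   # unseen context: defers to the bigram estimate

In the second call the context ("in", "francisco") never occurs in the corpus, so the sketch falls back to the bigram estimate p_KN("is" | "francisco"), mirroring how the lower-order model carries the weight when higher-order evidence is absent.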

Modified Kneser–Ney smoothing

Modifications of this method also exist. Chen and Goodman's 1998 paper lists and benchmarks several such modifications. Computational efficiency and scaling to multi-core systems are the focus of Chen and Goodman's 1998 modification.[7] This approach was once used for Google Translate under a MapReduce implementation.[8] KenLM is a performant open-source implementation.[9]

Notes and References

  1. "A Bayesian Interpretation of Interpolated Kneser-Ney". NUS School of Computing Technical Report TRA2/06. http://www.stats.ox.ac.uk/~teh/research/compling/hpylm.pdf
  2. Ney, Hermann; Essen, Ute; Kneser, Reinhard (January 1994). "On structuring probabilistic dependences in stochastic language modelling". Computer Speech & Language. 8 (1): 1–38. doi:10.1006/csla.1994.1001.
  3. "Brown University: Introduction to Computational Linguistics". http://cs.brown.edu/courses/cs146/assets/files/langmod.pdf
  4. "Kneser-Ney Smoothing Explained". http://www.foldl.me/2014/kneser-ney-smoothing/
  5. "NLP Tutorial: Smoothing". http://nlp.stanford.edu/~wcmac/papers/20050421-smoothing-tutorial.pdf
  6. "An empirical study of smoothing techniques for language modeling". http://u.cs.biu.ac.il/~yogo/courses/mt2013/papers/chen-goodman-99.pdf
  7. "An Empirical Study of Smoothing Techniques for Language Modeling". https://people.eecs.berkeley.edu/~klein/cs294-5/chen_goodman.pdf
  8. "Large Language Models in Machine Translation". https://aclanthology.org/D07-1090.pdf
  9. Heafield, Kenneth. "Estimation". KenLM code documentation (web site).