Noisy channel model

The noisy channel model is a framework used in spell checkers, question answering, speech recognition, and machine translation. In this model, the goal is to find the intended word given a word where the letters have been scrambled in some manner.

In spell-checking

See Chapter B of Jurafsky & Martin.[1]

Given an alphabet \Sigma, let \Sigma^* be the set of all finite strings over \Sigma. Let the dictionary D of valid words be some subset of \Sigma^*, i.e., D \subseteq \Sigma^*.

The noisy channel is the matrix \Gamma_{ws} = \Pr(s|w), where w \in D is the intended word and s \in \Sigma^* is the scrambled word that was actually received.

The goal of the noisy channel model is to find the intended word given the scrambled word that was received. The decision function \sigma : \Sigma^* \to D is a function that, given a scrambled word, returns the intended word.

Methods of constructing a decision function include the maximum likelihood rule, the maximum a posteriori rule, and the minimum distance rule.

In some cases, it may be better to accept the scrambled word as the intended word rather than attempt to find an intended word in the dictionary. For example, the word schönfinkeling may not be in the dictionary, but might in fact be the intended word.

Example

Consider the English alphabet \Sigma = \{a, b, c, ..., y, z, A, B, ..., Z, ...\}. Some subset D \subseteq \Sigma^* makes up the dictionary of valid English words.

There are several mistakes that may occur while typing, including:

  1. Missing letters, e.g., leter instead of letter
  2. Accidental letter additions, e.g., misstake instead of mistake
  3. Swapping letters, e.g., recieved instead of received
  4. Replacing letters, e.g., fimite instead of finite

To construct the noisy channel matrix \Gamma, we must consider the probability of each mistake, given the intended word (\Pr(s|w) for all w \in D and s \in \Sigma^*). These probabilities may be gathered, for example, by considering the Damerau–Levenshtein distance between s and w or by comparing the draft of an essay with one that has been manually edited for spelling.
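The Damerau–Levenshtein distance counts exactly the four mistake types above: deletions, insertions, substitutions, and adjacent transpositions. A sketch of its restricted (optimal string alignment) variant, with a crude, hypothetical channel model that makes \Pr(s|w) decay with distance:

```python
import math

def dl_distance(s, w):
    """Restricted Damerau-Levenshtein distance: insertions, deletions,
    substitutions, and adjacent transpositions each cost 1."""
    m, n = len(s), len(w)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == w[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and s[i - 1] == w[j - 2] and s[i - 2] == w[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

def channel_prob(s, w):
    """Crude, unnormalized stand-in for Pr(s|w): decays with edit distance."""
    return math.exp(-dl_distance(s, w))

print(dl_distance("recieved", "received"))  # -> 1 (one transposition)
print(dl_distance("leter", "letter"))       # -> 1 (one deletion)
```

A real channel model would instead estimate separate probabilities per edit type from error-annotated data, such as the edited-essay comparison mentioned above.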

In machine translation

See chapters 1 and 25 of Jurafsky & Martin.[2]

Suppose we want to translate a foreign language to English. We could model P(E|F) directly: the probability that we have English sentence E given foreign sentence F, and then pick the most likely one, \hat{E} = \arg\max_E P(E|F). However, by Bayes' law, we have the equivalent equation:

\hat{E} = \arg\max_E \overbrace{P(F|E)}^{\text{translation model}} \; \overbrace{P(E)}^{\text{language model}}

The benefit of the noisy-channel model is in terms of data: if collecting a parallel corpus is costly, then we would have only a small parallel corpus, so we could train only a moderately good English-to-foreign translation model and a moderately good foreign-to-English translation model. However, we can collect a large corpus in the foreign language only, and a large corpus in the English language only, to train two good language models. Combining these four models, we immediately get a good English-to-foreign translator and a good foreign-to-English translator.[3]

The cost of the noisy-channel model is that using Bayesian inference is more costly than using a translation model directly. Instead of reading out the most likely translation by \arg\max_E P(E|F), it has to read out predictions from both the translation model and the language model, multiply them, and search for the highest number.

In speech recognition

Speech recognition can be thought of as translating from a sound-language to a text-language. Consequently, we have

\hat{T} = \arg\max_T \overbrace{P(S|T)}^{\text{acoustic model}} \; \overbrace{P(T)}^{\text{language model}}

where P(S|T) is the probability that a speech sound S is produced if the speaker intends to say text T. Intuitively, this equation states that the most likely text is one that is both a likely text in the language and one that produces the speech sound with high probability.

The utility of the noisy-channel model is not in capacity. Theoretically, any noisy-channel model can be replicated by a direct P(T|S) model. However, the noisy-channel model factors the model into two parts, each appropriate for the situation, and consequently it is generally more well-behaved.

When a human speaks, they do not produce the sound directly: they first form the text they want to say in the language centers of the brain, and the text is then translated into sound by the motor cortex, vocal cords, and other parts of the body. The noisy-channel model matches this model of the human speaker, and so it is appropriate. This is borne out by the practical success of the noisy-channel model in speech recognition.

Example

Consider the sound-language sentence (written in IPA for English) S = aɪ wʊd laɪk wʌn tuː. There are three possible texts T_1, T_2, T_3:

T_1 = I would like one to.

T_2 = I would like one too.

T_3 = I would like one two.

that are equally likely, in the sense that P(S|T_1) = P(S|T_2) = P(S|T_3). With a good English language model, we would have P(T_2) > P(T_1) > P(T_3), since the second sentence is grammatical, the first is not quite grammatical but is close to a grammatical one (such as "I would like one to [go]."), while the third is far from grammatical.

Consequently, the noisy-channel model would output T_2 as the best transcription.


References

  1. Jurafsky, Daniel & Martin, James H. Speech and Language Processing. Draft of January 7, 2023. https://web.stanford.edu/~jurafsky/slp3/B.pdf
  2. Jurafsky, Dan & Martin, James H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. 2nd ed. Upper Saddle River, N.J., 2009. ISBN 978-0-13-187321-6.
  3. Brown, Peter F.; Della Pietra, Stephen A.; Della Pietra, Vincent J.; Mercer, Robert L. (1993). "The Mathematics of Statistical Machine Translation: Parameter Estimation". Computational Linguistics 19 (2): 263–311.