Rule-based machine translation explained

Rule-based machine translation (RBMT; the "classical approach" to MT) denotes machine translation systems based on linguistic information about the source and target languages, retrieved mainly from (monolingual, bilingual, or multilingual) dictionaries and grammars covering the main semantic, morphological, and syntactic regularities of each language. Given input sentences in a source language, an RBMT system transforms them into output sentences in a target language on the basis of morphological, syntactic, and semantic analysis of both languages involved in the translation task. RBMT has been progressively superseded by more efficient methods, particularly neural machine translation.[1]

History

See also: History of machine translation.

The first RBMT systems were developed in the early 1970s. The most important steps in this evolution were the emergence of the following RBMT systems:

Today, other common RBMT systems include:

Types of RBMT

There are three different types of rule-based machine translation systems:

  1. Direct systems (dictionary-based machine translation) map input words to output words with basic rules.
  2. Transfer systems (transfer-based machine translation) employ morphological and syntactic analysis.
  3. Interlingual systems (interlingua) use an abstract meaning representation.[4] [5]
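As a rough illustration of the first type, a direct system can be sketched as word-for-word lookup in a bilingual dictionary; the tiny English-to-German dictionary below is a toy assumption, not part of any real system:

```python
# Toy sketch of a direct (dictionary-based) MT system: word-for-word
# lookup with no syntactic analysis. The dictionary is an illustrative
# assumption covering only one example sentence.
DICTIONARY = {
    "a": "ein", "an": "ein", "girl": "Mädchen",
    "eats": "isst", "apple": "Apfel",
}

def direct_translate(sentence: str) -> str:
    """Translate word for word, keeping unknown words unchanged."""
    words = sentence.rstrip(".").lower().split()
    out = " ".join(DICTIONARY.get(w, w) for w in words)
    return out[0].upper() + out[1:] + "."

print(direct_translate("A girl eats an apple."))
# "Ein Mädchen isst ein Apfel." - note the missing accusative "einen":
# without syntactic analysis, case agreement cannot be produced.
```

The deliberately wrong output shows why direct systems are limited and why the transfer type adds morphological and syntactic analysis.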

RBMT systems can also be contrasted with example-based machine translation systems, while hybrid machine translation systems make use of many principles derived from RBMT.

Basic principles

The main approach of RBMT systems is to link the structure of the given input sentence with the structure of the required output sentence, necessarily preserving their meaning. The following example illustrates the general frame of RBMT:

A girl eats an apple. (Source language: English; target language: German)

Minimally, to get a German translation of this English sentence one needs:

  1. A dictionary that will map each English word to an appropriate German word.
  2. Rules representing regular English sentence structure.
  3. Rules representing regular German sentence structure.

Finally, we need rules by which these two structures can be related to each other.

Accordingly, we can state the following stages of translation:

1st: getting basic part-of-speech information for each source word:

a = indef.article; girl = noun; eats = verb; an = indef.article; apple = noun

2nd: getting syntactic information about the verb "to eat":

NP-eat-NP; here: eat – Present Simple, 3rd Person Singular, Active Voice

3rd: parsing the source sentence:

(NP an apple) = the object of eat

Often only partial parsing is sufficient to get to the syntactic structure of the source sentence and to map it onto the structure of the target sentence.

4th: translating English words into German:

a (category = indef.article) => ein (category = indef.article)

girl (category = noun) => Mädchen (category = noun)

eat (category = verb) => essen (category = verb)

an (category = indef.article) => ein (category = indef.article)

apple (category = noun) => Apfel (category = noun)

5th: Mapping dictionary entries into appropriate inflected forms (final generation):

A girl eats an apple. => Ein Mädchen isst einen Apfel.
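The five stages above can be sketched in code. Everything here (the lexicon entries, the SVO object detection, and the accusative-masculine article rule) is a simplified assumption covering only this one example sentence, not a general grammar:

```python
# Sketch of a transfer-style pipeline for "A girl eats an apple."
# Lexicon, categories, and inflection rules are toy assumptions.

# Stage 1: part-of-speech lookup for each source word.
POS = {"a": "indef.article", "girl": "noun", "eats": "verb",
       "an": "indef.article", "apple": "noun"}

# Stage 4: bilingual lexicon at the lemma level. German nouns carry
# gender, which the generation stage needs for article agreement.
LEXICON = {"a": ("ein", None), "an": ("ein", None),
           "girl": ("Mädchen", "neut"), "eats": ("essen", None),
           "apple": ("Apfel", "masc")}

def translate(sentence: str) -> str:
    words = sentence.rstrip(".").lower().split()
    tagged = [(w, POS[w]) for w in words]              # stages 1-2: analysis
    # Stage 3 (partial parsing): with SVO order, the NP after the verb
    # is the object of "eat".
    verb_idx = next(i for i, (_, p) in enumerate(tagged) if p == "verb")
    out = []
    for i, (w, pos) in enumerate(tagged):
        lemma, _ = LEXICON[w]
        if pos == "verb":
            out.append("isst")                         # essen: pres., 3rd sg.
        elif pos == "indef.article":
            noun_gender = LEXICON[tagged[i + 1][0]][1]
            # Stage 5: an article inside the object NP that modifies a
            # masculine noun takes the accusative form "einen".
            in_object = i > verb_idx
            out.append("einen" if in_object and noun_gender == "masc" else "ein")
        else:
            out.append(lemma)
    result = " ".join(out) + "."
    return result[0].upper() + result[1:]

print(translate("A girl eats an apple."))  # Ein Mädchen isst einen Apfel.
```

Unlike the direct word-for-word approach, the parse information (which NP is the object) is what licenses the inflected form "einen".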

Ontologies

An ontology is a formal representation of knowledge that includes the concepts (such as objects and processes) in a domain and some relations between them. If the stored information is of a linguistic nature, one can speak of a lexicon.[6] In NLP, ontologies can be used as a source of knowledge for machine translation systems. With access to a large knowledge base, rule-based systems can be enabled to resolve many (especially lexical) ambiguities on their own. In the following classic examples, as humans, we are able to interpret the prepositional phrase according to the context because we use our world knowledge, stored in our lexicons:

I saw a man/star/molecule with a microscope/telescope/binoculars.
Since the syntax does not change, a traditional rule-based machine translation system may not be able to differentiate between the meanings. With a large enough ontology as a source of knowledge, however, the possible interpretations of ambiguous words in a specific context can be reduced.[7]
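A minimal sketch of this idea, with an invented toy ontology (the concepts and relations below are illustrative assumptions, not a real knowledge base):

```python
# Toy ontology-style knowledge used to enumerate readings of
# "I saw a <object> with a <instrument>". Both relations are assumptions.

# Which kinds of entity an instrument is typically used to observe.
OBSERVABLE_WITH = {
    "microscope": {"molecule", "cell"},
    "telescope": {"star", "planet"},
    "binoculars": {"man", "bird", "star"},
}

# Which objects can plausibly possess an instrument.
CAN_POSSESS = {"man"}

def interpretations(obj: str, instrument: str) -> list[str]:
    """Return the readings of the PP attachment licensed by the ontology."""
    readings = []
    if obj in OBSERVABLE_WITH.get(instrument, set()):
        readings.append("PP modifies 'saw' (instrument of seeing)")
    if obj in CAN_POSSESS:
        readings.append(f"PP modifies '{obj}' (possession)")
    return readings

print(interpretations("molecule", "microscope"))  # only the instrument reading
print(interpretations("man", "binoculars"))       # both readings remain
```

For "molecule with a microscope" the ontology leaves a single reading, while "man with binoculars" stays genuinely ambiguous, matching human intuition.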

Building ontologies

The ontology generated for the PANGLOSS knowledge-based machine translation system in 1993 may serve as an example of how an ontology for NLP purposes can be compiled:[8] [9]

Components

The RBMT system contains:

a SL dictionary - needed by the source language morphological analyser for morphological analysis,

a bilingual dictionary - used by the translator to translate source language words into target language words,

a TL dictionary - needed by the target language morphological generator to generate target language words.[10]
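The division of labour between the three dictionaries can be sketched for a single word; every entry below is a toy assumption for the pair "eats" → "isst":

```python
# Sketch of the three RBMT dictionaries; entries are toy assumptions.

# SL dictionary: maps an inflected English form to a lemma plus
# morphological features (morphological analysis).
SL_DICT = {"eats": ("eat", {"tense": "pres", "person": 3, "num": "sg"})}

# Bilingual dictionary: maps SL lemmas to TL lemmas (lexical transfer).
BILINGUAL = {"eat": "essen"}

# TL dictionary: generates the inflected German form from a TL lemma
# plus features (morphological generation).
TL_DICT = {("essen", "pres", 3, "sg"): "isst"}

def translate_word(form: str) -> str:
    lemma, f = SL_DICT[form]       # morphological analysis
    tl_lemma = BILINGUAL[lemma]    # lexical transfer
    return TL_DICT[(tl_lemma, f["tense"], f["person"], f["num"])]  # generation

print(translate_word("eats"))  # isst
```

Keeping analysis, transfer, and generation in separate resources is what lets an RBMT system reuse, for example, the same German generator with different source languages.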

The RBMT system makes use of the following:

Advantages

Shortcomings

Literature

Links

Notes and References

  1. Wang, Haifeng; Wu, Hua; He, Zhongjun; Huang, Liang; Church, Kenneth Ward (2022). "Progress in Machine Translation". Engineering. ISSN 2095-8099.
  2. "MT Software". AAMT. Archived from the original on 2005-02-04.
  3. "Machine Translation in Japan" (January 1992). www.wtec.org. Archived from the original on 2018-02-12.
  4. Koehn, Philipp (2010). Statistical Machine Translation. Cambridge: Cambridge University Press, p. 15. ISBN 9780521874151.
  5. Nirenburg, Sergei (1989). "Knowledge-Based Machine Translation". Machine Translation 4 (1): 5–24. Kluwer Academic Publishers. JSTOR 40008396.
  6. Vossen, Piek (2003). "Ontologies". In: Mitkov, Ruslan (ed.), Handbook of Computational Linguistics, Chapter 25. Oxford: Oxford University Press.
  7. Vossen, Piek (2003). "Ontologies". In: Mitkov, Ruslan (ed.), Handbook of Computational Linguistics, Chapter 25. Oxford: Oxford University Press.
  8. Knight, Kevin (1993). "Building a Large Ontology for Machine Translation". In: Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21–24, 1993. Association for Computational Linguistics, pp. 185–190. doi:10.3115/1075671.1075713. ISBN 978-1-55860-324-0.
  9. Knight, Kevin; Luk, Steve K. (1994). "Building a Large-Scale Knowledge Base for Machine Translation". Paper presented at the Twelfth National Conference on Artificial Intelligence. arXiv:cmp-lg/9407029.
  10. Hettige, B.; Karunananda, A.S. (2011). "Computational Model of Grammar for English to Sinhala Machine Translation". 2011 International Conference on Advances in ICT for Emerging Regions (ICTer), pp. 26–31. doi:10.1109/ICTer.2011.6075022. ISBN 978-1-4577-1114-5.
  11. Lonsdale, Deryle; Mitamura, Teruko; Nyberg, Eric (1995). "Acquisition of Large Lexicons for Practical Knowledge-Based MT". Machine Translation 9 (3–4): 251–283. Kluwer Academic Publishers. doi:10.1007/BF00980580.
  12. Lagarda, A.-L.; Alabau, V.; Casacuberta, F.; Silva, R.; Díaz-de-Liaño, E. (2009). "Statistical Post-Editing of a Rule-Based Machine Translation System". Proceedings of NAACL HLT 2009: Short Papers, pp. 217–220, Boulder, Colorado. Association for Computational Linguistics.