Meta-circular evaluator explained

In computing, a meta-circular evaluator (MCE) or meta-circular interpreter (MCI) is an interpreter that defines each feature of the interpreted language using a similar facility of the interpreter's host language. For example, interpreting a lambda application may be implemented using function application.[1] Meta-circular evaluation is most prominent in the context of Lisp.[1][2] A self-interpreter is a meta-circular interpreter where the interpreted language is nearly identical to the host language; the two terms are often used synonymously.

History

See also: History of compiler construction.

The dissertation of Corrado Böhm[3] describes the design of a self-hosting compiler.[4] Due to the difficulty of compiling higher-order functions, many languages were instead defined via interpreters, most prominently Lisp.[1][5] The term "meta-circular" itself was coined by John C. Reynolds,[1] and was popularized through its use in the book Structure and Interpretation of Computer Programs.[6][7]

Self-interpreters

A self-interpreter is a meta-circular interpreter where the host language is also the language being interpreted.[8] A self-interpreter exhibits a universal function for the language in question, and can be helpful in learning certain aspects of the language.[2] However, a self-interpreter provides a circular, vacuous definition of most language constructs, and thus offers little insight into the interpreted language's semantics, such as its evaluation strategy. Addressing these issues produces the more general notion of a "definitional interpreter".[1]

From self-interpreter to abstract machine

This part is based on Section 3.2.4 of Danvy's thesis.[9]

Here is the core of a self-evaluator for the λ-calculus. The abstract syntax of the λ-calculus is implemented as follows in OCaml, representing variables with their de Bruijn index, i.e., with their lexical offset (starting from 0):

type term =
  | IND of int              (* de Bruijn index *)
  | ABS of term
  | APP of term * term
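For instance (an illustrative sketch; the names `identity`, `k_combinator`, and `church_two` are chosen here, and the type declaration is repeated so the fragment stands alone), the identity function λx.x, the K combinator λx.λy.x, and the Church numeral 2 are encoded as follows:

```ocaml
type term =
  | IND of int              (* de Bruijn index *)
  | ABS of term
  | APP of term * term

(* λx.x: the occurrence of x is 0 binders away from its binder *)
let identity : term = ABS (IND 0)

(* λx.λy.x: the occurrence of x is 1 binder away from its binder *)
let k_combinator : term = ABS (ABS (IND 1))

(* λf.λx.f (f x): the Church numeral 2 *)
let church_two : term = ABS (ABS (APP (IND 1, APP (IND 1, IND 0))))
```

Note that a de Bruijn index counts the binders between an occurrence and its binder, so the same variable can appear under different indices at different occurrences.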

The evaluator uses an environment:

type value = FUN of (value -> value)

let rec eval (t : term) (e : value list) : value =
  match t with
  | IND n -> List.nth e n
  | ABS t' -> FUN (fun v -> eval t' (v :: e))
  | APP (t0, t1) -> apply (eval t0 e) (eval t1 e)
and apply (FUN f : value) (a : value) =
  f a

let main (t : term) : value = eval t []
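As a usage sketch (the declarations above are repeated so the fragment stands alone; the Church numeral encoding and the effect-counting probe are illustration devices, not part of the evaluator), one can observe that evaluating the Church numeral 2 yields a value that applies its first argument twice:

```ocaml
type term = IND of int | ABS of term | APP of term * term
type value = FUN of (value -> value)

let rec eval (t : term) (e : value list) : value =
  match t with
  | IND n -> List.nth e n
  | ABS t' -> FUN (fun v -> eval t' (v :: e))
  | APP (t0, t1) -> apply (eval t0 e) (eval t1 e)
and apply (FUN f : value) (a : value) = f a

let main (t : term) : value = eval t []

(* λf.λx.f (f x): the Church numeral 2 *)
let church_two = ABS (ABS (APP (IND 1, APP (IND 1, IND 0))))

(* A host-level probe that counts how often it is applied. *)
let count = ref 0
let probe = FUN (fun v -> incr count; v)

(* Apply the evaluated numeral to the probe and an identity base value. *)
let _ = apply (apply (main church_two) probe) (FUN (fun v -> v))
(* At this point, !count = 2: the numeral applied the probe twice. *)
```

Injecting host functions such as `probe` is possible precisely because expressible and denotable values share the type `value`.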

Values (of type value) conflate expressible values (the result of evaluating an expression in an environment) and denotable values (the values denoted by variables in the environment), a terminology that is due to Christopher Strachey.[10][11]

Environments are represented as lists of denotable values.

The core evaluator has three clauses: one for variables, which are looked up in the environment; one for abstractions, which evaluate to functions of the host language; and one for applications, which evaluate both subterms and apply the resulting function value to the resulting argument value.

This evaluator is compositional in that each of its recursive calls is made over a proper sub-part of the given term. It is also higher-order, since the domain of values is a function space.

In "Definitional Interpreters", Reynolds addressed the question of whether such a self-interpreter is well defined. He answered in the negative, because the evaluation strategy of the defined language (the source language) is determined by the evaluation strategy of the defining language (the meta-language). If the meta-language follows call by value (as OCaml does), the source language follows call by value. If the meta-language follows call by name (as Algol 60 does), the source language follows call by name. And if the meta-language follows call by need (as Haskell does), the source language follows call by need.
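This dependence can be observed directly (a small experiment; the evaluator is repeated so the fragment stands alone, and the effectful probe injected through the initial environment is an illustration device): under OCaml's call by value, the argument of an application is evaluated even when the called function discards it.

```ocaml
type term = IND of int | ABS of term | APP of term * term
type value = FUN of (value -> value)

let rec eval (t : term) (e : value list) : value =
  match t with
  | IND n -> List.nth e n
  | ABS t' -> FUN (fun v -> eval t' (v :: e))
  | APP (t0, t1) -> apply (eval t0 e) (eval t1 e)
and apply (FUN f : value) (a : value) = f a

(* K = λx.λy.x discards its second argument. *)
let k_combinator = ABS (ABS (IND 1))

(* A host-level probe that records whether it was ever applied. *)
let applied = ref false
let probe = FUN (fun v -> applied := true; v)

(* The term (K (λz.z)) (x0 (λz.z)), where the free variable x0
   (de Bruijn index 0) is bound to the probe in the environment. *)
let discarded_argument = APP (IND 0, ABS (IND 0))
let t = APP (APP (k_combinator, ABS (IND 0)), discarded_argument)
let _ = eval t [probe]
(* Because the host follows call by value, the discarded argument was
   still evaluated: !applied is now true. Under a call-by-name host,
   the probe would never have been touched. *)
```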

In "Definitional Interpreters", Reynolds made a self-interpreter well defined by making it independent of the evaluation strategy of its defining language. He fixed the evaluation strategy by transforming the self-interpreter into continuation-passing style, which is evaluation-strategy independent, as later captured in Gordon Plotkin's independence theorems.[12]
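The transformation can be sketched as follows (an illustrative reconstruction in the article's OCaml setting, not Reynolds's original code): every function, including the evaluator itself, receives an explicit continuation, so the order of evaluation is written out in the program text rather than inherited from the host.

```ocaml
type term = IND of int | ABS of term | APP of term * term

(* Functional values now take an explicit continuation. *)
type value = FUN of (value -> (value -> value) -> value)

let rec eval (t : term) (e : value list) (k : value -> value) : value =
  match t with
  | IND n -> k (List.nth e n)
  | ABS t' -> k (FUN (fun v k' -> eval t' (v :: e) k'))
  | APP (t0, t1) ->
      (* The sequencing -- first t0, then t1, then the call -- is now
         explicit and no longer depends on the host's strategy. *)
      eval t0 e (fun f -> eval t1 e (fun a -> apply f a k))
and apply (FUN f : value) (a : value) (k : value -> value) = f a k

let main (t : term) : value = eval t [] (fun v -> v)
```

This version encodes call by value; a call-by-name variant would instead pass the unevaluated argument's evaluation as a suspended computation.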

Furthermore, because logical relations had yet to be discovered, Reynolds made the resulting continuation-passing evaluator first order by (1) closure-converting it and (2) defunctionalizing the continuation. He pointed out the "machine-like quality" of the resulting interpreter, which is the origin of the CEK machine since Reynolds's CPS transformation was for call by value.[13] For call by name, these transformations map the self-interpreter to an early instance of the Krivine machine.[14] The SECD machine and many other abstract machines can be inter-derived this way.[15] [16]
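The outcome for call by value can be sketched as follows (an illustrative reconstruction in the article's OCaml setting): closure conversion turns functional values into first-order pairs of a term and an environment, and defunctionalizing the continuation yields a transition system in the style of the CEK machine.

```ocaml
type term = IND of int | ABS of term | APP of term * term

(* Closure conversion: a value is now first-order data. *)
type value = CLO of term * env
and env = value list

(* Defunctionalized continuations: one constructor per
   continuation-producing site of the CPS evaluator. *)
type cont =
  | HALT
  | ARG of term * env * cont   (* next, evaluate the argument *)
  | CALL of value * cont       (* then, perform the call *)

let rec eval (t : term) (e : env) (k : cont) : value =
  match t with
  | IND n -> continue k (List.nth e n)
  | ABS t' -> continue k (CLO (t', e))
  | APP (t0, t1) -> eval t0 e (ARG (t1, e, k))
and continue (k : cont) (v : value) : value =
  match k with
  | HALT -> v
  | ARG (t1, e, k') -> eval t1 e (CALL (v, k'))
  | CALL (CLO (t', e), k') -> eval t' (v :: e) k'

let main (t : term) : value = eval t [] HALT
```

Here `eval` and `continue` are mutually tail-recursive transition functions over a control string, an environment, and a continuation, which is the "machine-like quality" Reynolds observed.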

It is remarkable that the three most famous abstract machines for the λ-calculus functionally correspond to the same self-interpreter.

Self-interpretation in total programming languages

Total functional programming languages that are strongly normalizing cannot be Turing complete; otherwise, one could solve the halting problem by checking whether a program type-checks, since every well-typed program in such a language terminates. That means that there are computable functions that cannot be defined in the total language.[17] In particular, it is impossible to define a self-interpreter in a total programming language, for example in any of the typed lambda calculi such as the simply typed lambda calculus, Jean-Yves Girard's System F, or Thierry Coquand's calculus of constructions.[18][19] Here, by "self-interpreter" we mean a program that takes a source term representation in some plain format (such as a string of characters) and returns a representation of the corresponding normalized term. This impossibility result does not hold for other definitions of "self-interpreter". For example, some authors have referred to functions of type π τ → τ as self-interpreters, where π τ is the type of representations of τ-typed terms. To avoid confusion, we will refer to these functions as self-recognizers. Brown and Palsberg showed that self-recognizers could be defined in several strongly normalizing languages, including System F and System Fω.[20] This turned out to be possible because the types of encoded terms, being reflected in the types of their representations, prevent the construction of a diagonal argument. In their paper, Brown and Palsberg claim to disprove the "conventional wisdom" that self-interpretation is impossible (and they refer to Wikipedia as an example of this conventional wisdom), but what they actually disprove is the impossibility of self-recognizers, a distinct concept. In their follow-up work, they switch to the more specific "self-recognizer" terminology used here, notably distinguishing these from "self-evaluators", of type π τ → π τ.[21] They also recognize that implementing self-evaluation seems harder than self-recognition, and leave the implementation of the former in a strongly normalizing language as an open problem.

Uses

In combination with an existing language implementation, meta-circular interpreters provide a baseline system from which to extend a language, either upwards by adding more features or downwards by compiling away features rather than interpreting them.[22] They are also useful for writing tools that are tightly integrated with the programming language, such as sophisticated debuggers. A language designed with a meta-circular implementation in mind is often more suited for building languages in general, even ones completely different from the host language.

Examples

Many languages have one or more meta-circular implementations. Below is a partial list.

Some languages with a meta-circular implementation designed from the bottom up, in grouped chronological order:

Some languages with a meta-circular implementation via third parties:


Notes and References

  1. John C. Reynolds. "Definitional Interpreters for Higher-Order Programming Languages". Proceedings of the 25th ACM National Conference (ACM '72), 1972, pp. 717–740. doi:10.1145/800194.805852.
  2. John C. Reynolds. "Definitional Interpreters Revisited". Higher-Order and Symbolic Computation 11(4), 1998, pp. 355–361. doi:10.1023/A:1010075320153.
  3. Corrado Böhm. "Calculatrices digitales. Du déchiffrage des formules logico-mathématiques par la machine même dans la conception du programme". Annali di Matematica Pura ed Applicata (4) 37, 1954, pp. 1–51.
  4. Donald E. Knuth and Luis Trabb Pardo. The Early Development of Programming Languages. August 1976, p. 36.
  5. John McCarthy. "A Universal LISP Function". In Lisp 1.5 Programmer's Manual, 1961, p. 10. http://www.softwarepreservation.org/projects/LISP/book/LISP%201.5%20Programmers%20Manual.pdf
  6. "The Metacircular Evaluator". In Structure and Interpretation of Computer Programs. MIT Press. http://mitpress.mit.edu/sites/default/files/sicp/full-text/book/book-Z-H-26.html#%_sec_4.1
  7. Brian Harvey. "Why Structure and Interpretation of Computer Programs matters". people.eecs.berkeley.edu. Retrieved 14 April 2017.
  8. Reginald Braithwaite. "The significance of the meta-circular interpreter". 22 November 2006.
  9. Olivier Danvy. An Analytical Approach to Programs as Data Objects. DSc thesis, 2006. doi:10.7146/aul.214.152. ISBN 9788775073948.
  10. Christopher Strachey. "Fundamental Concepts in Programming Languages". 1967. doi:10.1023/A:1010000313106.
  11. Peter D. Mosses. "A Foreword to 'Fundamental Concepts in Programming Languages'". Higher-Order and Symbolic Computation 13(1/2), 2000, pp. 7–9. doi:10.1023/A:1010048229036.
  12. Gordon D. Plotkin. "Call-by-name, call-by-value and the λ-calculus". Theoretical Computer Science 1(2), 1975, pp. 125–159. doi:10.1016/0304-3975(75)90017-1.
  13. Matthias Felleisen and Daniel P. Friedman. "Control Operators, the SECD Machine, and the λ-Calculus". In Formal Description of Programming Concepts III, Elsevier Science Publishers B.V. (North-Holland), 1986, pp. 193–217.
  14. David A. Schmidt. "State transition machines for lambda-calculus expressions". In Semantics-Directed Compiler Generation, Lecture Notes in Computer Science 94, 1980, pp. 415–440. doi:10.1007/3-540-10250-7_32.
  15. Olivier Danvy. "A Rational Deconstruction of Landin's SECD Machine". In Implementation and Application of Functional Languages, 16th International Workshop, IFL 2004, Revised Selected Papers, Lecture Notes in Computer Science 3474, Springer, pp. 52–71. ISSN 0909-0878.
  16. Mads Sig Ager, Dariusz Biernacki, Olivier Danvy, and Jan Midtgaard. "A Functional Correspondence between Evaluators and Abstract Machines". In Proceedings of the 5th International ACM SIGPLAN Conference on Principles and Practice of Declarative Programming (PPDP'03), 2003, pp. 8–19. doi:10.7146/brics.v10i13.21783.
  17. Rick Riolo, William P. Worzel, and Mark Kotanchek (eds.). Genetic Programming Theory and Practice XII. Springer, 2015, p. 59. ISBN 978-3-319-16030-6.
  18. Conor McBride.
  19. Andrej Bauer. Answer to "A total language that only a Turing complete language can interpret". Theoretical Computer Science Stack Exchange, June 2014.
  20. Matt Brown and Jens Palsberg. "Breaking through the Normalization Barrier: A Self-Interpreter for F-omega". In Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2016), pp. 5–17. doi:10.1145/2837614.2837623. http://web.cs.ucla.edu/~palsberg/paper/popl16-full.pdf
  21. Matt Brown and Jens Palsberg. "Typed Self-Evaluation via Intensional Type Functions". In Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages (POPL 2017), pp. 415–428. doi:10.1145/3009837.3009853.
  22. Manuel Oriol and Bertrand Meyer (eds.). Objects, Components, Models and Patterns: 47th International Conference, TOOLS EUROPE 2009, Zurich, Switzerland, June 29 – July 3, 2009, Proceedings. Springer, 2009, p. 330. ISBN 9783642025716.
  23. Meta-circular implementation of the Pico programming language. http://pico.vub.ac.be/mc/mccode/MCeval.html