Inductive programming explained

Inductive programming (IP) is a special area of automatic programming, covering research from artificial intelligence and programming, which addresses learning of typically declarative (logic or functional) and often recursive programs from incomplete specifications, such as input/output examples or constraints.

Depending on the programming language used, there are several kinds of inductive programming. Inductive functional programming, which uses functional programming languages such as Lisp or Haskell, and especially inductive logic programming, which uses logic programming languages such as Prolog and other logical representations such as description logics, have been the most prominent, but other programming language paradigms have also been used, such as constraint programming or probabilistic programming.

Definition

Inductive programming incorporates all approaches concerned with learning programs or algorithms from incomplete (formal) specifications. Possible inputs to an IP system include: a set of training inputs with their corresponding outputs, or an output evaluation function describing the desired behavior of the intended program; traces or action sequences describing the process of calculating specific outputs; constraints on the program to be induced concerning its time efficiency or its complexity; various kinds of background knowledge, such as standard data types, predefined functions to be used, or program schemes or templates describing the data flow of the intended program; and heuristics for guiding the search for a solution, or other biases.

The output of an IP system is a program in some arbitrary programming language containing conditionals and loop or recursive control structures, or in any other kind of Turing-complete representation language.
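
As an illustrative sketch of such a specification (all names here are hypothetical, not from any particular IP system), a task given by input/output examples, background knowledge and a complexity constraint might be represented as follows, together with the check that a candidate program is correct with respect to the examples:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class IPTask:
    """Hypothetical container for an incomplete IP specification."""
    examples: list                                  # training (input, output) pairs
    background: dict = field(default_factory=dict)  # predefined functions available to the learner
    max_size: int = 10                              # complexity constraint on the induced program

def consistent(program: Callable, task: IPTask) -> bool:
    """A candidate program is acceptable if it reproduces every example."""
    return all(program(x) == y for x, y in task.examples)

# Specifying "length of a list" by just two examples:
task = IPTask(examples=[([1, 2, 3], 3), ([], 0)],
              background={"succ": lambda n: n + 1, "tail": lambda xs: xs[1:]})

print(consistent(len, task))   # True: a correct program satisfies the spec
print(consistent(sum, task))   # False: an incorrect one does not
```

The point of the sketch is only that the specification is incomplete: two examples admit many consistent programs, which is why biases such as background knowledge and complexity constraints matter.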

In many applications the output program must be correct with respect to the examples and partial specification, and this leads to the consideration of inductive programming as a special area inside automatic programming or program synthesis,[1] [2] usually contrasted with 'deductive' program synthesis,[3] [4] [5] where the specification is complete.

In other cases, inductive programming is seen as a more general area where any declarative programming or representation language can be used and we may even have some degree of error in the examples, as in general machine learning, the more specific area of structure mining or the area of symbolic artificial intelligence. A distinctive feature is the number of examples or partial specification needed. Typically, inductive programming techniques can learn from just a few examples.

The diversity of inductive programming usually comes from the applications and the languages that are used: apart from logic programming and functional programming, other programming paradigms and representation languages have been used or suggested in inductive programming, such as functional logic programming, constraint programming, probabilistic programming, abductive logic programming, modal logic, action languages, agent languages and many types of imperative languages.

History

Research on the inductive synthesis of recursive functional programs started in the early 1970s and was brought onto firm theoretical foundations with the seminal THESIS system of Summers[6] and work of Biermann.[7] These approaches were split into two phases: first, input-output examples are transformed into non-recursive programs (traces) using a small set of basic operators; second, regularities in the traces are searched for and used to fold them into a recursive program. The main results until the mid-1980s are surveyed by Smith.[8] Due to limited progress with respect to the range of programs that could be synthesized, research activities decreased significantly in the next decade.
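
The two-phase approach can be illustrated with a toy sketch (a simplification for illustration, not the actual THESIS algorithm): first explain each example by a trace, i.e. a composition of basic list operators, then fold the regularity across traces into a recursive program. Here the target is the function returning the last element of a list:

```python
from collections import deque

def head(xs): return xs[0]
def tail(xs): return xs[1:]

def find_trace(inp, out, depth=6):
    """Phase 1: breadth-first search for a non-recursive trace, i.e. a
    composition of the basic operators head/tail mapping inp to out."""
    queue = deque([(inp, [])])
    while queue:
        val, ops = queue.popleft()
        if val == out:
            return ops
        if len(ops) < depth and isinstance(val, list) and val:
            queue.append((head(val), ops + ["head"]))
            queue.append((tail(val), ops + ["tail"]))
    return None

examples = [([1], 1), ([1, 2], 2), ([1, 2, 3], 3)]   # examples for 'last'
traces = [find_trace(i, o) for i, o in examples]
print(traces)   # [['head'], ['tail', 'head'], ['tail', 'tail', 'head']]

# Phase 2: detect the regularity -- each trace is the previous one with an
# extra leading 'tail' -- and fold it into a recursive program.
if all(traces[k] == ["tail"] + traces[k - 1] for k in range(1, len(traces))):
    def folded(xs):
        return head(xs) if len(xs) == 1 else folded(tail(xs))
    print(folded([4, 5, 6, 7]))   # 7
```

The folded program generalizes beyond the examples: it handles lists of any length, whereas each trace only handles one.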

The advent of logic programming brought a new élan but also a new direction in the early 1980s, especially due to the MIS system of Shapiro,[9] eventually spawning the new field of inductive logic programming (ILP).[10] The early works of Plotkin,[11] [12] and his "relative least general generalization (rlgg)", had an enormous impact on inductive logic programming. Most ILP work addresses a wider class of problems, as the focus is not only on recursive logic programs but on machine learning of symbolic hypotheses from logical representations. However, there were some encouraging results on learning recursive Prolog programs such as quicksort from examples together with suitable background knowledge, for example with GOLEM.[13] But again, after initial success, the community was disappointed by the limited progress on the induction of recursive programs,[14] [15] [16] and ILP focused less and less on recursive programs, leaning more and more towards a machine learning setting with applications in relational data mining and knowledge discovery.

In parallel to work in ILP, Koza[17] proposed genetic programming in the early 1990s as a generate-and-test based approach to learning programs. The idea of genetic programming was further developed into the inductive programming system ADATE[18] and the systematic-search-based system MagicHaskeller.[19] Here again, functional programs are learned from sets of positive examples together with an output evaluation (fitness) function which specifies the desired input/output behavior of the program to be learned.
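
A minimal generate-and-test sketch (hypothetical names, and vastly simpler than ADATE or MagicHaskeller) shows the role of the fitness function: enumerate small candidate expressions and score each by the fraction of examples it reproduces.

```python
def fitness(program, examples):
    """Fraction of input/output examples the candidate reproduces."""
    hits = 0
    for x, y in examples:
        try:
            if program(x) == y:
                hits += 1
        except Exception:
            pass   # crashing candidates simply score lower
    return hits / len(examples)

def expressions(depth):
    """Generate: enumerate candidate expression strings over a tiny
    grammar of terminals {x, 1, 2} and operators {+, *}."""
    if depth == 0:
        yield from ("x", "1", "2")
        return
    yield from expressions(depth - 1)
    for op in ("+", "*"):
        for a in expressions(depth - 1):
            for b in expressions(depth - 1):
                yield f"({a}{op}{b})"

examples = [(0, 1), (1, 3), (2, 5), (3, 7)]   # target behaviour: 2*x + 1

# Test: keep the candidate with the highest fitness.
best = max(expressions(2),
           key=lambda e: fitness(lambda x, e=e: eval(e, {"x": x}), examples))
print(best, fitness(lambda x: eval(best, {"x": x}), examples))
```

Genetic programming replaces this exhaustive enumeration with mutation and crossover of candidate programs, but the fitness-driven test step is the same in spirit.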

The early work in grammar induction (also known as grammatical inference) is related to inductive programming, as rewriting systems or logic programs can be used to represent production rules. In fact, early works in inductive inference considered grammar induction and Lisp program inference as basically the same problem.[20] The results in terms of learnability were related to classical concepts, such as identification-in-the-limit, as introduced in the seminal work of Gold.[21] More recently, the language learning problem was addressed by the inductive programming community.[22] [23]

In recent years, the classical approaches have been resumed and advanced with great success. The synthesis problem has been reformulated on the basis of constructor-based term rewriting systems, taking into account modern techniques of functional programming, as well as moderate use of search-based strategies, background knowledge, and the automatic invention of subprograms. Many new and successful applications have recently appeared beyond program synthesis, most especially in the areas of data manipulation, programming by example and cognitive modelling (see below).

Other ideas have also been explored with the common characteristic of using declarative languages for the representation of hypotheses. For instance, the use of higher-order features, schemes or structured distances has been advocated for a better handling of recursive data types and structures;[24] [25] [26] abstraction has also been explored as a more powerful approach to cumulative learning and function invention.[27] [28]

One powerful paradigm that has been recently used for the representation of hypotheses in inductive programming (generally in the form of generative models) is probabilistic programming (and related paradigms, such as stochastic logic programs and Bayesian logic programming).[29] [30] [28] [31]

Application areas

The first workshop on Approaches and Applications of Inductive Programming (AAIP), held in conjunction with ICML 2005, identified application areas where "learning of programs or recursive rules are called for, [...] first in the domain of software engineering where structural learning, software assistants and software agents can help to relieve programmers from routine tasks, give programming support for end users, or support of novice programmers and programming tutor systems. Further areas of application are language learning, learning recursive control rules for AI-planning, learning recursive concepts in web-mining or for data-format transformations".

Since then, these and many other areas have proven to be successful application niches for inductive programming, such as end-user programming,[32] the related areas of programming by example[33] and programming by demonstration,[34] and intelligent tutoring systems.

Other areas where inductive inference has recently been applied include knowledge acquisition,[35] artificial general intelligence,[36] reinforcement learning and theory evaluation,[37] [38] and cognitive science in general.[39] [31] There may also be prospective applications in intelligent agents, games, robotics, personalisation, ambient intelligence and human interfaces.

Notes and References

  1. A.W. Biermann. "Automatic programming". In S.C. Shapiro (ed.), Encyclopedia of Artificial Intelligence, pp. 18–35, 1992.
  2. C. Rich, R.C. Waters. "Approaches to automatic programming". In M.C. Yovits (ed.), Advances in Computers, vol. 37, pp. 1–57, 1993. doi:10.1016/S0065-2458(08)60402-7. ISBN 9780120121373.
  3. M.L. Lowry, R.D. McCarthy. Automatic Software Design. 1991.
  4. Z. Manna, R. Waldinger. "Fundamentals of deductive program synthesis". IEEE Transactions on Software Engineering, 18(8):674–704, 1992. doi:10.1109/32.153379.
  5. P. Flener. "Achievements and Prospects of Program Synthesis". In A. Kakas, F. Sadri (eds.), Computational Logic: Logic Programming and Beyond; Essays in Honour of Robert A. Kowalski, Lecture Notes in Computer Science (LNAI) 2407, pp. 310–346, 2002. doi:10.1007/3-540-45628-7_13. ISBN 978-3-540-43959-2.
  6. P.D. Summers. "A methodology for LISP program construction from examples". Journal of the ACM, 24(1):161–175, 1977. doi:10.1145/321992.322002.
  7. A.W. Biermann. "The inference of regular LISP programs from examples". IEEE Transactions on Systems, Man, and Cybernetics, 8(8):585–600, 1978. doi:10.1109/tsmc.1978.4310035.
  8. D.R. Smith. "The synthesis of LISP programs from examples: a survey". In A.W. Biermann, G. Guiho (eds.), Automatic Program Construction Techniques, pp. 307–324, 1984.
  9. E.Y. Shapiro. Algorithmic Program Debugging. The MIT Press, 1983.
  10. S. Muggleton. "Inductive logic programming". New Generation Computing, 8(4):295–318, 1991. doi:10.1007/BF03037089.
  11. G.D. Plotkin. "A Note on Inductive Generalization". In B. Meltzer, D. Michie (eds.), Machine Intelligence, vol. 5, pp. 153–163, 1970.
  12. G.D. Plotkin. "A Further Note on Inductive Generalization". In B. Meltzer, D. Michie (eds.), Machine Intelligence, vol. 6, pp. 101–124, 1971.
  13. S.H. Muggleton, C. Feng. "Efficient induction of logic programs". Proceedings of the Workshop on Algorithmic Learning Theory, 6:368–381, 1990.
  14. J.R. Quinlan, R.M. Cameron-Jones. "Avoiding Pitfalls When Learning Recursive Theories". IJCAI, pp. 1050–1057, 1993.
  15. J.R. Quinlan, R.M. Cameron-Jones. "Induction of logic programs: FOIL and related systems". New Generation Computing, 13(3–4):287–312, 1995. Archived at https://web.archive.org/web/20170907080358/http://dottorato.di.uniba.it/dottoratoXXVI/dm/FOILvsRelatedSystems.pdf.
  16. P. Flener, S. Yilmaz. "Inductive synthesis of recursive logic programs: Achievements and prospects". The Journal of Logic Programming, 41(2):141–195, 1999. doi:10.1016/s0743-1066(99)00028-x.
  17. J.R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, 1992. ISBN 9780262111706.
  18. J.R. Olsson. "Inductive functional programming using incremental program transformation". Artificial Intelligence, 74(1):55–83, 1995. doi:10.1016/0004-3702(94)00042-y.
  19. S. Katayama. "Efficient Exhaustive Generation of Functional Programs Using Monte-Carlo Search with Iterative Deepening". In PRICAI 2008: Trends in Artificial Intelligence, Lecture Notes in Computer Science 5351, pp. 199–210, 2008. doi:10.1007/978-3-540-89197-0_21. http://nautilus.cs.miyazaki-u.ac.jp/~skata/skatayama_pricai2008.pdf
  20. D. Angluin, C.H. Smith. "Inductive inference: Theory and methods". ACM Computing Surveys, 15(3):237–269, 1983. doi:10.1145/356914.356918.
  21. E.M. Gold. "Language identification in the limit". Information and Control, 10(5):447–474, 1967. doi:10.1016/s0019-9958(67)91165-5.
  22. S. Muggleton. "Inductive Logic Programming: Issues, Results and the Challenge of Learning Language in Logic". Artificial Intelligence, 114(1–2):283–296, 1999. doi:10.1016/s0004-3702(99)00067-3. Here: Sect. 2.1.
  23. J.R. Olsson, D.M.W. Powers. "Machine learning of human language through automatic programming". Proceedings of the International Conference on Cognitive Science, pp. 507–512, 2003.
  24. J.W. Lloyd. "Knowledge Representation, Computation, and Learning in Higher-order Logic". 2001.
  25. J.W. Lloyd. Logic for Learning: Learning Comprehensible Theories from Structured Data. Springer, 2003. ISBN 9783662084069.
  26. V. Estruch, C. Ferri, J. Hernandez-Orallo, M.J. Ramirez-Quintana. "Bridging the gap between distance and generalization". Computational Intelligence, 30(3):473–513, 2014. doi:10.1111/coin.12004.
  27. R.J. Henderson, S.H. Muggleton. "Automatic invention of functional abstractions". Advances in Inductive Logic Programming, 2012.
  28. I. Hwang, A. Stuhlmuller, N.D. Goodman. "Inducing probabilistic programs by Bayesian program merging". arXiv:1110.5667 [cs.AI], 2011.
  29. S. Muggleton. "Learning stochastic logic programs". Electronic Transactions on Artificial Intelligence, 4(B):141–153, 2000. Archived at https://web.archive.org/web/20170907080041/https://ocs.aaai.org/Papers/Workshops/2000/WS-00-06/WS00-06-006.pdf.
  30. L. De Raedt, K. Kersting. Probabilistic Inductive Logic Programming. Springer, 2008.
  31. A. Stuhlmuller, N.D. Goodman. "Reasoning about reasoning by nested conditioning: Modeling theory of mind with probabilistic programs". Cognitive Systems Research, 28:80–99, 2012. doi:10.1016/j.cogsys.2013.07.003.
  32. H. Lieberman, F. Paternò, V. Wulf (eds.). End User Development. Springer, 2006.
  33. H. Lieberman (ed.). Your Wish Is My Command: Programming by Example. Morgan Kaufmann, 2001. ISBN 9781558606883.
  34. A. Cypher, D.C. Halbert (eds.). Watch What I Do: Programming by Demonstration. MIT Press, 1993. ISBN 9780262032131.
  35. U. Schmid, M. Hofmann, E. Kitzelmann. "Analytical inductive programming as a cognitive rule acquisition devise". Proceedings of the Second Conference on Artificial General Intelligence, pp. 162–167, 2009.
  36. N. Crossley, E. Kitzelmann, M. Hofmann, U. Schmid. "Combining analytical and evolutionary inductive programming". Proceedings of the Second Conference on Artificial General Intelligence, pp. 19–24, 2009.
  37. J. Hernandez-Orallo. "Constructive reinforcement learning". International Journal of Intelligent Systems, 15(3):241–264, 2000. doi:10.1002/(sici)1098-111x(200003)15:3<241::aid-int6>3.0.co;2-z.
  38. C. Kemp, N. Goodman, J.B. Tenenbaum. "Learning and using relational theories". Advances in Neural Information Processing Systems, pp. 753–760, 2007.
  39. U. Schmid, E. Kitzelmann. "Inductive rule learning on the knowledge level". Cognitive Systems Research, 12(3):237–248, 2011. doi:10.1016/j.cogsys.2010.12.002.