Neuro-symbolic AI

Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures to address the weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling. As argued by Leslie Valiant and others,[1] the effective construction of rich computational cognitive models demands the combination of symbolic reasoning and efficient machine learning. Gary Marcus argued, "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning." Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol manipulation in our toolkit. Too much useful knowledge is abstract to proceed without tools that represent and manipulate abstraction, and to date, the only known machinery that can manipulate such abstract knowledge reliably is the apparatus of symbol manipulation."

Henry Kautz, Francesca Rossi, and Bart Selman also argued for a synthesis. Their arguments attempt to address the two kinds of thinking discussed in Daniel Kahneman's book Thinking, Fast and Slow, which describes cognition as encompassing two components: System 1 is fast, reflexive, intuitive, and unconscious; System 2 is slower, step-by-step, and explicit. System 1 is used for pattern recognition, while System 2 handles planning, deduction, and deliberative thinking. In this view, deep learning best handles the first kind of cognition while symbolic reasoning best handles the second. Both are needed for a robust, reliable AI that can learn, reason, and interact with humans to accept advice and answer questions. Such dual-process models, with explicit references to the two contrasting systems, have been worked on since the 1990s, both in AI and in cognitive science, by multiple researchers.
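
This division of labor can be illustrated with a small sketch. The following Python snippet is a minimal, hypothetical illustration (not taken from the cited authors): a fast, learned "System 1" component proposes an answer from stored associations, and an explicit "System 2" component verifies it against a symbolic knowledge base, taking over when the proposal fails the check. All names and the toy knowledge base are invented for illustration.

```python
# Minimal sketch of a System 1 / System 2 hybrid (hypothetical, for illustration).

def system1_propose(query: str) -> str:
    """Stand-in for a trained neural model: fast, pattern-based, fallible."""
    learned_associations = {"capital of France": "Paris"}
    return learned_associations.get(query, "unknown")

def system2_verify(query: str, answer: str, kb: dict) -> bool:
    """Explicit symbolic check of a proposed answer against stored knowledge."""
    return kb.get(query) == answer

def system2_deliberate(query: str, kb: dict) -> str:
    """Slow, step-by-step fallback: derive the answer from explicit knowledge."""
    return kb.get(query, "no answer derivable")

KB = {"capital of France": "Paris", "capital of Australia": "Canberra"}

def answer(query: str) -> str:
    proposal = system1_propose(query)        # fast, intuitive guess
    if system2_verify(query, proposal, KB):  # explicit verification
        return proposal
    return system2_deliberate(query, KB)     # deliberate reasoning takes over

print(answer("capital of France"))     # Paris (System 1 suffices)
print(answer("capital of Australia"))  # Canberra (System 2 takes over)
```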

Approaches

Approaches for integration are diverse. Henry Kautz's taxonomy of neuro-symbolic architectures follows, along with some examples:

- Symbolic Neural symbolic is the approach of many current neural models in natural language processing, where words or subword tokens are both the ultimate input and output of large language models.
- Symbolic[Neural] is exemplified by AlphaGo, where symbolic techniques are used to invoke neural techniques; here the symbolic approach is Monte Carlo tree search and the neural techniques learn how to evaluate game positions.
- Neural | Symbolic uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically (a minimal sketch of this pattern appears below).
- Neural: Symbolic → Neural relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model.
- Neural_{Symbolic} uses a neural network that is generated from symbolic rules, as in the Neural Theorem Prover,[2] which constructs a neural network from an AND-OR proof tree generated from knowledge-base rules and terms; Logic Tensor Networks also fall into this category.[3]
- Neural[Symbolic] allows a neural model to call a symbolic reasoning engine directly, e.g., to perform an action or evaluate a state.

These categories are not exhaustive, as they do not consider multi-agent systems. In 2005, Bader and Hitzler presented a more fine-grained categorization that considered, for example, whether the use of symbols involved logic and, if so, whether the logic was propositional or first-order. That 2005 categorization and Kautz's taxonomy above are compared and contrasted in a 2021 article.[4] More recently, Sepp Hochreiter argued that Graph Neural Networks "...are the predominant models of neural-symbolic computing"[5] since "[t]hey describe the properties of molecules, simulate social networks, or predict future states in physical and engineering applications with particle-particle interactions."[6]
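
As a concrete, purely illustrative example of the Neural | Symbolic pattern above, the following sketch assumes a hypothetical perception module that converts raw input into symbolic facts, which a naive forward-chaining rule engine then reasons over. The perception step is stubbed out, since a real system would run a trained neural network there; the facts, rules, and function names are invented for illustration.

```python
# Hypothetical sketch of Kautz's "Neural | Symbolic" pattern (illustration only).

def perceive(raw_input) -> set:
    """Stand-in for a trained vision model that maps raw input to symbolic facts.
    A real system would run a neural network here; this stub returns facts directly."""
    return {("left_of", "cube", "sphere"), ("color", "cube", "red")}

# Each rule maps a body pattern to a head pattern; "?x" and "?y" are variables.
RULES = [
    (("left_of", "?x", "?y"), ("right_of", "?y", "?x")),
]

def forward_chain(facts: set) -> set:
    """Naive forward chaining: apply rules until no new facts are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (body_pred, bx, by), (head_pred, hx, hy) in RULES:
            for pred, a, b in list(derived):
                if pred != body_pred:
                    continue
                binding = {bx: a, by: b}
                new_fact = (head_pred, binding[hx], binding[hy])
                if new_fact not in derived:
                    derived.add(new_fact)
                    changed = True
    return derived

facts = forward_chain(perceive(raw_input=None))
print(("right_of", "sphere", "cube") in facts)  # True
```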

Artificial general intelligence

Gary Marcus argues that "...hybrid architectures that combine learning and symbol manipulation are necessary for robust intelligence, but not sufficient", and that robust AI additionally requires rich prior knowledge in the form of large-scale knowledge bases, reasoning mechanisms able to use that knowledge tractably, and rich cognitive models that work together with both.

This echoes calls for hybrid models dating back to at least the 1990s.

History

Garcez and Lamb describe research in this area as having been ongoing since at least the 1990s, when the terms symbolic and sub-symbolic AI were in common use.

A series of workshops on neuro-symbolic AI, under the title Neuro-Symbolic Artificial Intelligence, has been held annually since 2005.[7] An initial set of workshops on the topic was organized in the early 1990s.

Research

Key research questions remain open, such as how best to integrate neural and symbolic architectures, how symbolic structures should be represented within neural networks and extracted from them, how common-sense knowledge should be learned and reasoned about, and how abstract knowledge that is hard to encode logically can be handled.

Implementations

Implementations of neuro-symbolic approaches include:

- AllegroGraph: an integrated knowledge graph platform for neuro-symbolic application development.[8][9][10]
- Scallop: a language based on Datalog that supports differentiable logical and relational reasoning and can be integrated with Python and PyTorch learning modules.[11]
- Logic Tensor Networks: encode logical formulas as neural networks and simultaneously learn term encodings, term weights, and formula weights (see the sketch below).[3]
- Explainable Neural Networks (XNNs): combine neural networks with symbolic hypergraphs and are trained using a mixture of backpropagation and symbolic learning called induction.[12]
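
As a rough illustration of the idea behind Logic Tensor Networks listed above (a simplified sketch, not the actual LTN library API), logical predicates are implemented as differentiable functions that return truth values in [0, 1], connectives are computed with fuzzy-logic operators, and learning then amounts to maximizing the degree to which a knowledge base of formulas is satisfied. The predicate, data, and formula below are invented for illustration.

```python
# Simplified, hypothetical sketch of the Logic Tensor Networks idea (not the LTN API).

import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=3), 0.0  # parameters of the neural predicate Smokes(x)

def smokes(x: np.ndarray) -> float:
    """Neural predicate: truth value in [0, 1] for a person's feature vector."""
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))

def implies(a: float, c: float) -> float:
    """Reichenbach fuzzy implication: 1 - a + a*c."""
    return 1.0 - a + a * c

def forall(truth_values) -> float:
    """Universal quantifier approximated by the mean truth value."""
    return float(np.mean(truth_values))

people = rng.normal(size=(5, 3))      # toy feature vectors
friend_pairs = [(0, 1), (2, 3)]       # toy Friends(i, j) facts

# Degree to which the formula "friends of smokers smoke" holds on this data.
satisfaction = forall([implies(smokes(people[i]), smokes(people[j]))
                       for i, j in friend_pairs])
print(satisfaction)
# A full implementation would adjust W and b (and any embeddings) by gradient
# descent to maximize the satisfaction of all formulas in the knowledge base.
```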


Notes and References

  1. D'Avila Garcez, Artur S.; Lamb, Luis C.; Gabbay, Dov M. (2009). Neural-Symbolic Cognitive Reasoning. Cognitive Technologies. Springer. ISBN 978-3-540-73245-7.
  2. Rocktäschel, Tim; Riedel, Sebastian (2016). "Learning Knowledge Base Inference with Neural Theorem Provers". Proceedings of the 5th Workshop on Automated Knowledge Base Construction. San Diego, CA: Association for Computational Linguistics. pp. 45–50. doi:10.18653/v1/W16-1309.
  3. Serafini, Luciano; Garcez, Artur d'Avila (2016). "Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge". arXiv:1606.04422 [cs.AI].
  4. Sarker, Md Kamruzzaman; Zhou, Lu; Eberhart, Aaron; Hitzler, Pascal (2021). "Neuro-symbolic artificial intelligence: Current trends". AI Communications. 34 (3): 197–209. doi:10.3233/AIC-210084. S2CID 239199144.
  5. Lamb, L.C.; d'Avila Garcez, A.S.; Gori, M.; Prates, M.O.R.; Avelar, P.H.C.; Vardi, M.Y. (2020). "Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective". CoRR abs/2003.00330.
  6. Hochreiter, Sepp (April 2022). "Toward a broad AI". Communications of the ACM. 65 (4): 56–57. doi:10.1145/3512715. ISSN 0001-0782.
  7. "Neuro-Symbolic Artificial Intelligence". people.cs.ksu.edu. Retrieved 2023-09-11.
  8. Harper, Jelani (2023-12-29). "AllegroGraph 8.0 Incorporates Neuro-Symbolic AI, a Pathway to AGI". The New Stack. Retrieved 2024-06-13.
  9. "Neuro-Symbolic AI and Large Language Models Introduction AllegroGraph 8.1.1". franz.com. Retrieved 2024-06-13.
  10. "Franz Inc. Introduces AllegroGraph Cloud: A Managed Service for Neuro-Symbolic AI Knowledge Graphs". Datanami. Retrieved 2024-06-13.
  11. Li, Ziyang; Huang, Jiani; Naik, Mayur (2023). "Scallop: A Language for Neurosymbolic Programming". arXiv:2304.04812 [cs.PL].
  12. "Model Induction Method for Explainable AI". USPTO. 2021-05-06.