Neuro-symbolic AI

Source: Wikipedia, the free encyclopedia.

Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures, aiming to combine symbolic reasoning and efficient machine learning. Gary Marcus argued, "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning."[4] Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol manipulation in our toolkit. Too much useful knowledge is abstract to proceed without tools that represent and manipulate abstraction, and to date, the only known machinery that can manipulate such abstract knowledge reliably is the apparatus of symbol manipulation."[5]

This view parallels Daniel Kahneman's book Thinking, Fast and Slow, which describes cognition as encompassing two components: System 1 is fast, reflexive, intuitive, and unconscious, while System 2 is slower, step-by-step, and explicit. System 1 is used for pattern recognition; System 2 handles planning, deduction, and deliberative thinking. On this analogy, deep learning best handles the first kind of cognition while symbolic reasoning best handles the second. Both are needed for a robust, reliable AI that can learn, reason, and interact with humans to accept advice and answer questions. Such dual-process models, with explicit references to the two contrasting systems, have been worked on since the 1990s, both in AI and in cognitive science, by multiple researchers.[9]
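The division of labor described above can be sketched in code. The following toy pipeline is an illustration only, not any particular system: a hypothetical lookup stands in for a neural pattern recognizer (System 1), and explicit forward chaining over if-then rules plays the role of deliberate symbolic reasoning (System 2).

```python
# Illustrative sketch only: a minimal "System 1 / System 2" pipeline.

def perceive(image_id):
    """System 1 stand-in: map raw input to a symbol. A real system
    would use a trained neural classifier here; this lookup table is
    a hypothetical placeholder."""
    fake_classifier = {"img1": "cat", "img2": "dog"}
    return fake_classifier[image_id]

def reason(facts, rules):
    """System 2 stand-in: forward-chain over if-then rules until no
    new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [("cat", "mammal"), ("mammal", "animal")]  # toy symbolic knowledge
facts = {perceive("img1")}           # neural output becomes a symbolic fact
print(sorted(reason(facts, rules)))  # ['animal', 'cat', 'mammal']
```

The point of the sketch is the interface: the perceptual component emits discrete symbols, and everything downstream is ordinary symbolic inference that can be inspected and extended with advice.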

Approaches

Approaches for integration are diverse. Henry Kautz's taxonomy of neuro-symbolic architectures,[10] along with some examples, follows:

  • Symbolic Neural symbolic is the approach of many current neural models in natural language processing, where words or subword tokens are both the input and the output of large language models.
  • Symbolic[Neural] is exemplified by AlphaGo, where symbolic techniques (Monte Carlo tree search) invoke neural techniques (a learned evaluation function).
  • Neural | Symbolic uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically, as in the Neuro-Symbolic Concept Learner.[11]
  • Neural: Symbolic → Neural relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model.
  • Neural_{Symbolic} uses a neural network that is generated from symbolic rules, as in the Neural Theorem Prover.
  • Neural[Symbolic] allows a neural model to directly call a symbolic reasoning engine, e.g., to perform an action or evaluate a state.

These categories are not exhaustive, as they do not consider multi-agent systems. In 2005, Bader and Hitzler presented a more fine-grained categorization that considered, e.g., whether the use of symbols included logic and if it did, whether the logic was propositional or first-order logic.[14] The 2005 categorization and Kautz's taxonomy above are compared and contrasted in a 2021 article.[10] Recently, Sepp Hochreiter argued that Graph Neural Networks "...are the predominant models of neural-symbolic computing"[15] since "[t]hey describe the properties of molecules, simulate social networks, or predict future states in physical and engineering applications with particle-particle interactions."[16]
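One recurring integration pattern in Kautz's taxonomy is a symbolic algorithm that invokes a neural model as a subroutine, as when AlphaGo's tree search queries a learned value network. The sketch below illustrates only the pattern, with a hypothetical hand-coded scoring function standing in for a trained network and best-first search standing in for a full game-tree search.

```python
# Illustrative sketch only: a symbolic search loop calling a "neural"
# evaluator as a subroutine (the Symbolic[Neural] pattern).

import heapq

# A tiny hypothetical state graph.
GRAPH = {"start": ["a", "b"], "a": ["goal"], "b": [], "goal": []}

def value_net(state):
    """Stand-in for a learned evaluation function; these scores are
    hand-picked, not the output of any real model."""
    scores = {"start": 0.1, "a": 0.8, "b": 0.2, "goal": 1.0}
    return scores[state]

def best_first_search(start, goal):
    """Symbolic control: expand the frontier node the evaluator rates
    most promising (higher score first)."""
    frontier = [(-value_net(start), start, [start])]
    visited = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in GRAPH[state]:
            heapq.heappush(frontier, (-value_net(nxt), nxt, path + [nxt]))
    return None

print(best_first_search("start", "goal"))  # ['start', 'a', 'goal']
```

The search logic is fully symbolic and auditable; only the ranking of candidate states is delegated to the learned component.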

Artificial general intelligence

Gary Marcus argues that "...hybrid architectures that combine learning and symbol manipulation are necessary for robust intelligence, but not sufficient",[17] and that there are

...four cognitive prerequisites for building robust artificial intelligence:

  • hybrid architectures that combine large-scale learning with the representational and computational powers of symbol manipulation,
  • large-scale knowledge bases—likely leveraging innate frameworks—that incorporate symbolic knowledge along with other forms of knowledge,
  • reasoning mechanisms capable of leveraging those knowledge bases in tractable ways, and
  • rich cognitive models that work together with those mechanisms and knowledge bases.[18]

This echoes earlier calls for hybrid models as early as the 1990s.[19][20]

History

Garcez and Lamb described research in this area as ongoing at least since the 1990s.[21][22]

A series of workshops on neuro-symbolic AI has been held annually since 2005.[23] An initial set of workshops on this topic was organized in the early 1990s.[19]

Research

Key research questions remain,[24] such as:

  • What is the best way to integrate neural and symbolic architectures?
  • How should symbolic structures be represented within neural networks and extracted from them?
  • How should common-sense knowledge be learned and reasoned about?
  • How can abstract knowledge that is hard to encode logically be handled?
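The extraction question above can be made concrete on a deliberately tiny case. In the sketch below, a single thresholded unit has weights fixed by hand (standing in for learned ones), and a propositional rule is read off by enumerating its Boolean inputs; real rule-extraction methods must of course scale far beyond exhaustive enumeration.

```python
# Illustrative sketch only: reading a symbolic rule out of a tiny
# thresholded unit by enumerating its inputs.

from itertools import product

def unit(x1, x2, w=(0.6, 0.6), bias=-1.0):
    """A single thresholded unit. With these hand-picked weights it
    fires only when both inputs are on (logical AND)."""
    return int(w[0] * x1 + w[1] * x2 + bias >= 0)

def extract_rule():
    """Enumerate all Boolean inputs and keep the assignments that
    activate the unit, i.e., the terms of a DNF rule."""
    return [(x1, x2) for x1, x2 in product([0, 1], repeat=2) if unit(x1, x2)]

print(extract_rule())  # [(1, 1)]  -> the unit fires iff x1 AND x2
```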

Implementations

Implementations of neuro-symbolic approaches include:

  • Scallop: a language based on Datalog that supports differentiable logical and relational reasoning. Scallop can be integrated in Python and with a PyTorch learning module.[25]
  • Logic Tensor Networks: encode logical formulas as neural networks and simultaneously learn term encodings, term weights, and formula weights.
  • DeepProbLog: combines neural networks with the probabilistic reasoning of ProbLog.
  • SymbolicAI: a compositional differentiable programming library.
  • Explainable Neural Networks (XNNs): combine neural networks with symbolic hypergraphs and are trained using a mixture of backpropagation and symbolic learning called induction.[26]
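The flavor of these systems can be sketched without the libraries themselves. The toy below, in the style of DeepProbLog's well-known digit-addition example, marginalizes two hypothetical classifier distributions (fixed arrays, not real model outputs) through the symbolic rule sum = d1 + d2; in DeepProbLog proper, gradients would also flow back through this computation into the classifiers.

```python
# Illustrative sketch only: combining "neural" probabilities with a
# symbolic rule by exact marginalization, DeepProbLog-style.

def prob_sum(p1, p2):
    """P(sum = k) = sum over i of P(d1 = i) * P(d2 = k - i)."""
    out = [0.0] * (len(p1) + len(p2) - 1)
    for i, a in enumerate(p1):
        for j, b in enumerate(p2):
            out[i + j] += a * b
    return out

# Hypothetical softmax outputs over digits 0-2 (not real model output).
p1 = [0.1, 0.8, 0.1]   # most likely digit: 1
p2 = [0.2, 0.1, 0.7]   # most likely digit: 2

dist = prob_sum(p1, p2)
print(max(range(len(dist)), key=dist.__getitem__))  # most probable sum: 3
```

The symbolic rule here is trivial, but the same pattern (neural modules producing distributions over symbols, with probabilistic logic aggregating them) is what systems like DeepProbLog generalize to arbitrary programs.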

Citations

  1. ^ Valiant 2008.
  2. ^ Garcez et al. 2015.
  3. ISSN 1611-2482.
  4. ^ Marcus 2020, p. 44.
  5. ^ Marcus & Davis 2019, p. 17.
  6. ^ Kautz 2020.
  7. ^ Rossi 2022.
  8. ^ Selman 2022.
  9. ^ Sun 1995.
  10. ^ S2CID 239199144.
  11. ^ Mao et al. 2019.
  12. . Retrieved 2022-08-06.
  13. ].
  14. ^ Bader & Hitzler 2005.
  15. ^ Lamb, L.C.; d'Avila Garcez, A.S.; Gori, M.; Prates, M.O.R.; Avelar, P.H.C.; Vardi, M.Y. (2020). "Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective". CoRR abs/2003.00330.
  16. ISSN 0001-0782.
  17. ^ Marcus 2020, p. 50.
  18. ^ Marcus 2020, p. 48.
  19. ^ a b Sun & Bookman 1994.
  20. ^ Honavar 1995.
  21. ^ Garcez & Lamb 2020, p. 2.
  22. ^ Garcez et al. 2002.
  23. ^ "Neuro-Symbolic Artificial Intelligence". people.cs.ksu.edu. Retrieved 2023-09-11.
  24. ^ Sun 2001.
  25. ].
  26. ^ "Model Induction Method for Explainable AI". USPTO. 2021-05-06.

See also

  • Symbolic AI
  • Connectionist AI
  • Hybrid intelligent systems
