Neuro-Symbolic Integration Techniques for Scalable Reasoning in Next-Generation Intelligent Systems

Authors

  • Alexandre Florian, Intelligent Systems Architect, France
  • Thibault Jordan, Scalable Reasoning Specialist, Germany

Keywords

Neuro-symbolic systems, scalable reasoning, symbolic AI, neural networks, interpretability, cognitive architectures, knowledge representation, integration

Abstract

As intelligent systems evolve to handle increasingly complex reasoning tasks, the integration of symbolic and sub-symbolic (neural) methods has emerged as a critical paradigm. Neuro-symbolic systems aim to combine the robust generalization capabilities of neural networks with the interpretability, compositionality, and logic-based reasoning of symbolic systems. This paper explores the latest techniques in neuro-symbolic integration, emphasizing their applicability to scalable reasoning in next-generation AI systems. We examine current architectures, propose future integration strategies, and evaluate open challenges and performance trade-offs through comparative analyses. The study also outlines key innovations facilitating scalable inference across diverse cognitive tasks, and discusses the significance of neuro-symbolic systems in ensuring transparency, efficiency, and generalization in AI.
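The combination the abstract describes can be made concrete with a small sketch in the spirit of Logic Tensor Networks (Serafini & Garcez, 2016): neural predicates output truth degrees in [0, 1], and logical connectives are replaced by differentiable real-valued operators so that symbolic rules can constrain learning. The predicate names, scores, and operator choices below are illustrative assumptions, not taken from this paper.

```python
# Soft-logic evaluation of a symbolic rule over neural predicate outputs.
# Truth values live in [0, 1]; connectives are smooth, so rule satisfaction
# could serve as a differentiable training signal.

def t_and(a, b):
    """Product t-norm: differentiable conjunction of truth degrees."""
    return a * b

def t_implies(a, b):
    """Reichenbach implication 1 - a + a*b, smooth in both arguments."""
    return 1.0 - a + a * b

# Hypothetical neural predicate outputs: degree to which each ground atom holds.
smokes = {"anna": 0.9, "bob": 0.2}
cancer = {"anna": 0.7, "bob": 0.1}

# Satisfaction of the rule  smokes(x) -> cancer(x),  universally quantified by
# aggregating over all individuals with the product t-norm.
rule_sat = 1.0
for x in smokes:
    rule_sat = t_and(rule_sat, t_implies(smokes[x], cancer[x]))

print(f"rule satisfaction: {rule_sat:.4f}")
```

In a full system the dictionaries would be replaced by neural network outputs, and maximizing `rule_sat` alongside a data-fit loss is one way symbolic knowledge regularizes sub-symbolic learning.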


Published

2025-03-25

How to Cite

Neuro-Symbolic Integration Techniques for Scalable Reasoning in Next-Generation Intelligent Systems. (2025). International Journal of Computing Science and Systems (IJCSS), 6(1), 1-6. https://ijcss.com/index.php/about/article/view/IJCSS_06012025