Integrating Symbolic Reasoning into Neural Networks: A Neuro-Symbolic Logic Programming Approach for Enhanced Explainability
DOI: https://doi.org/10.71465/fair602
Keywords: Neuro-Symbolic AI, Differentiable Logic, Explainable AI, Deep Learning Integration
Abstract
The dichotomy between sub-symbolic connectionist approaches and symbolic logic-based systems constitutes a fundamental divide in the history of artificial intelligence. While deep neural networks have achieved unprecedented success in perceptual tasks such as image recognition and natural language processing, they continue to suffer from a lack of interpretability and a tendency to fail in scenarios requiring rigorous, logically consistent reasoning. Conversely, symbolic systems offer high explainability and verifiable reasoning chains but struggle with the noise and ambiguity inherent in real-world sensory data. This paper proposes a unified Neuro-Symbolic Logic Programming framework that integrates differentiable logic layers within deep neural architectures. By mapping logical predicates to continuous real-valued tensors and relaxing Boolean operators into differentiable functions, we enable end-to-end training of systems that possess both the learning capability of neural networks and the reasoning structure of logic programming. Our experimental results demonstrate that this hybrid approach not only matches state-of-the-art performance on complex reasoning tasks but also significantly outperforms baseline models in explainability and data efficiency. The framework allows explicit logical rules to be extracted from trained networks, providing a window into the model's decision-making process and bridging the gap between data-driven learning and knowledge-based reasoning.
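The relaxation of Boolean operators into differentiable functions mentioned in the abstract can be sketched as follows. This is a minimal illustration using the product t-norm and its dual t-conorm, a common choice in differentiable logic; it is not the paper's actual layer implementation, and all function names here are hypothetical:

```python
# Soft (differentiable) relaxations of Boolean operators over
# truth values in [0, 1], using the product t-norm family.
# Hypothetical sketch -- not the framework's actual implementation.

def soft_not(a: float) -> float:
    """Differentiable negation: NOT a = 1 - a."""
    return 1.0 - a

def soft_and(a: float, b: float) -> float:
    """Product t-norm: a AND b = a * b."""
    return a * b

def soft_or(a: float, b: float) -> float:
    """Probabilistic sum (dual t-conorm): a OR b = a + b - a*b."""
    return a + b - a * b

def soft_implies(a: float, b: float) -> float:
    """Implication relaxed as (NOT a) OR (a AND b)."""
    return soft_or(soft_not(a), soft_and(a, b))

# At the Boolean corners {0, 1} these recover classical truth tables,
# while intermediate values yield nonzero gradients, which is what
# makes end-to-end training through logic layers possible.
print(soft_and(1.0, 1.0))      # 1.0
print(soft_or(0.0, 0.0))       # 0.0
print(soft_not(0.0))           # 1.0
print(soft_implies(0.3, 0.9))  # 0.781
```

In a full neuro-symbolic layer, the scalar truth values above would be entries of real-valued tensors representing grounded predicates, so the same operators apply elementwise and remain differentiable under backpropagation.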
License
Copyright (c) 2026 Arthur Hamilton, Claire Vance, Eleanor P. Wright (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.