November 13, 2025
Report
Neuro-Symbolic Approaches as a Vector for Assured Artificial Intelligence
Abstract
The deployment of artificial intelligence systems in critical applications requires higher levels of assurance for safety, security, and interpretability. While neurosymbolic (NeSy) approaches combining neural networks with symbolic reasoning offer potential advantages for assured AI, existing differentiable neurosymbolic frameworks face significant limitations, including computational overhead and performance constraints. This report investigates the ISED (Infer-Sample-Estimate-Descend) framework as an alternative approach that enables neurosymbolic learning without requiring end-to-end differentiability. We evaluate ISED's utility for geointelligence applications by comparing neurosymbolic models against standard neural networks on aircraft classification tasks using the RarePlanes and MTARSI imagery datasets. Our results demonstrate that while ISED-based models achieve slightly lower accuracy (89.7% vs. 92.1% on RarePlanes; 91.1% vs. 92.5% on MTARSI), they provide critical explainability capabilities that enable tracing incorrect predictions back to specific attribute misclassifications. We also present an automated pipeline that generates both attribute-class mappings and neurosymbolic model architectures from natural language descriptions, significantly reducing the manual effort required for NeSy model deployment. These findings suggest that ISED offers a promising direction for developing assured AI systems where interpretability and reasoning transparency are prioritized alongside performance.