Our paper "Natural Language Counterfactual Explanations for Graphs Using Large Language Models" has been accepted at AISTATS 2025!
Interpreting the predictions of Graph Neural Networks (GNNs) remains a challenging task. Counterfactual explanations provide an effective way to answer "what-if" questions, showing how small changes to the input can lead to a different model outcome. However, traditional counterfactual methods often produce highly technical explanations that are inaccessible to non-expert users.
💡 What does our work propose?
We introduce a novel approach that leverages open-source Large Language Models (LLMs) to transform counterfactual explanations for GNNs into natural language descriptions. This allows for:
- More human-readable and intuitive explanations
- Improved accessibility for non-expert users
- Better transparency in critical applications, such as fraud detection, financial decision-making, and healthcare
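To make the idea concrete, here is a minimal, hypothetical sketch (not the code from the paper) of how a counterfactual explanation produced by a GNN explainer, expressed as edge edits, might be serialized into a prompt for an open-source LLM. The function names, labels, and prompt wording below are illustrative assumptions, and the LLM call is left as a placeholder.

```python
# Hypothetical sketch: turn a GNN counterfactual explanation (the edge edits
# that flip the model's prediction) into a plain-language description via an
# instruction-tuned open-source LLM. Names and prompt wording are illustrative.

from typing import List, Tuple

Edge = Tuple[str, str]

def build_prompt(removed: List[Edge], added: List[Edge],
                 original_label: str, counterfactual_label: str) -> str:
    """Serialize the counterfactual edits into an LLM prompt."""
    lines = [
        "You are explaining a graph model's decision to a non-expert.",
        f"The model originally predicted: {original_label}.",
        f"After the changes below, the prediction becomes: {counterfactual_label}.",
        "Changes to the graph:",
    ]
    lines += [f"- removed edge between {u} and {v}" for u, v in removed]
    lines += [f"- added edge between {u} and {v}" for u, v in added]
    lines.append("Write a short, plain-language explanation of why the prediction changed.")
    return "\n".join(lines)

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an open-source LLM (e.g. a local inference endpoint)."""
    return "(LLM-generated natural language explanation would appear here)"

if __name__ == "__main__":
    prompt = build_prompt(
        removed=[("account_12", "merchant_7")],
        added=[],
        original_label="fraudulent transaction",
        counterfactual_label="legitimate transaction",
    )
    print(query_llm(prompt))
```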
📊 We evaluated our method with state-of-the-art counterfactual explainers on multiple graph datasets, demonstrating its effectiveness through both novel evaluation metrics and human assessments. We believe this work is a step toward making AI explanations more interpretable and actionable! 🚀
🔗 The preprint is available at the following link: