Two Papers Accepted at the XAI 2026 Conference

Mar 6, 2026 · Vittoria Vineis, Matteo Silvestri, Lorenzo Antonelli, Filippo Betello, Giuseppe Perelli, Fabrizio Silvestri, Gabriele Tolomei · 2 min read

We are delighted to share that two papers from HERCOLE Lab have been accepted at the 4th World Conference on eXplainable Artificial Intelligence (XAI 2026). This recognition reflects our lab’s ongoing work to advance transparency, trust, and human-centered AI.

1. PONTE: Personalized Orchestration for Natural Language Trustworthy Explanations

Authors: Vittoria Vineis, Matteo Silvestri, Lorenzo Antonelli, Filippo Betello, and Gabriele Tolomei

This work introduces PONTE, a human-in-the-loop framework for generating personalized and trustworthy natural-language explanations for AI systems. Instead of relying on static prompts, PONTE models personalization as a closed-loop process, combining preference-aware generation with verification modules that enforce faithfulness, completeness, and stylistic alignment. Experiments and human evaluations show that this verification-refinement loop significantly improves the quality and reliability of AI-generated explanations across domains such as healthcare and finance.

Pre-print: https://arxiv.org/abs/2603.06485
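
To make the closed-loop idea concrete, here is a minimal sketch of a generate-verify-refine cycle. Everything below (the function names, the three toy verifiers, and the feedback format) is hypothetical and only illustrates the control flow described above; it is not PONTE’s actual interface or implementation.

```python
# Toy generate-verify-refine loop; all names and checks are illustrative.
from dataclasses import dataclass

@dataclass
class Verdict:
    passed: bool
    feedback: str

def generate(prediction: str, profile: dict, feedback: list[str]) -> str:
    """Placeholder for a preference-aware generator (e.g., an LLM call)."""
    style = profile.get("style", "concise")
    notes = "; ".join(feedback) if feedback else "none"
    return f"[{style} explanation of '{prediction}' | revisions: {notes}]"

def verify(explanation: str, prediction: str, profile: dict) -> list[Verdict]:
    """Placeholder verifiers for faithfulness, completeness, and style."""
    return [
        Verdict(prediction in explanation, "restate the model's prediction"),
        Verdict(len(explanation) > 20, "cover all key factors"),
        Verdict(profile.get("style", "") in explanation, "match the requested style"),
    ]

def explain(prediction: str, profile: dict, max_rounds: int = 3) -> str:
    """Regenerate until every verifier passes or the budget runs out."""
    feedback: list[str] = []
    for _ in range(max_rounds):
        explanation = generate(prediction, profile, feedback)
        verdicts = verify(explanation, prediction, profile)
        if all(v.passed for v in verdicts):
            return explanation
        feedback = [v.feedback for v in verdicts if not v.passed]
    return explanation  # best effort after exhausting the budget

print(explain("loan approved", {"style": "concise"}))
```

The key design point the sketch tries to capture is that failed checks are fed back into the next generation round, rather than the prompt being fixed once up front.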

2. Demystifying Sequential Recommendations: Counterfactual Explanations via Genetic Algorithms

Authors: Filippo Betello, Domiziano Scarcelli, Giuseppe Perelli, Fabrizio Silvestri, and Gabriele Tolomei

This paper proposes the first counterfactual explanation framework for Sequential Recommender Systems (SRSs). By leveraging a genetic algorithm specialized for discrete sequences, the method answers the question: “What minimal changes to a user’s interaction history would lead to different recommendations?” The work also shows that generating such explanations is NP-complete, and demonstrates through extensive experiments that meaningful counterfactual explanations can be generated while preserving model fidelity.

Pre-print: https://arxiv.org/abs/2508.03606
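
For a flavor of how a genetic algorithm can search the discrete space of interaction histories, here is a toy sketch. The stand-in recommender, the fitness function, and all hyperparameters below are assumptions made for illustration; they are not the paper’s actual method or settings.

```python
# Toy GA searching for a minimally edited history that flips the recommendation.
import random

ITEMS = list(range(100))  # toy item catalogue

def recommend(seq: list[int]) -> int:
    """Stand-in black-box SRS: recommends the most frequent recent item."""
    recent = seq[-5:]
    return max(set(recent), key=recent.count)

def edits(a: list[int], b: list[int]) -> int:
    """Number of positions where the candidate differs from the original."""
    return sum(x != y for x, y in zip(a, b))

def fitness(cand: list[int], original: list[int], target: int) -> float:
    """Reward flipping the recommendation with as few edits as possible."""
    changed = recommend(cand) != target
    return (1.0 if changed else 0.0) - 0.01 * edits(cand, original)

def mutate(seq: list[int], rate: float = 0.2) -> list[int]:
    return [random.choice(ITEMS) if random.random() < rate else x for x in seq]

def crossover(a: list[int], b: list[int]) -> list[int]:
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def counterfactual(history: list[int], pop_size: int = 50, gens: int = 100):
    target = recommend(history)
    pop = [mutate(history) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, history, target), reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]
    best = max(pop, key=lambda c: fitness(c, history, target))
    return best if recommend(best) != target else None

history = [random.choice(ITEMS) for _ in range(20)]
print(counterfactual(history))
```

Because the search space of edited histories is discrete and combinatorial (which is also why the problem is NP-complete), a population-based heuristic like this trades exactness for tractability while keeping edits small.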

Congratulations to all the authors on this excellent work and their contribution to the field of Explainable AI.

Authors

Vittoria Vineis, PhD Student in Data Science
Matteo Silvestri, PhD Student in Computer Science
Lorenzo Antonelli, PhD Student in Data Science
Filippo Betello, PhD Student in Data Science
Giuseppe Perelli, Associate Professor of Computer Science
Fabrizio Silvestri, Full Professor of Computer Science
Gabriele Tolomei, Associate Professor of Computer Science