Towards a Framework for Evaluating Explanations in Automated Fact Verification

Neema Kotonya, Francesca Toni
Abstract
As deep neural models in NLP become more complex, and consequently more opaque, the need to interpret them grows. A burgeoning interest has emerged in rationalizing explanations, which provide short and coherent justifications for predictions. In this position paper, we advocate for a formal framework capturing key concepts and properties of rationalizing explanations, to support their systematic evaluation. We also outline one such formal framework, tailored to rationalizing explanations of increasingly complex structure: from free-form explanations, to deductive explanations, to argumentative explanations (with the richest structure). Focusing on the automated fact verification task, we illustrate the use and usefulness of our formalization for evaluating explanations, tailored to their varying structures.
Anthology ID:
2024.lrec-main.1422
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
16364–16377
URL:
https://aclanthology.org/2024.lrec-main.1422
Cite (ACL):
Neema Kotonya and Francesca Toni. 2024. Towards a Framework for Evaluating Explanations in Automated Fact Verification. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16364–16377, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Towards a Framework for Evaluating Explanations in Automated Fact Verification (Kotonya & Toni, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1422.pdf