Schroedinger’s Threshold: When the AUC Doesn’t Predict Accuracy

Juri Opitz


Abstract
The Area Under Curve measure (AUC) seems apt for evaluating and comparing diverse models, possibly without calibration. An important application of the AUC is the evaluation and benchmarking of models that predict the faithfulness of generated text. But we show that the AUC yields an academic and optimistic notion of accuracy that can misalign with the accuracy actually observed in application, leading to significant changes in benchmark rankings. To paint a more realistic picture of downstream model performance (and prepare models for actual application), we explore different calibration modes, testing calibration data and method.
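The abstract's core point, that a high AUC need not translate into high accuracy at a fixed decision threshold, can be illustrated with a minimal sketch. The data below is invented for illustration: an uncalibrated faithfulness metric ranks all examples perfectly (AUC = 1.0), yet every score falls below the default 0.5 threshold, so thresholded accuracy is poor until the threshold is calibrated.

```python
# Hypothetical scores from an uncalibrated faithfulness metric (illustrative data).
# Labels: 1 = faithful, 0 = unfaithful.
labels = [0, 0, 0, 1, 1, 1]
scores = [0.10, 0.15, 0.20, 0.30, 0.35, 0.40]  # perfect ranking, but all < 0.5

def auc(labels, scores):
    """ROC-AUC via the Mann-Whitney formulation: the probability that a
    randomly drawn positive outranks a randomly drawn negative (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold):
    """Accuracy of the binary decision obtained by thresholding the scores."""
    preds = [int(s >= threshold) for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

print(auc(labels, scores))             # 1.0: the ranking is perfect
print(accuracy(labels, scores, 0.5))   # 0.5: the default threshold misfires
print(accuracy(labels, scores, 0.25))  # 1.0: a calibrated threshold recovers it
```

The AUC is invariant to any monotone rescaling of the scores, while thresholded accuracy is not; this gap is exactly what threshold calibration on held-out data addresses.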
Anthology ID:
2024.lrec-main.1255
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
14400–14406
URL:
https://aclanthology.org/2024.lrec-main.1255
Cite (ACL):
Juri Opitz. 2024. Schroedinger’s Threshold: When the AUC Doesn’t Predict Accuracy. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 14400–14406, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Schroedinger’s Threshold: When the AUC Doesn’t Predict Accuracy (Opitz, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1255.pdf