Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation

Sarah E. Finch, James D. Finch, Jinho D. Choi


Abstract
Human evaluation has been widely accepted as the standard for evaluating chat-oriented dialogue systems. However, there is significant variation in previous work regarding who gets recruited as evaluators. Evaluator groups such as domain experts, university students, and crowdworkers have been used to assess and compare dialogue systems, yet it is unclear to what extent the choice of evaluator group affects results. This paper analyzes the impact of evaluator group on dialogue system evaluation by testing 4 state-of-the-art dialogue systems with 4 distinct evaluator groups. Our analysis reveals that Likert evaluations are robust to the choice of evaluator group, with only minor differences observed across groups, whereas Pairwise evaluations are not. Furthermore, two notable limitations to this robustness are observed: they reveal discrepancies between evaluators with different levels of chatbot expertise and indicate that evaluator objectivity is beneficial for certain dialogue metrics.
Anthology ID:
2024.lrec-main.610
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
6966–6973
URL:
https://aclanthology.org/2024.lrec-main.610
Cite (ACL):
Sarah E. Finch, James D. Finch, and Jinho D. Choi. 2024. Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6966–6973, Torino, Italia. ELRA and ICCL.
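Cite (BibTeX):
The following entry is assembled from the metadata on this page. Since no Bibkey is listed here, the citation key is an assumption that follows the Anthology's usual author-year-firstword convention.
% citation key below is an assumption, not confirmed on this page
@inproceedings{finch-etal-2024-exploring,
    title = "Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation",
    author = "Finch, Sarah E. and Finch, James D. and Choi, Jinho D.",
    editor = "Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.610",
    pages = "6966--6973",
}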
Cite (Informal):
Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation (Finch et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.610.pdf