Clarifying Completions: Evaluating How LLMs Respond to Incomplete Questions

Angus Addlesee, Oliver Lemon, Arash Eshghi
Abstract
People understand and produce language incrementally, on a word-by-word basis. This gives rise to many characteristic conversational phenomena, including long mid-sentence pauses that are followed by incremental clarification requests (iCRs) intended to recover the rest of the truncated turn (see Fig. 1; (A), (B), (C)). The ability to generate iCRs is important in natural conversational AI systems, and crucial to their accessibility for users with memory impairment. In this paper, we collect, release, and analyse SLUICE-CR: a large corpus of 3,000 human-produced iCRs. We then use this corpus to probe the incremental processing capability of a number of state-of-the-art LLMs by evaluating the quality of the models' generated iCRs in response to incomplete questions. Our evaluations show that the ability to generate contextually appropriate iCRs only emerges at larger LLM sizes, and only when prompted with example iCRs from our corpus. They also indicate that autoregressive LMs are, in principle, able to both understand and generate language incrementally.
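The abstract notes that appropriate iCRs only emerge when the model is prompted with example iCRs from the corpus. The paper's own prompts and corpus entries are not reproduced here; the sketch below is a minimal illustration of that few-shot setup, where the example truncated turns, the iCRs paired with them, and the helper names are all hypothetical stand-ins rather than the authors' released code or data.

```python
# Hypothetical (truncated turn, iCR) pairs standing in for SLUICE-CR entries.
EXAMPLE_ICRS = [
    ("What time does the...", "What time does the what? The pharmacy?"),
    ("Can you remind me to take my...", "To take your what? Your medication?"),
]

def build_icr_prompt(incomplete_question: str) -> str:
    """Assemble a few-shot prompt asking an LLM to produce an iCR
    that tries to recover the rest of a truncated turn."""
    lines = [
        "A speaker pauses mid-sentence. Ask a clarification question",
        "that attempts to recover the rest of the truncated turn.",
        "",
    ]
    # Prepend the in-context examples, one speaker/response pair each.
    for truncated, icr in EXAMPLE_ICRS:
        lines.append(f"Speaker: {truncated}")
        lines.append(f"You: {icr}")
        lines.append("")
    # End with the new incomplete question for the model to clarify.
    lines.append(f"Speaker: {incomplete_question}")
    lines.append("You:")
    return "\n".join(lines)

print(build_icr_prompt("Could you tell me where the..."))
```

The assembled string would then be sent to whichever LLM is being probed; the evaluation itself concerns the quality of the clarification the model generates in place of the final "You:" turn.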
Anthology ID:
2024.lrec-main.288
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
3242–3249
URL:
https://aclanthology.org/2024.lrec-main.288
Cite (ACL):
Angus Addlesee, Oliver Lemon, and Arash Eshghi. 2024. Clarifying Completions: Evaluating How LLMs Respond to Incomplete Questions. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 3242–3249, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Clarifying Completions: Evaluating How LLMs Respond to Incomplete Questions (Addlesee et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.288.pdf