Dive into the Chasm: Probing the Gap between In- and Cross-Topic Generalization

Andreas Waldis, Yufang Hou, Iryna Gurevych


Abstract
Pre-trained language models (PLMs) perform well in In-Topic setups, where training and testing data come from the same topics. However, they face challenges in Cross-Topic scenarios, where testing data comes from topics not seen during training. This paper analyzes various PLMs with three probing-based experiments to better understand the reasons behind such generalization gaps. For the first time, we demonstrate that the extent of these generalization gaps and the sensitivity to token-level interventions vary significantly across PLMs. By evaluating large language models (LLMs), we show that our analysis is also useful for these recent models. Overall, we observe that diverse pre-training objectives and architectural regularization contribute to more robust PLMs and help mitigate generalization gaps. Our research contributes to a deeper understanding and comparison of language models across different generalization scenarios.
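For readers unfamiliar with the probing setup summarized above, the sketch below illustrates one common way to probe frozen PLM representations and measure a Cross-Topic gap: a linear classifier is trained on mean-pooled embeddings, with whole topics held out for testing. The model name, example data, and split logic are placeholder assumptions for illustration, not the paper's experimental code.

```python
# Illustrative sketch only: a linear probe over frozen PLM embeddings,
# evaluated on a Cross-Topic split (entire topics held out for testing).
# An In-Topic split would instead sample test examples from the same topics.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

def embed(texts, model_name="bert-base-uncased", batch_size=16):
    """Mean-pooled last-layer embeddings from a frozen PLM."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()
    vecs = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            enc = tok(texts[i:i + batch_size], padding=True,
                      truncation=True, return_tensors="pt")
            hidden = model(**enc).last_hidden_state        # (B, T, H)
            mask = enc["attention_mask"].unsqueeze(-1)     # (B, T, 1)
            pooled = (hidden * mask).sum(1) / mask.sum(1)  # mean over real tokens
            vecs.append(pooled.cpu().numpy())
    return np.concatenate(vecs)

def probe_accuracy(train_x, train_y, test_x, test_y):
    """Train a linear probe on frozen features and report test accuracy."""
    clf = LogisticRegression(max_iter=1000).fit(train_x, train_y)
    return clf.score(test_x, test_y)

# Hypothetical (text, label, topic) triples standing in for a stance dataset.
data = [
    ("School uniforms reduce bullying.", 1, "uniforms"),
    ("Uniforms suppress self-expression.", 0, "uniforms"),
    ("Nuclear power is low-carbon.", 1, "nuclear"),
    ("Nuclear waste disposal is unsolved.", 0, "nuclear"),
]
texts, labels, topics = zip(*data)
X, y = embed(list(texts)), np.array(labels)

# Cross-Topic split: every example from one topic is held out for testing.
held_out = np.array([t == "nuclear" for t in topics])
acc = probe_accuracy(X[~held_out], y[~held_out], X[held_out], y[held_out])
print(f"Cross-Topic probe accuracy: {acc:.2f}")
```

Comparing this score against the same probe under a random (In-Topic) split gives a simple estimate of the generalization gap discussed in the abstract.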
Anthology ID:
2024.findings-eacl.146
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2197–2214
URL:
https://aclanthology.org/2024.findings-eacl.146
Cite (ACL):
Andreas Waldis, Yufang Hou, and Iryna Gurevych. 2024. Dive into the Chasm: Probing the Gap between In- and Cross-Topic Generalization. In Findings of the Association for Computational Linguistics: EACL 2024, pages 2197–2214, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Dive into the Chasm: Probing the Gap between In- and Cross-Topic Generalization (Waldis et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.146.pdf
Video:
https://aclanthology.org/2024.findings-eacl.146.mp4