Detecting Conceptual Abstraction in LLMs

Michaela Regneri, Alhassan Abdelhalim, Soeren Laue


Abstract
We present a novel approach to detecting noun abstraction within a large language model (LLM). Starting from a psychologically motivated set of noun pairs in taxonomic relationships, we instantiate surface patterns indicating hypernymy and analyze the attention matrices produced by BERT. We compare the results to two sets of counterfactuals and show that hypernymy can be detected in the abstraction mechanism, an effect that cannot be attributed solely to the distributional similarity of the noun pairs. Our findings are a first step towards the explainability of conceptual abstraction in LLMs.
Anthology ID:
2024.lrec-main.420
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
4697–4704
URL:
https://aclanthology.org/2024.lrec-main.420
Cite (ACL):
Michaela Regneri, Alhassan Abdelhalim, and Soeren Laue. 2024. Detecting Conceptual Abstraction in LLMs. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 4697–4704, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Detecting Conceptual Abstraction in LLMs (Regneri et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.420.pdf
Optional supplementary material:
2024.lrec-main.420.OptionalSupplementaryMaterial.zip