Using Pre-Trained Language Models in an End-to-End Pipeline for Antithesis Detection

Ramona Kühn, Khouloud Saadi, Jelena Mitrović, Michael Granitzer


Abstract
Rhetorical figures play an important role in influencing readers and listeners. Some of these constructs, which deviate from ordinary language structure, are known to be persuasive; antithesis is one of them. This figure combines parallel phrases with opposite ideas or words to highlight a contradiction. Detecting antithesis therefore helps to identify persuasive actors. For this task, we create an annotated German dataset for antithesis detection, consisting of posts from a Telegram channel criticizing Germany's COVID-19 policies. Furthermore, we propose a three-block pipeline for detecting antithesis with pre-trained language models. Our pipeline splits the text into phrases, identifies phrase pairs with a syntactically parallel structure, and decides whether these parallel pairs express opposing ideas by fine-tuning German ELECTRA, a state-of-the-art deep learning model for the German language. We also compare the results with multilingual BERT and German BERT. Our novel approach outperforms the previous state of the art for antithesis detection (F1-score of 50.43%), achieving an F1-score of 65.11%.
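The abstract outlines the three pipeline blocks without code. Below is a minimal Python sketch, under stated assumptions, of how such a pipeline could be wired together with spaCy and Hugging Face transformers: the fine-tuned antithesis classifier checkpoint is not named in the source, so the sketch loads the public German ELECTRA base model "deepset/gelectra-base" as a placeholder, and the POS-sequence match is a simplified stand-in for the paper's parallelism detection, not the authors' actual method.

```python
# Hypothetical sketch of the three-block antithesis pipeline described in the
# abstract. The classifier would need to be fine-tuned on the paper's dataset;
# here the untuned "deepset/gelectra-base" checkpoint stands in for it.
import itertools
import re

import spacy
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

nlp = spacy.load("de_core_news_sm")  # German spaCy model for POS tagging


def split_into_phrases(text: str) -> list[str]:
    """Block 1: split text into candidate phrases at clause boundaries."""
    return [p.strip() for p in re.split(r"[.!?;,]", text) if p.strip()]


def pos_signature(phrase: str) -> tuple[str, ...]:
    """Coarse POS sequence used as a proxy for syntactic structure."""
    return tuple(tok.pos_ for tok in nlp(phrase))


def parallel_pairs(phrases: list[str]) -> list[tuple[str, str]]:
    """Block 2: keep phrase pairs whose POS sequences match (parallelism)."""
    return [
        (a, b)
        for a, b in itertools.combinations(phrases, 2)
        if pos_signature(a) == pos_signature(b)
    ]


# Block 3: a German ELECTRA sequence classifier (fine-tuned in the paper)
# decides whether a parallel pair expresses opposing ideas.
tokenizer = AutoTokenizer.from_pretrained("deepset/gelectra-base")
model = AutoModelForSequenceClassification.from_pretrained("deepset/gelectra-base")


def is_antithesis(a: str, b: str) -> bool:
    """Classify a phrase pair; label index 1 is assumed to mean antithesis."""
    inputs = tokenizer(a, b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1)) == 1
```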
Anthology ID:
2024.lrec-main.1502
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
17310–17320
URL:
https://aclanthology.org/2024.lrec-main.1502
Cite (ACL):
Ramona Kühn, Khouloud Saadi, Jelena Mitrović, and Michael Granitzer. 2024. Using Pre-Trained Language Models in an End-to-End Pipeline for Antithesis Detection. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 17310–17320, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Using Pre-Trained Language Models in an End-to-End Pipeline for Antithesis Detection (Kühn et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1502.pdf