Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment

Ming Zhang, Ke Chang, Yunfang Wu


Abstract
Multi-modal semantic understanding requires integrating information from different modalities to extract users’ real intentions behind their words. Most previous work applies a dual-encoder structure that encodes image and text separately but fails to learn cross-modal feature alignment, making deep cross-modal information interaction difficult. This paper proposes a novel CLIP-guided, contrastive-learning-based architecture that performs multi-modal feature alignment by projecting the features derived from different modalities into a unified deep space. On multi-modal sarcasm detection (MMSD) and multi-modal sentiment analysis (MMSA) tasks, experimental results show that our proposed model significantly outperforms several baselines, and that our feature alignment strategy brings a clear performance gain over models with different aggregation methods, and even over models enriched with external knowledge. More importantly, our model is simple to implement without task-specific external knowledge and thus migrates easily to other multi-modal tasks. Our source code is available at https://github.com/ChangKe123/CLFA.
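
To make the alignment idea concrete, the sketch below shows a symmetric InfoNCE-style contrastive loss that projects image and text features into a shared space, which is a common way to implement CLIP-style cross-modal alignment. This is a minimal illustrative assumption, not the authors' exact CLFA objective: the class name, projection dimension, and temperature are hypothetical; see the repository above for the actual implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn


class ContrastiveAlignment(nn.Module):
    """Project image and text features into a shared space and align
    matched pairs with a symmetric InfoNCE-style contrastive loss."""

    def __init__(self, image_dim: int, text_dim: int,
                 proj_dim: int = 256, temperature: float = 0.07):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, proj_dim)  # image -> shared space
        self.text_proj = nn.Linear(text_dim, proj_dim)    # text  -> shared space
        self.temperature = temperature

    def forward(self, image_feats: torch.Tensor, text_feats: torch.Tensor):
        # L2-normalise projections so the dot product is cosine similarity.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)

        # Pairwise similarities; matched image-text pairs lie on the diagonal.
        logits = img @ txt.t() / self.temperature
        targets = torch.arange(logits.size(0), device=logits.device)

        # Symmetric loss: image-to-text and text-to-image directions.
        loss = (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2
        return loss, img, txt


# Toy usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    align = ContrastiveAlignment(image_dim=768, text_dim=768)
    image_feats = torch.randn(8, 768)  # e.g. pooled image-encoder outputs
    text_feats = torch.randn(8, 768)   # e.g. pooled text-encoder outputs
    loss, _, _ = align(image_feats, text_feats)
    print(loss.item())
```
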
Anthology ID:
2024.lrec-main.1042
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
11934–11943
URL:
https://aclanthology.org/2024.lrec-main.1042
Cite (ACL):
Ming Zhang, Ke Chang, and Yunfang Wu. 2024. Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11934–11943, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Multi-modal Semantic Understanding with Contrastive Cross-modal Feature Alignment (Zhang et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1042.pdf