Analyzing Large Language Models’ Capability in Location Prediction

Zhaomin Xiao, Eduardo Blanco, Yan Huang


Abstract
In this paper, we investigate and evaluate large language models’ capability in location prediction. We present experimental results with four models—FLAN-T5, FLAN-UL2, FLAN-Alpaca, and ChatGPT—in various instruction finetuning and exemplar settings. We analyze whether taking into account the context—tweets published before and after the tweet mentioning a location—is beneficial. Additionally, we conduct an ablation study to explore whether instruction modification is beneficial. Lastly, our qualitative analysis sheds light on the errors made by the best-performing model.
Anthology ID: 2024.lrec-main.85
Volume: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month: May
Year: 2024
Address: Torino, Italia
Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues: LREC | COLING
Publisher: ELRA and ICCL
Pages: 951–958
URL: https://aclanthology.org/2024.lrec-main.85
Cite (ACL): Zhaomin Xiao, Eduardo Blanco, and Yan Huang. 2024. Analyzing Large Language Models’ Capability in Location Prediction. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 951–958, Torino, Italia. ELRA and ICCL.
Cite (Informal): Analyzing Large Language Models’ Capability in Location Prediction (Xiao et al., LREC-COLING 2024)
PDF: https://aclanthology.org/2024.lrec-main.85.pdf