CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments

Savitha Sam Abraham, Marjan Alirezaie, Luc de Raedt


Abstract
The integration of learning and reasoning is high on the research agenda in AI. Nevertheless, little attention has been paid to using existing background knowledge to reason about partially observed scenes and answer questions about them. Yet we as humans frequently use such knowledge to infer plausible answers to visual questions by eliminating all inconsistent ones. Such knowledge often comes in the form of constraints on objects, and it tends to be highly domain- or environment-specific. We contribute a novel benchmark called CLEVR-POC for reasoning-intensive visual question answering (VQA) in partially observable environments under constraints. In CLEVR-POC, knowledge in the form of logical constraints must be leveraged to generate plausible answers to questions about a hidden object in a given partial scene. For instance, if one knows that all cups are colored either red, green, or blue and that there is only one green cup, it becomes possible to deduce the color of an occluded cup as either red or blue, provided that all other cups, including the green one, are observed. Through experiments we observe that the performance of a pre-trained vision-language model like CLIP (approx. 22%) and a large language model (LLM) like GPT-4 (approx. 46%) on CLEVR-POC is not satisfactory, confirming the need for frameworks that can handle reasoning-intensive tasks where environment-specific background knowledge is available and crucial. Furthermore, we demonstrate that a neuro-symbolic model, which integrates an LLM like GPT-4 with a visual perception network and a formal logical reasoner, exhibits exceptional performance on CLEVR-POC.
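To make the cup example concrete, here is a minimal Python sketch (ours, not the authors' code; the benchmark itself encodes constraints in formal logic for a symbolic reasoner) of answer elimination by brute-force enumeration. The observed scene below is a hypothetical illustration.

from itertools import chain  # stdlib only; no external dependencies

COLORS = {"red", "green", "blue"}   # constraint 1: cups take only these colors

def consistent(cup_colors):
    # constraint 2: the scene contains exactly one green cup
    return sum(c == "green" for c in cup_colors) == 1

observed = ["green", "red"]         # hypothetical: all cups but one are visible

# keep every candidate color for the hidden cup that yields a scene
# satisfying all constraints; inconsistent candidates are eliminated
plausible = {c for c in COLORS if consistent(list(chain(observed, [c])))}
print(sorted(plausible))            # ['blue', 'red'] -- green is ruled out

A dedicated logical reasoner (as in the paper's neuro-symbolic model) performs the same elimination declaratively instead of by enumeration, which scales to richer constraint sets.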
Anthology ID:
2024.lrec-main.293
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
3297–3313
URL:
https://aclanthology.org/2024.lrec-main.293
Cite (ACL):
Savitha Sam Abraham, Marjan Alirezaie, and Luc de Raedt. 2024. CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 3297–3313, Torino, Italia. ELRA and ICCL.
Cite (Informal):
CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments (Sam Abraham et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.293.pdf