Differential Attention for Visual Question Answering

Published in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018

Recommended citation: B. N. Patro and V. P. Namboodiri, “Differential Attention for Visual Question Answering”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, Utah, June 2018. https://badripatro.github.io/DVQA/

Download paper here

In this paper we aim to answer questions about images, given a training set of images with question-answer pairs. A number of methods approach this problem with image-based attention, focusing on a specific part of the image while answering the question, much as humans do. However, the regions these systems attend to are poorly correlated with the regions humans attend to, and this limits their accuracy. We propose an exemplar-based method to address this: we obtain one or more supporting and opposing exemplars and use them to compute a differential attention region. This differential attention is closer to human attention than that of other image-based attention methods, and it also yields improved accuracy when answering questions. We evaluate the method on challenging benchmark datasets, where it outperforms other image-based attention methods and is competitive with state-of-the-art methods that attend to both the image and the question.
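To make the exemplar idea concrete, the sketch below shows one simple way differential attention could be realized: question-guided attention is computed over image regions for the target image and for a supporting and an opposing exemplar, and the exemplar attentions are used to reinforce or suppress the target attention before renormalizing. This is only an illustrative sketch, not the architecture from the paper; the function names, the dot-product scoring, the additive combination, and the assumption that target and exemplar region features are in correspondence are all simplifications introduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(region_feats, question_feat):
    # Dot-product relevance of each image region to the question,
    # normalized into a distribution over regions.
    scores = region_feats @ question_feat          # (num_regions,)
    return softmax(scores)

def differential_attention(target_regions, support_regions, oppose_regions, question_feat):
    # Question-guided attention for the target image and for the
    # supporting / opposing exemplars (illustrative formulation only).
    att_target = attention_weights(target_regions, question_feat)
    att_support = attention_weights(support_regions, question_feat)
    att_oppose = attention_weights(oppose_regions, question_feat)

    # Differential attention: strengthen regions also favoured by the
    # supporting exemplar, suppress regions favoured by the opposing
    # exemplar, then renormalize over the target image's regions.
    diff = att_target + att_support - att_oppose
    return softmax(diff)

# Toy usage with random features: 36 regions, 512-d features.
rng = np.random.default_rng(0)
q = rng.normal(size=512) * 0.05
tgt = rng.normal(size=(36, 512)) * 0.05
sup = rng.normal(size=(36, 512)) * 0.05
opp = rng.normal(size=(36, 512)) * 0.05
att = differential_attention(tgt, sup, opp, q)
print(att.shape, round(att.sum(), 4))  # (36,) 1.0
```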