TY - GEN
T1 - Ask, attend and answer: Exploring question-guided spatial attention for visual question answering
T2 - 14th European Conference on Computer Vision, ECCV 2016
AU - Xu, Huijuan
AU - Saenko, Kate
N1 - Publisher Copyright:
© Springer International Publishing AG 2016.
PY - 2016
Y1 - 2016
AB - We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses attention to choose regions relevant for computing the answer. We propose a novel question-guided spatial attention architecture that looks for regions relevant to either individual words or the entire question, repeating the process over multiple recurrent steps, or “hops”. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the network’s attention. We evaluate our model on two available visual question answering datasets and obtain improved results.
UR - http://www.scopus.com/inward/record.url?scp=84990052480&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84990052480&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-46478-7_28
DO - 10.1007/978-3-319-46478-7_28
M3 - Conference contribution
AN - SCOPUS:84990052480
SN - 9783319464770
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 451
EP - 466
BT - Computer Vision - 14th European Conference, ECCV 2016, Proceedings
A2 - Leibe, Bastian
A2 - Matas, Jiri
A2 - Sebe, Nicu
A2 - Welling, Max
PB - Springer Verlag
Y2 - 8 October 2016 through 16 October 2016
ER -