Evaluating the representational hub of language and vision models

10 citations · 88 Mendeley readers

Abstract

The multimodal models used in the emerging field at the intersection of computational linguistics and computer vision implement the bottom-up processing of the “Hub and Spoke” architecture proposed in cognitive science to represent how the brain processes and combines multi-sensory inputs. In particular, the Hub is implemented as a neural network encoder. We investigate the effect on this encoder of various vision-and-language tasks proposed in the literature: visual question answering, visual reference resolution, and visually grounded dialogue. To measure the quality of the representations learned by the encoder, we use two kinds of analyses. First, we evaluate the encoder pre-trained on the different vision-and-language tasks on an existing diagnostic task designed to assess multimodal semantic understanding. Second, we carry out a battery of analyses aimed at studying how the encoder merges and exploits the two modalities.
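The paper's actual encoder is a trained neural network; purely as an illustration of the "Hub and Spoke" idea described above — modality-specific inputs (spokes) fused into one shared representation (hub) — here is a minimal pure-Python sketch. All names, dimensions, and the concatenate-then-project fusion are illustrative assumptions, not the authors' architecture:

```python
import random

random.seed(0)

def linear(x, W, b):
    # y = W·x + b, computed as plain-Python matrix-vector product
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def hub_encode(word_vecs, img_vec, W, b):
    """Toy 'hub': average the word embeddings (language spoke),
    concatenate the image feature vector (vision spoke), and
    project the result into a shared multimodal space."""
    n = len(word_vecs)
    # average word embeddings column-wise -> one text vector
    text_vec = [sum(col) / n for col in zip(*word_vecs)]
    fused = text_vec + img_vec   # concatenation of the two modalities
    return linear(fused, W, b)   # shared 'hub' representation

# Hypothetical sizes: 4-d word embeddings, 3-d image features, 2-d hub space
W = [[random.uniform(-1, 1) for _ in range(7)] for _ in range(2)]
b = [0.0, 0.0]
words = [[0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8]]
image = [1.0, 0.5, -0.5]
hub = hub_encode(words, image, W, b)
print(len(hub))  # dimensionality of the fused hub vector
```

The diagnostic analyses in the paper probe exactly this kind of fused vector: if the hub genuinely merges the modalities, its representation should change when either the text or the image input changes.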




Citation (APA)

Shekhar, R., Takmaz, E., Fernández, R., & Bernardi, R. (2019). Evaluating the representational hub of language and vision models. In IWCS 2019 - Proceedings of the 13th International Conference on Computational Semantics - Long Papers (pp. 211–222). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-0418

Readers' Seniority

PhD / Postgrad / Masters / Doc — 29 (67%)
Researcher — 9 (21%)
Professor / Associate Prof. — 3 (7%)
Lecturer / Post doc — 2 (5%)

Readers' Discipline

Computer Science — 37 (77%)
Linguistics — 6 (13%)
Engineering — 3 (6%)
Business, Management and Accounting — 2 (4%)
