BLOCK: Bilinear superdiagonal fusion for visual question answering and visual relationship detection

Citations: 197
Mendeley readers: 160

Abstract

Multimodal representation learning is gaining more and more interest within the deep learning community. While bilinear models provide an interesting framework for finding subtle combinations of modalities, their number of parameters grows quadratically with the input dimensions, making their practical implementation within classical deep learning pipelines challenging. In this paper, we introduce BLOCK, a new multimodal fusion based on the block-superdiagonal tensor decomposition. It leverages the notion of block-term ranks, which generalizes both the rank and the mode ranks of a tensor, two notions already used for multimodal fusion. This makes it possible to define new ways of optimizing the tradeoff between the expressiveness and the complexity of the fusion model, and to represent very fine interactions between modalities while maintaining powerful mono-modal representations. We demonstrate the practical interest of our fusion model by using BLOCK for two challenging tasks: Visual Question Answering (VQA) and Visual Relationship Detection (VRD), for which we design end-to-end learnable architectures that represent the relevant interactions between modalities. Through extensive experiments, we show that BLOCK compares favorably with state-of-the-art multimodal fusion models on both the VQA and VRD tasks. Our code is available at https://github.com/Cadene/block.bootstrap.pytorch.
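
To make the fusion concrete, below is a minimal PyTorch sketch of a block-superdiagonal bilinear fusion in the spirit of BLOCK: each modality is projected and split into chunks, each pair of chunks interacts through its own small bilinear core (one block on the superdiagonal), and the per-block outputs are concatenated and projected. The class name BlockFusion and all hyperparameters (num_blocks, block_in, block_out) are illustrative assumptions, not the authors' exact implementation; see the repository linked above for the reference code.

```python
import torch
import torch.nn as nn

class BlockFusion(nn.Module):
    """Minimal sketch of a block-superdiagonal (block-term) bilinear fusion.

    Illustrative only: dimensions, initialization, and naming are assumptions,
    not the reference implementation from block.bootstrap.pytorch.
    """

    def __init__(self, dim_q, dim_v, dim_out,
                 num_blocks=4, block_in=20, block_out=32):
        super().__init__()
        self.block_in = block_in
        # Mono-modal projections, later split into num_blocks chunks of size block_in
        self.proj_q = nn.Linear(dim_q, num_blocks * block_in)
        self.proj_v = nn.Linear(dim_v, num_blocks * block_in)
        # One small bilinear core per block: the blocks on the superdiagonal
        self.cores = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(block_in, block_in, block_out))
             for _ in range(num_blocks)]
        )
        # Final projection of the concatenated per-block outputs
        self.proj_out = nn.Linear(num_blocks * block_out, dim_out)

    def forward(self, q, v):
        # Project each modality and split into per-block chunks
        q_chunks = self.proj_q(q).split(self.block_in, dim=-1)
        v_chunks = self.proj_v(v).split(self.block_in, dim=-1)
        fused = []
        for qc, vc, core in zip(q_chunks, v_chunks, self.cores):
            # Bilinear interaction restricted to this block: z_k = core x1 qc x2 vc
            fused.append(torch.einsum('bi,bj,ijo->bo', qc, vc, core))
        return self.proj_out(torch.cat(fused, dim=-1))

# Example with hypothetical dimensions: fuse a question embedding with an image feature
fusion = BlockFusion(dim_q=310, dim_v=2048, dim_out=1600)
scores = fusion(torch.randn(8, 310), torch.randn(8, 2048))  # shape (8, 1600)
```

Restricting the bilinear interaction to the superdiagonal blocks keeps the core parameter count at num_blocks * block_in^2 * block_out instead of the full dim_q * dim_v * dim_out of an unconstrained bilinear tensor, which is the expressiveness/complexity tradeoff the abstract refers to.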

Citation (APA)

Ben-Younes, H., Cadene, R., Thome, N., & Cord, M. (2019). BLOCK: Bilinear superdiagonal fusion for visual question answering and visual relationship detection. In 33rd AAAI Conference on Artificial Intelligence, AAAI 2019, 31st Innovative Applications of Artificial Intelligence Conference, IAAI 2019 and the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019 (pp. 8102–8109). AAAI Press. https://doi.org/10.1609/aaai.v33i01.33018102

Readers' Seniority

PhD / Postgrad / Masters / Doctorate: 61 (80%)
Researcher: 11 (14%)
Professor / Associate Professor: 3 (4%)
Lecturer / Postdoc: 1 (1%)

Readers' Discipline

Computer Science: 68 (85%)
Engineering: 9 (11%)
Mathematics: 2 (3%)
Neuroscience: 1 (1%)
