Efficient content-based sparse attention with routing transformers

Citations: 329
Mendeley readers: 376

Abstract

Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic computation and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to O(n^1.5 d) from O(n^2 d) for sequence length n and hidden dimension d. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity), as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Additionally, we set a new state-of-the-art on the newly released PG-19 data-set, obtaining a test perplexity of 33.2 with a 22 layer Routing Transformer model trained on sequences of length 8192. We open-source the code for Routing Transformer in TensorFlow.
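
To make the routing idea in the abstract concrete, here is a minimal sketch in Python/NumPy of clustering-based sparse attention: queries and keys are assigned to a shared set of centroids, and each query attends only to keys routed to its own cluster, so with roughly sqrt(n) clusters the cost scales near O(n^1.5 d). The function name routing_attention, the batch Lloyd iterations (the paper uses online k-means with moving-average centroid updates), and the omission of causal masking, multi-head structure, and the local-attention branch are simplifications for illustration, not the authors' released implementation.

```python
import numpy as np


def routing_attention(Q, K, V, num_clusters, kmeans_iters=5, seed=0):
    """Sketch of content-based sparse attention via k-means routing.

    Queries and keys share a set of centroids; each query attends only to
    keys assigned to the same cluster. With num_clusters ~ sqrt(n) and
    balanced clusters, the attention cost is roughly O(n^1.5 d).
    """
    rng = np.random.default_rng(seed)
    n, d = Q.shape

    # Unit-normalize so nearest-centroid assignment behaves like spherical k-means.
    Qn = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + 1e-6)
    Kn = K / (np.linalg.norm(K, axis=-1, keepdims=True) + 1e-6)

    # Initialize centroids from random keys; a few batch Lloyd iterations
    # stand in for the paper's online (EMA) k-means updates during training.
    centroids = Kn[rng.choice(n, size=num_clusters, replace=False)]
    for _ in range(kmeans_iters):
        assign = np.argmax(Kn @ centroids.T, axis=-1)
        for c in range(num_clusters):
            members = Kn[assign == c]
            if len(members) > 0:
                centroids[c] = members.mean(axis=0)

    # Route queries and keys to their nearest centroid.
    q_cluster = np.argmax(Qn @ centroids.T, axis=-1)
    k_cluster = np.argmax(Kn @ centroids.T, axis=-1)

    # Dense softmax attention restricted to each cluster.
    out = np.zeros_like(V)
    for c in range(num_clusters):
        q_idx = np.where(q_cluster == c)[0]
        k_idx = np.where(k_cluster == c)[0]
        if len(q_idx) == 0 or len(k_idx) == 0:
            continue
        scores = Q[q_idx] @ K[k_idx].T / np.sqrt(d)
        scores -= scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        out[q_idx] = weights @ V[k_idx]
    return out


# Example: sequence length 64 with sqrt(64) = 8 clusters.
n, d = 64, 16
rng = np.random.default_rng(1)
Q, K, V = rng.normal(size=(3, n, d))
y = routing_attention(Q, K, V, num_clusters=8)
print(y.shape)  # (64, 16)
```

In the full model, this routed attention is used alongside local attention heads, and queries that land in empty or mismatched clusters are handled by the online clustering dynamics rather than the zero fallback used in this sketch.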

Citation (APA)

Roy, A., Saffar, M., Vaswani, A., & Grangier, D. (2021). Efficient content-based sparse attention with routing transformers. Transactions of the Association for Computational Linguistics, 9, 53–68. https://doi.org/10.1162/tacl_a_00353

Readers' Seniority

PhD / Post grad / Masters / Doc: 101 (59%)
Researcher: 60 (35%)
Professor / Associate Prof.: 6 (3%)
Lecturer / Post doc: 5 (3%)

Readers' Discipline

Computer Science: 147 (82%)
Engineering: 21 (12%)
Linguistics: 7 (4%)
Neuroscience: 4 (2%)

Article Metrics

Blog Mentions: 1
