“Linformer: Self-Attention With Linear Complexity”, Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma (2020-06-08)⁠:

Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses 𝒪(n²) time and space with respect to sequence length.
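
A minimal sketch (not from the paper; shapes and values are illustrative, PyTorch assumed) of why standard self-attention is quadratic: the scores matrix has shape (n, n), so both compute and memory grow as 𝒪(n²) in sequence length n.

```python
import torch
import torch.nn.functional as F

n, d = 4096, 64                      # sequence length, per-head dimension (assumed)
Q = torch.randn(n, d)
K = torch.randn(n, d)
V = torch.randn(n, d)

scores = Q @ K.T / d ** 0.5          # (n, n): quadratic in sequence length
attn = F.softmax(scores, dim=-1)     # (n, n) attention matrix kept in memory
out = attn @ V                       # (n, d) output
print(scores.shape)                  # torch.Size([4096, 4096])
```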

In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this finding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from 𝒪(n²) to 𝒪(n) in both time and space.
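
A minimal sketch of the low-rank idea (parameter names `E`, `Fp`, and the projected length `k` are illustrative, not the paper's code): keys and values are projected along the length dimension from n down to a small fixed k, so the attention matrix becomes (n, k) rather than (n, n), giving cost linear in n.

```python
import torch
import torch.nn.functional as F

n, d, k = 4096, 64, 256              # sequence length, head dim, projected length (assumed)
Q = torch.randn(n, d)
K = torch.randn(n, d)
V = torch.randn(n, d)
E = torch.randn(k, n)                # key projection   (learned in practice)
Fp = torch.randn(k, n)               # value projection (learned in practice)

K_proj = E @ K                       # (k, d): keys compressed along the length axis
V_proj = Fp @ V                      # (k, d): values compressed along the length axis
scores = Q @ K_proj.T / d ** 0.5     # (n, k): linear in sequence length
attn = F.softmax(scores, dim=-1)
out = attn @ V_proj                  # (n, d) output
print(scores.shape)                  # torch.Size([4096, 256])
```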

The resulting linear transformer, the Linformer, performs on par with standard Transformer models, while being much more memory-efficient and time-efficient.