“Sub-Linear Memory: How to Make Performers SLiM”, 2020-12-21:
The Transformer architecture has revolutionized deep learning on sequential data, becoming ubiquitous in state-of-the-art solutions for a wide variety of applications. Yet vanilla Transformers are notoriously resource-expensive, requiring 𝒪(L²) serial time and memory as functions of input length L. Recent works proposed various linear self-attention mechanisms, scaling only as 𝒪(L) for serial computation.
We perform a thorough analysis of recent Transformer mechanisms with linear self-attention, Performers, in terms of overall computational complexity. We observe a remarkable computational flexibility: forward and backward propagation can be performed with no approximations using sublinear memory as a function of L (in addition to negligible storage for the input sequence), at a cost of greater time complexity in the parallel setting. In the extreme case, a Performer consumes only 𝒪(1) memory during training, and still requires 𝒪(L) time.
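The 𝒪(1)-memory extreme follows from the structure of causal linear attention: each output token depends only on running prefix statistics over the keys and values, which can be updated token by token instead of materializing the full L×L attention matrix. A minimal NumPy sketch of this streaming computation (not the paper's exact algorithm; the kernel feature map `phi` here is a hypothetical stand-in for the Performer's random-feature map):

```python
import numpy as np

def streaming_linear_attention(q, k, v, phi=lambda x: np.exp(x)):
    """Causal linear attention computed one token at a time.

    The only state carried across steps is S (d_k x d_v) and z (d_k,),
    so working memory is independent of sequence length L, while the
    loop still costs O(L) serial time.
    """
    L, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))   # running sum of outer(phi(k_j), v_j) for j <= i
    z = np.zeros(d_k)          # running sum of phi(k_j), used for normalization
    out = np.empty((L, d_v))
    for i in range(L):
        fk = phi(k[i])
        S += np.outer(fk, v[i])
        z += fk
        fq = phi(q[i])
        out[i] = fq @ S / (fq @ z + 1e-9)
    return out
```

Running this against an explicit 𝒪(L²) causal attention with the same kernel weights yields matching outputs, which is the "no approximations" property the abstract refers to: the tradeoff only swaps memory for recomputation time.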
This discovered time-memory tradeoff can be used for training or, thanks to complete backward compatibility, for fine-tuning on a low-memory device, e.g. a smartphone or an earlier-generation GPU, thus contributing towards decentralized and democratized deep learning.