“Transformer-QL: A Step Towards Making Transformer Network Quadratically Large”, Suvadeep Hajra (2020-09-28):

In this paper, we propose a novel transformer architecture in which the context length (the number of past tokens on which the output states depend) can grow up to quadratically with the memory and computation used.

Transformer networks have shown outstanding performance on many natural language processing tasks. However, the context length (the number of previous tokens on which the output states depend) of a Transformer network grows at best linearly with the memory and computational power used. This limitation prevents a transformer network from having a very long context in resource-limited applications.

In this work, we propose a class of transformer networks, namely Transformer-QL (Quadratically Large), in which the context length can grow up to quadratically with the memory and computational power used. We have empirically evaluated a Transformer-QL model on 3 long-range language modeling datasets. The results show that Transformer-QL can provide substantial improvements over other state-of-the-art networks.
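The abstract does not spell out the mechanism, but the quadratic claim can be illustrated with a back-of-envelope calculation. Suppose a model keeps M memory slots and slot k summarizes k past tokens at a coarser scale (a hypothetical multi-scale scheme in the spirit of the "multi-scale Transformer network" keyword; the exact Transformer-QL construction may differ). The covered context is then 1 + 2 + … + M = M(M+1)/2, i.e. quadratic in the memory used, whereas a Transformer-XL-style cache of M raw tokens covers only M tokens:

```python
def multiscale_context(num_slots: int) -> int:
    """Past tokens covered when slot k summarizes k tokens.

    Hypothetical multi-scale memory for illustration only;
    not necessarily the paper's exact scheme.
    """
    return sum(range(1, num_slots + 1))  # = M(M+1)/2


def flat_cache_context(num_slots: int) -> int:
    """Past tokens covered by a flat cache of raw tokens (Transformer-XL-like)."""
    return num_slots


# Same memory budget, very different context lengths:
for m in (10, 100, 1000):
    print(m, flat_cache_context(m), multiscale_context(m))
```

With 1,000 slots the flat cache sees 1,000 past tokens while the multi-scale memory covers 500,500, which is the sense in which context can grow quadratically with a linear memory budget.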

[Keywords: deep learning, language model, Transformer network, multi-scale Transformer network, natural language processing, Transformer-XL]