Bibliography (4):
Attention Is All You Need
Adaptive Attention Span in Transformers
Efficient Content-Based Sparse Attention with Routing Transformers
Compressive Transformers for Long-Range Sequence Modeling