Bibliography (4):
Attention Is All You Need
Vision Transformer: An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale
Pay Attention to MLPs
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding