- Replacing softmax with ReLU in Vision Transformers
- An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale (Vision Transformer)
- Pay Less Attention with Lightweight and Dynamic Convolutions
- Attention Is All You Need
- Layer Normalization