Bibliography (4):
MAE: He et al., "Masked Autoencoders Are Scalable Vision Learners," CVPR 2022.
Vision Transformer: Dosovitskiy et al., "An Image Is Worth 16×16 Words: Transformers for Image Recognition at Scale," ICLR 2021.
BERT: Devlin et al., "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," NAACL 2019.
Wikipedia Bibliography:
Instagram