Bibliography (4):

  1. MAE: Masked Autoencoders Are Scalable Vision Learners

  2. Vision Transformer: An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale

  3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

  4. Wikipedia: Instagram