Bibliography (10):

  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

  2. Language Models are Unsupervised Multitask Learners

  3. iGPT: Generative Pretraining from Pixels

  4. Image GPT (iGPT): We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples

  5. ImageNet Large Scale Visual Recognition Challenge

  6. CIFAR-10 and CIFAR-100 Datasets

  7. STL-10 Dataset

  8. The Bitter Lesson