Bibliography (13):

  1. From Vision to Language: Semi-Supervised Learning in Action…at Scale

  2. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

  3. ImageNet Large Scale Visual Recognition Challenge

  4. Exploring the Limits of Weakly Supervised Pretraining

  5. ImageNet-A: Natural Adversarial Examples

  6. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

  7. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks

  8. RandAugment: Practical automated data augmentation with a reduced search space

  9. EfficientNet reference implementation: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

  10. Noisy Student Training code release: https://github.com/google-research/noisystudent