“ConvNets Match Vision Transformers at Scale”, Samuel L. Smith, Andrew Brock, Leonard Berrada, Soham De (2023-10-25):

Many researchers believe that ConvNets perform well on small or moderately sized datasets, but are not competitive with Vision Transformers when given access to web-scale datasets.

We challenge this belief by evaluating a performant ConvNet architecture pre-trained on JFT-4B, a large labeled image dataset often used for training foundation models. We consider pre-training compute budgets between 0.4k and 110k TPU-v4 core compute hours, and train a series of networks of increasing depth and width from the NFNet model family.

We observe a log-log scaling law between held-out loss and compute budget. After fine-tuning on ImageNet, NFNets match the reported performance of Vision Transformers at comparable compute budgets. Our strongest fine-tuned model achieves a Top-1 accuracy of 90.4%.
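A "log-log scaling law" means held-out loss falls as a power law in compute, so it appears as a straight line when both axes are logarithmic. A minimal sketch of how such an exponent is recovered, using made-up `(compute, loss)` values (not the paper's data) and an assumed exponent of 0.25:

```python
import numpy as np

# Hypothetical (compute, held-out loss) pairs following a power law
# loss = a * compute^(-b); the values are illustrative, not from the paper.
compute = np.array([0.4e3, 1.6e3, 6.4e3, 26e3, 110e3])  # TPU-v4 core hours
loss = 2.0 * compute ** -0.25

# A power law is linear in log-log space:
#   log(loss) = log(a) - b * log(compute),
# so an ordinary least-squares fit on the logs recovers the exponent b.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
print(round(-slope, 3))  # fitted scaling exponent b
```

With noiseless synthetic data the fit returns the exponent exactly; on real training runs the same fit gives the slope of the observed loss-compute frontier.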