“Swin Transformer V2: Scaling Up Capacity and Resolution”, Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo (2021-11-18):

Large-scale NLP models have been shown to improve performance on language tasks with no signs of saturation. They also demonstrate impressive few-shot capabilities similar to those of humans. This paper aims to explore large-scale models in computer vision. We tackle 3 major issues in the training and application of large vision models: training instability, resolution gaps between pre-training and fine-tuning, and hunger for labeled data.

3 main techniques are proposed: (1) a residual-post-norm method combined with cosine attention to improve training stability; (2) a log-spaced continuous position bias method to effectively transfer models pre-trained on low-resolution images to downstream tasks with high-resolution inputs; (3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labeled images.
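The first two techniques can be sketched in code. Below is a minimal PyTorch sketch of scaled cosine attention combined with a log-spaced continuous position bias for a single attention window; the class name, MLP width, and initial temperature are illustrative assumptions, not the paper's exact implementation:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineAttentionWithLogCPB(nn.Module):
    """Sketch: cosine attention with a learnable per-head temperature,
    plus a small MLP mapping log-spaced relative coordinates to a
    per-head position bias (hyperparameters are illustrative)."""
    def __init__(self, dim, num_heads, window_size):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head temperature tau, stored in log space and
        # clamped so that 1/tau <= 100 (i.e., tau >= 0.01).
        self.logit_scale = nn.Parameter(
            torch.log(10.0 * torch.ones(num_heads, 1, 1)))
        # Meta-network: 2-layer MLP from log-spaced (dx, dy) to per-head bias.
        self.cpb_mlp = nn.Sequential(
            nn.Linear(2, 512), nn.ReLU(inplace=True),
            nn.Linear(512, num_heads, bias=False))
        # Relative coordinates within the window, mapped to log space:
        # delta_hat = sign(delta) * log(1 + |delta|)
        Wh, Ww = window_size
        coords = torch.stack(torch.meshgrid(
            torch.arange(Wh), torch.arange(Ww), indexing="ij"))  # (2, Wh, Ww)
        coords = coords.flatten(1)                               # (2, N)
        rel = (coords[:, :, None] - coords[:, None, :]).permute(1, 2, 0).float()
        rel = torch.sign(rel) * torch.log1p(rel.abs())           # (N, N, 2)
        self.register_buffer("log_rel_coords", rel)

    def forward(self, x):  # x: (B, N, C) tokens of one window, N = Wh*Ww
        B, N, C = x.shape
        qkv = (self.qkv(x)
               .reshape(B, N, 3, self.num_heads, -1)
               .permute(2, 0, 3, 1, 4))
        q, k, v = qkv[0], qkv[1], qkv[2]        # (B, heads, N, head_dim)
        # Cosine similarity instead of dot product: L2-normalize q and k.
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        scale = torch.clamp(self.logit_scale, max=math.log(100.0)).exp()
        attn = attn * scale
        # Continuous position bias predicted from log-spaced coordinates.
        bias = self.cpb_mlp(self.log_rel_coords)  # (N, N, heads)
        attn = attn + bias.permute(2, 0, 1).unsqueeze(0)
        out = (F.softmax(attn, dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Because the relative coordinates are compressed logarithmically, a model pre-trained with small windows extrapolates to much larger windows at fine-tuning time with a far smaller shift in coordinate range than linear spacing would require, which is what makes the low-to-high-resolution transfer work.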

Through these techniques, this paper successfully trains a 3 billion-parameter Swin Transformer V2 model, the largest dense vision model to date, capable of training with images of up to 1,536×1,536 resolution. It sets new performance records on 4 representative vision tasks: ImageNet-V2 image classification, COCO object detection, ADE20K semantic segmentation, and Kinetics-400 video action classification. Our training is also far more efficient than that of Google's billion-parameter visual models, consuming 40× less labeled data and 40× less training time.

Code is available at https://github.com/microsoft/Swin-Transformer.