CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.
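As a concrete illustration of the zero-shot usage described above, here is a minimal sketch using the Hugging Face transformers CLIP bindings; the checkpoint name, image path, and label prompts are placeholder choices for illustration, not anything specified by the post:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint (one common choice, assumed here for illustration).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
# Zero-shot classification: just name the candidate categories as text prompts.
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-to-text similarity scores, softmaxed into per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

The classifier is defined entirely by the label strings, so swapping benchmarks only means swapping the prompt list.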
ALIGN: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
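Both CLIP and ALIGN train dual image/text encoders with a symmetric contrastive (InfoNCE-style) objective over matched pairs in a batch; a minimal sketch of that loss family, with an illustrative temperature value:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of B matched image-text pairs.
    temperature=0.07 is an illustrative value, not taken from either paper."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)  # diagonal = true pairs
    # Average of image-to-text and text-to-image classification losses.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```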
Flickr30k dataset: https://paperswithcode.com/dataset/flickr30k
LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs
ResNet: Deep Residual Learning for Image Recognition
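The core idea of the ResNet paper is the identity shortcut: the stacked layers learn a residual F(x) and the block outputs F(x) + x. A minimal sketch of the basic block, covering only the stride-1, equal-channel case (the paper also uses projection shortcuts when dimensions change):

```python
import torch.nn as nn

class BasicBlock(nn.Module):
    """Basic residual block: two 3x3 convs learning a residual F(x),
    added to the identity shortcut x before the final activation."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: output is F(x) + x
```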
Vision Transformer: An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale
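The ViT title describes the mechanism: the image is split into 16×16 patches, each linearly projected into a token embedding before entering a standard Transformer encoder. A minimal sketch of that patch embedding, using the paper's ViT-Base defaults (224×224 input, 16×16 patches, 768-dim embeddings):

```python
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split an image into non-overlapping patches and project each to a token.
    A strided convolution applies one linear projection per patch."""
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2  # 14 * 14 = 196
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                     # x: (B, 3, 224, 224)
        x = self.proj(x)                      # (B, 768, 14, 14)
        return x.flatten(2).transpose(1, 2)   # (B, 196, 768): 196 "words"
```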
Wukong dataset (100 million Chinese image-text pairs): https://wukong-dataset.github.io/wukong-dataset/