Bibliography (7):

  1. CLIP: Connecting Text and Images: a neural network that efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.

  2. Vision Transformer: An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale

  3. DeiT: Training data-efficient image transformers & distillation through attention

  4. CoAtNet: Marrying Convolution and Attention for All Data Sizes

  5. ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases

  6. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows

  7. Wikipedia Bibliography:

    1. Convolutional neural network