Bibliography (9):

  1. CLIP: Connecting Text and Images

  2. LVIS: A Dataset for Large Vocabulary Instance Segmentation

  3. R-CNN: Rich feature hierarchies for accurate object detection and semantic segmentation

  4. Deep Residual Learning for Image Recognition

  5. The PASCAL Visual Object Classes Homepage

  6. Microsoft COCO: Common Objects in Context