ImageNet Large Scale Visual Recognition Challenge
https://laion.ai/blog/coca/
https://x.com/wightmanr/status/1621918875238670337
Contrastive Representation Learning: A Framework and Review
CLIP: Connecting Text and Images: “We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.” (A minimal zero-shot sketch follows these references.)
‘end-to-end’ directory
Microsoft COCO: Common Objects in Context
https://paperswithcode.com/dataset/flickr30k
MSR-VTT: A Large Video Description Dataset for Bridging Video and Language
nocaps: novel object captioning at scale
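The CLIP footnote above describes zero-shot classification by supplying the candidate category names as text. As a rough sketch of that pattern only (not code from the original post): the checkpoint name, image path, and label prompts below are placeholder assumptions, and the Hugging Face `transformers` CLIP classes are used for illustration.

```python
# Minimal zero-shot classification sketch, assuming the openai/clip-vit-base-patch32
# checkpoint and placeholder image/labels (not taken from the original post).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")                  # placeholder image path
labels = ["a photo of a cat", "a photo of a dog"]  # class names written as natural-language prompts

# Encode the image and all candidate labels, then score image–text similarity.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)   # one probability per candidate label
print(dict(zip(labels, probs[0].tolist())))
```

The label strings act as the "names of the visual categories" from the quote; swapping in a different label set re-targets the classifier without any retraining.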