Bibliography (19):

  1. https://github.com/facebookresearch/ImageBind

  2. https://imagebind.metademolab.com/

  3. https://dl.fbaipublicfiles.com/imagebind/imagebind_video.mp4

  4. https://ai.meta.com/blog/imagebind-six-modalities-binding-ai/

  5. CLIP: Connecting Text and Images. Introduces a neural network that efficiently learns visual concepts from natural language supervision; it can be applied to any visual classification benchmark simply by providing the names of the visual categories to be recognized, similar to the zero-shot capabilities of GPT-2 and GPT-3.

  6. Hierarchical Text-Conditional Image Generation with CLIP Latents

  7. https://www.karolpiczak.com/papers/Piczak2015-ESC-Dataset.pdf

  8. https://mtg.upf.edu/system/files/publications/Font-Roma-Serra-ACMM-2013.pdf

  9. https://aclanthology.org/N19-1011/

  10. InfoNCE: Representation Learning with Contrastive Predictive Coding (CPC)

  11. Detecting Twenty-thousand Classes using Image-level Supervision

  12. Vision Transformer: An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale

  13. Reproducible scaling laws for contrastive language-image learning