Bibliography (12):

  1. CLIP: Learning Transferable Visual Models From Natural Language Supervision

  2. Language Models are Unsupervised Multitask Learners

  3. GPT-3: Language Models are Few-Shot Learners

  4. Deep Residual Learning for Image Recognition

  5. ImageNet Large Scale Visual Recognition Challenge

  6. CLIP: Connecting Text and Images

  7. AudioCLIP: Extending CLIP to Image, Text and Audio

  8. List of Sites/Programs/Projects That Use OpenAI’s CLIP Neural Network for Steering Image/Video Creation to Match a Text Description

  9. Alien Dreams: An Emerging Art Scene

  10. New AI Tools CLIP+VQ-GAN Can Create Impressive Works of Art Based on Just a Few Words of Input