https://www.youtube.com/watch?v=WbaVvlgxbl4
Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints
T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
CLIP: Connecting Text and Images — a neural network that efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.
https://deepimagination.cc/eDiff-I/
https://openai.com/dall-e-2
Stable Diffusion Public Release
Imagen: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
https://parti.research.google/
Microsoft COCO: Common Objects in Context
http://visualgenome.org/
CDM: Cascaded Diffusion Models for High Fidelity Image Generation