Deep Residual Learning for Image Recognition
DINO (Self-Distillation with No Labels): https://ai.meta.com/research/publications/dino-self-distillation-with-no-labels/
VL-T5: Unifying Vision-and-Language Tasks via Text Generation
CLIP: Connecting Text and Images. We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.
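
The zero-shot recipe described above amounts to scoring an image against a list of candidate class names supplied at inference time. Below is a minimal sketch using the Hugging Face transformers implementation of CLIP; the checkpoint name is a real public one, but the image path and class names are hypothetical placeholders.

    # Minimal CLIP zero-shot classification sketch (Hugging Face transformers).
    from PIL import Image
    import torch
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # The "benchmark" is defined only by the class names we choose to provide.
    class_names = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
    image = Image.open("example.jpg")  # hypothetical local image path

    inputs = processor(text=class_names, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds image-text similarity scores; softmax turns them
    # into per-class probabilities over the supplied class names.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(class_names, probs[0].tolist())))

Swapping in a different set of class names re-targets the same model to a different classification task with no retraining, which is the sense in which CLIP is "zero-shot."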
Large Scale GAN Training for High Fidelity Natural Image Synthesis (BigGAN): https://arxiv.org/abs/1809.11096
A Style-Based Generator Architecture for Generative Adversarial Networks (StyleGAN): https://arxiv.org/abs/1812.04948