- Scaling Laws for Neural Language Models
- A Constructive Prediction of the Generalization Error Across Scales
- DALL·E 1: Creating Images from Text: We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
- CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP, which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3.
- Language Models are Unsupervised Multitask Learners
Wikipedia Bibliography:
- OpenAI