“Flamingo: a Visual Language Model for Few-Shot Learning”, 2022-04-29:
[blog; reproducibility challenges] Building models that can be rapidly adapted to numerous tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo 🦩, a family of Visual Language Models (VLMs) with this ability [cf. MAGMA]. Flamingo models include key architectural innovations to: (1) bridge powerful pretrained vision-only and language-only models, (2) handle sequences of arbitrarily interleaved visual and textual data, and (3) seamlessly ingest images or videos as inputs. Thanks to this flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endowing them with in-context few-shot learning capabilities.
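One way to picture innovation (3), ingesting a variable number of images or video frames, is the paper's Perceiver Resampler: a fixed set of learned latent queries cross-attends to however many visual features arrive, always emitting the same number of output tokens. A minimal single-head numpy sketch (dimensions, weights, and the single layer are illustrative; the real Resampler stacks several such attention blocks):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def perceiver_resample(visual_feats, latents, Wk, Wv):
    """Cross-attend learned latent queries to a variable-length set of
    visual features, returning a fixed number of tokens."""
    k = visual_feats @ Wk                                # (N, d)
    v = visual_feats @ Wv                                # (N, d)
    attn = softmax(latents @ k.T / np.sqrt(latents.shape[-1]))  # (L, N)
    return attn @ v  # (L, d) regardless of input length N

rng = np.random.default_rng(0)
d, num_latents = 8, 4
latents = rng.normal(size=(num_latents, d))  # learned queries
Wk, Wv = rng.normal(size=(d, d)), rng.normal(size=(d, d))

one_image = rng.normal(size=(16, d))   # e.g. 16 patch features
video = rng.normal(size=(5 * 16, d))   # 5 frames' worth of features
assert perceiver_resample(one_image, latents, Wk, Wv).shape == (4, d)
assert perceiver_resample(video, latents, Wk, Wv).shape == (4, d)
```

Because the output shape is fixed at `num_latents` tokens, the downstream language model sees the same interface whether the input was one image or many video frames.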
We perform a thorough evaluation of the proposed Flamingo models, exploring and measuring their ability to rapidly adapt to a variety of image and video understanding benchmarks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question that it must answer; captioning tasks, which evaluate the ability to describe a scene or an event; and closed-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, we demonstrate that a single Flamingo model can achieve a new state of the art for few-shot learning, simply by prompting the model with task-specific examples. On many of these benchmarks, Flamingo surpasses the performance of models that are fine-tuned on thousands of times more task-specific data.
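The few-shot adaptation above amounts to interleaving support examples with a final query in one prompt. The `<image>` and `<EOC>` (end-of-chunk) markers follow the tokens described in the paper, but the helper itself is an illustrative sketch, not Flamingo's actual preprocessing code:

```python
def build_few_shot_prompt(support, query_question):
    """Interleave (question, answer) support examples, each paired with an
    image, ahead of a final query. `<image>` marks where visual tokens are
    spliced in; `<EOC>` closes each chunk. Illustrative helper only."""
    chunks = []
    for question, answer in support:
        chunks.append(f"<image>Question: {question} Answer: {answer}<EOC>")
    # The query chunk ends at "Answer:" so the model completes it.
    chunks.append(f"<image>Question: {query_question} Answer:")
    return "".join(chunks)

prompt = build_few_shot_prompt(
    support=[("What animal is this?", "a flamingo"),
             ("What colour is it?", "pink")],
    query_question="Where do they live?",
)
print(prompt)
```

No gradients are involved: "adapting" the model to a new task means changing only the support examples in this prompt.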
…In practice, Flamingo fuses large language models with powerful visual representations—each separately pre-trained and frozen—by adding novel architecture components in between. It is then trained on a mixture of complementary large-scale multimodal data coming only from the web, without using any data annotated for machine learning purposes. Following this method, we start from Chinchilla, our recently introduced compute-optimal 70B-parameter language model, to train our final Flamingo model, an 80B-parameter VLM. Once trained, Flamingo can be directly adapted to vision tasks via simple few-shot learning without any additional task-specific tuning.
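The "novel architecture components in between" are gated cross-attention layers inserted into the frozen language model: text tokens attend to visual tokens, and the result is scaled by a tanh gate initialized to 0, so that at initialization the frozen LM's activations pass through unchanged. A single-head numpy sketch (shapes and weights are illustrative, not the paper's configuration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(text_h, visual_h, Wq, Wk, Wv, alpha):
    """Cross-attention from text tokens (queries) to visual tokens
    (keys/values), added residually and scaled by tanh(alpha). With
    alpha = 0 the layer is the identity on the frozen LM's activations."""
    q = text_h @ Wq                                   # (T, d)
    k = visual_h @ Wk                                 # (V, d)
    v = visual_h @ Wv                                 # (V, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))    # (T, V)
    return text_h + np.tanh(alpha) * (attn @ v)       # gated residual

rng = np.random.default_rng(0)
d = 8
text_h = rng.normal(size=(4, d))    # 4 text tokens from the frozen LM
visual_h = rng.normal(size=(6, d))  # 6 visual tokens from the vision side
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

out = gated_cross_attention(text_h, visual_h, Wq, Wk, Wv, alpha=0.0)
assert np.allclose(out, text_h)  # gate at 0: frozen LM path untouched
```

Starting from the identity lets training smoothly dial in visual conditioning (by learning `alpha` and the projection weights) without destabilizing the pretrained language model.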