“OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-To-Sequence Learning Framework”, Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang (2022-02-07)⁠:

In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task/modality-specific customization. We propose OFA, a unified multimodal pretrained model that unifies modalities (i.e., cross-modality, vision, language) and tasks (e.g., image generation, visual grounding, image captioning, image classification, text generation, etc.) into a simple sequence-to-sequence learning framework based on the encoder-decoder architecture.

OFA performs both pretraining and finetuning with task instructions, introducing no extra task-specific layers at finetuning time.
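The unification idea can be sketched in a few lines: each task is reduced to an instruction-conditioned (source, target) text pair, so a single encoder-decoder handles all of them. This is a minimal illustration, not the authors' code; the instruction wordings and the `<loc_*>` location-token format below are assumptions for the sketch.

```python
# Sketch (assumed, not OFA's actual code): casting heterogeneous tasks
# as instruction-conditioned sequence-to-sequence examples.

def make_seq2seq_example(task, **inputs):
    """Map a task plus raw inputs to a (source, target) text pair."""
    if task == "caption":
        # "<img>" stands in for the image patch tokens of the real model.
        source = f"{inputs['image_tokens']} what does the image describe?"
        target = inputs["caption"]
    elif task == "vqa":
        source = f"{inputs['image_tokens']} {inputs['question']}"
        target = inputs["answer"]
    elif task == "grounding":
        source = (f"{inputs['image_tokens']} which region does the text "
                  f"\"{inputs['phrase']}\" describe?")
        # Bounding boxes are emitted as discrete location tokens, so the
        # decoder can generate them like any other vocabulary item.
        target = " ".join(f"<loc_{c}>" for c in inputs["box"])
    else:
        raise ValueError(f"unknown task: {task}")
    return source, target
```

Because every task shares this (source, target) interface, no task-specific heads are needed: finetuning and even zero-shot transfer reuse the same generation procedure.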

Experimental results show that OFA achieves new state-of-the-art results on a series of multimodal tasks, including image captioning (COCO test CIDEr: 149.6), text-to-image generation (COCO test FID: 10.5), VQA (test-std accuracy: 80.02), SNLI-VE (test accuracy: 90.20), and referring expression comprehension (RefCOCO / RefCOCO+ / RefCOCOg test accuracy: 92.93 / 90.10 / 85.20). Through extensive analyses, we demonstrate that OFA reaches performance comparable to uni-modal pretrained models (e.g., BERT, MAE, MoCo v3, SimCLR v2, etc.) on uni-modal tasks, including NLU, NLG, and image classification, and it effectively transfers to unseen tasks and domains.

Code shall be released soon at https://github.com/OFA-Sys/OFA.