“M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-Training”, 2020-06-04:
We present M3P, a Multitask Multilingual Multimodal Pre-trained model that combines multilingual pre-training and multimodal pre-training into a unified framework via multitask pre-training.
Our goal is to learn universal representations that can map objects occurring in different modalities or texts expressed in different languages into a common semantic space. In addition, to explicitly encourage fine-grained alignment between images and non-English languages, we propose Multimodal Code-switched Training (MCT), which combines monolingual pre-training and multimodal pre-training via a code-switch strategy.
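A minimal sketch of the code-switch idea, not the authors' implementation: given bilingual dictionaries, each English word in a caption is replaced with a translation with some probability, yielding a mixed-language caption that is still paired with the original image during pre-training. The dictionary entries, the replacement probability `p_replace`, and the per-word language sampling below are illustrative assumptions.

```python
import random

# Toy bilingual dictionaries (hypothetical entries for illustration;
# M3P draws on dictionaries covering many languages).
DICTIONARIES = {
    "de": {"dog": "Hund", "runs": "läuft", "park": "Park"},
    "fr": {"dog": "chien", "runs": "court", "park": "parc"},
}

def code_switch(caption, p_replace=0.5, seed=None):
    """Randomly replace English words with dictionary translations,
    one hedged reading of MCT's code-switch strategy (exact sampling
    details are assumptions, not taken from the paper)."""
    rng = random.Random(seed)
    switched = []
    for word in caption.split():
        translations = [d[word] for d in DICTIONARIES.values() if word in d]
        if translations and rng.random() < p_replace:
            # Swap in a translation from a randomly chosen language.
            switched.append(rng.choice(translations))
        else:
            switched.append(word)
    return " ".join(switched)

print(code_switch("the dog runs in the park", seed=0))
# e.g. "the Hund runs in the parc" -- a mixed-language caption kept
# paired with the original image during pre-training
```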
Experiments are performed on the multilingual image retrieval task across two benchmark datasets, MS COCO and Multi30K.
M3P achieves results comparable to the state of the art for English and new state-of-the-art results for non-English languages.