"One Model, Multiple Modalities: A Sparsely Activated Approach for Text, Sound, Image, Video and Code", Yong Dai, Duyu Tang, Liangxin Liu, Minghuan Tan, Cong Zhou, Jingquan Wang, Zhangyin Feng, Fan Zhang, Xueyu Hu, Shuming Shi, 2022-05-12:

People perceive the world with multiple senses (e.g., hearing sounds, reading words, and seeing objects), yet most existing AI systems process only a single modality. This paper presents an approach that handles multiple modalities of information with a single model. In our "SkillNet" model, different parts of the parameters are specialized for processing different modalities. Unlike traditional dense models that always activate all model parameters, our model sparsely activates only the parts whose skills are relevant to the task at hand. This design enables SkillNet to learn skills in a more interpretable way.
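The sparse-activation idea above can be sketched in a few lines. The class and parameter layout below are hypothetical illustrations (the paper does not publish this code): each modality gets its own parameter block ("skill"), and a forward pass touches only the blocks relevant to the input, leaving the rest idle.

```python
import numpy as np

class SkillNetSketch:
    """Minimal sketch of sparse skill activation (hypothetical, not the paper's code)."""

    def __init__(self, dim, modalities):
        rng = np.random.default_rng(0)
        # One parameter block ("skill") per modality; a dense model would
        # apply all of these to every input regardless of modality.
        self.skills = {m: rng.standard_normal((dim, dim)) for m in modalities}

    def forward(self, x, active_modalities):
        # Sparse activation: only the blocks whose skills are relevant to
        # the current task are computed; the other parameters stay inactive.
        out = np.zeros_like(x)
        for m in active_modalities:
            out += x @ self.skills[m]
        return out / len(active_modalities)

net = SkillNetSketch(dim=4, modalities=["text", "image", "sound", "video", "code"])
x = np.ones(4)
y = net.forward(x, ["text"])            # a text task activates one skill
z = net.forward(x, ["text", "image"])   # a text-image task activates two
```

Because unused skills are never computed, the number of activated parameters per task stays roughly constant even as more modalities are added, which is what lets the later comparison count "activated parameters" rather than total parameters.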

We develop our model for 5 modalities: text, image, sound, video, and code. Results show that SkillNet performs comparably to 5 modality-specific fine-tuned models. Moreover, our model supports self-supervised pretraining in the same sparsely activated manner, yielding better-initialized parameters for the different modalities.

We find that pretraining improves SkillNet's performance on all 5 modalities, on par with or better than baselines with modality-specific pretraining. On the task of Chinese text-to-image retrieval, our final system achieves higher accuracy than existing leading systems, including WukongViT-B and Wenlan 2.0, while activating fewer parameters.