“Building Machine Translation Systems for the Next Thousand Languages”, Ankur Bapna, Isaac Caswell, Julia Kreutzer, Orhan Firat, Daan van Esch, Aditya Siddhant, Mengmeng Niu, Pallavi Baljekar, Xavier Garcia, Wolfgang Macherey, Theresa Breiner, Vera Axelrod, Jason Riesa, Yuan Cao, Mia Xu Chen, Klaus Macherey, Maxim Krikun, Pidong Wang, Alexander Gutkin, Apurva Shah, Yanping Huang, Zhifeng Chen, Yonghui Wu, Macduff Hughes. 2022-05-09:

In this paper we share findings from our effort to build practical machine translation (MT) systems capable of translating across over 1,000 languages.

We describe results in 3 research domains: (1) Building clean, web-mined datasets for 1500+ languages by leveraging semi-supervised pre-training for language identification and developing data-driven filtering techniques; (2) Developing practical MT models for under-served languages by leveraging massively multilingual models trained with supervised parallel data for over 100 high-resource languages and monolingual datasets for an additional 1,000+ languages; and (3) Studying the limitations of evaluation metrics for these languages and conducting qualitative analysis of the outputs from our MT models, highlighting several frequent error modes of these types of models.
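The filtering step in (1) can be illustrated with a minimal sketch. The paper's actual pipeline uses trained language-identification models with semi-supervised pre-training; the toy scorer below, which merely checks the fraction of characters in a target script, is a stand-in assumption for illustration only, and the Tibetan example data is invented.

```python
# Hypothetical sketch of LangID-style filtering for web-mined text.
# A real system would use a trained LangID classifier; this toy
# character-set scorer is an assumption standing in for it.

def toy_langid_score(sentence: str, charset: set) -> float:
    """Fraction of non-space characters belonging to the target script."""
    chars = [c for c in sentence if not c.isspace()]
    if not chars:
        return 0.0
    return sum(c in charset for c in chars) / len(chars)

def filter_corpus(sentences, charset, threshold=0.8):
    """Keep sentences whose script-based score clears the threshold."""
    return [s for s in sentences if toy_langid_score(s, charset) >= threshold]

# Usage: filter a noisy "Tibetan" crawl with the Unicode Tibetan block.
tibetan_chars = {chr(cp) for cp in range(0x0F00, 0x0FFF)}
crawl = ["བོད་སྐད།", "click here to download", "བཀྲ་ཤིས་བདེ་ལེགས།"]
clean = filter_corpus(crawl, tibetan_chars)  # drops the English boilerplate
```

A production filter would additionally score semantic and fluency signals, since script-matching alone cannot separate a target language from related languages sharing the same script.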

We hope that our work provides useful insights to practitioners working towards building MT systems for currently understudied languages, and highlights research directions that can address the weaknesses of massively multilingual models in data-sparse settings.

Figure 2: Plot of RTTLANGIDCHRF scores (loose) for languages as a function of log monolingual data size. With over 100,000 sentences, almost any language does reasonably well. Outliers are labeled with their language code. The largest outliers are English in Cyrillic script (en-Cyrl), which has an excellent RTTLANGIDCHRF score but very little monolingual data, and Tibetan (bo), which has plenty of monolingual text but very poor performance. In general, the languages above the trend line are close to high-resource languages (where the metric may also be fooled), and the languages below the trend line are linguistically distant from other languages in the model or have poor-quality data. Languages added to Google Translate as part of this effort (all unsupervised except Sorani Kurdish (ckb)) are marked with stars.
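The ChrF component of RTTLANGIDCHRF can be sketched as follows. The idea is to round-trip translate a monolingual sentence (e.g. source → English → source) and compare the round trip against the original with character n-gram F-score; the full metric as named also involves a LangID check on the translations, which is omitted here. This is a minimal ChrF implementation from the standard definition, not the paper's exact scoring code.

```python
# Minimal ChrF sketch: character n-gram F_beta averaged over n = 1..max_n.
# Used here to score a round-trip translation against the original sentence.
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Character n-gram counts, ignoring spaces."""
    text = text.replace(" ", "")
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(reference: str, hypothesis: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average char n-gram F_beta; beta = 2 weights recall twice as much."""
    scores = []
    for n in range(1, max_n + 1):
        ref, hyp = char_ngrams(reference, n), char_ngrams(hypothesis, n)
        if not ref or not hyp:
            continue
        overlap = sum((ref & hyp).values())  # clipped n-gram matches
        precision = overlap / sum(hyp.values())
        recall = overlap / sum(ref.values())
        if precision + recall == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * precision * recall
                      / (beta**2 * precision + recall))
    return sum(scores) / len(scores) if scores else 0.0

# Usage: a perfect round trip scores 1.0; a degraded one scores lower.
original = "the cat sat on the mat"
round_trip = "the cat sat on a mat"
score = chrf(original, round_trip)  # high but below 1.0
```

A round trip through a weak model tends to drift, so low ChrF on round-trip output flags languages with poor translation quality or noisy data, which is what the trend line in Figure 2 visualizes against monolingual data size.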