“Deep Learning Training in Facebook Data Centers: Design of Scale-Up and Scale-Out Systems”, Maxim Naumov, John Kim, Dheevatsa Mudigere, Srinivas Sridharan, Xiaodong Wang, Whitney Zhao, Serhat Yilmaz, Changkyu Kim, Hector Yuen, Mustafa Ozdal, Krishnakumar Nair, Isabel Gao, Bor-Yiing Su, Jiyan Yang, Mikhail Smelyanskiy (2020-03-20):

Large-scale training is important to ensure the high performance and accuracy of machine-learning models. At Facebook, we train many different kinds of models, including computer vision, video, and language models.

However, in this paper, we focus on the deep learning recommendation models (DLRMs), which are responsible for more than 50% of the training demand in our data centers. Recommendation models present unique training challenges because they stress not only compute but also memory capacity, memory bandwidth, and network bandwidth.
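To make that resource profile concrete, below is a minimal sketch of a DLRM-style model in PyTorch. The class name and all sizes are illustrative assumptions, not Facebook's production configuration; the point is that the embedding tables dominate memory capacity, their sparse lookups dominate memory bandwidth, and the small MLPs supply most of the compute.

```python
# Minimal DLRM-style sketch; names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class TinyDLRM(nn.Module):
    def __init__(self, num_tables=8, rows_per_table=10_000_000, dim=64):
        super().__init__()
        # Embedding tables dominate memory *capacity*: 8 tables x 10M rows
        # x 64 dims x 4 bytes ~= 20 GB, already beyond a single accelerator's
        # HBM, and production tables are far larger.
        self.tables = nn.ModuleList(
            nn.EmbeddingBag(rows_per_table, dim, mode="sum")
            for _ in range(num_tables)
        )
        # The MLPs are small and compute-bound by comparison
        # (13 dense inputs here is an assumed, Criteo-like width).
        self.bottom = nn.Sequential(nn.Linear(13, dim), nn.ReLU())
        self.top = nn.Sequential(
            nn.Linear(dim * (num_tables + 1), 1), nn.Sigmoid()
        )

    def forward(self, dense, sparse_ids, offsets):
        # Each lookup is a sparse, irregular gather: little arithmetic,
        # heavy memory-*bandwidth* pressure.
        emb = [t(sparse_ids[i], offsets[i]) for i, t in enumerate(self.tables)]
        x = torch.cat([self.bottom(dense)] + emb, dim=1)
        return self.top(x)
```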

As model size and complexity increase, efficiently scaling training becomes a challenge. To address it, we design Zion—Facebook’s next-generation large-memory training platform that consists of both CPUs and accelerators.

Finally, we discuss the design requirements of future scale-out training systems.
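The full paper trains DLRMs with hybrid parallelism: the memory-heavy embedding tables are partitioned model-parallel across devices while the MLPs run data-parallel, so scale-out traffic consists of alltoall exchanges of embedding outputs and allreduce of MLP gradients. Below is a minimal sketch of that communication pattern, assuming an initialized torch.distributed process group; the function names and tensor shapes are illustrative.

```python
# Sketch of the two collectives in hybrid-parallel DLRM training, assuming an
# initialized torch.distributed process group; names/shapes are illustrative.
import torch
import torch.distributed as dist

def exchange_embeddings(send_chunks):
    # Model parallelism: each rank owns different embedding tables, so its
    # lookup results must be redistributed to the ranks that own the
    # corresponding minibatch samples -- an alltoall exchange.
    recv_chunks = [torch.empty_like(t) for t in send_chunks]
    dist.all_to_all(recv_chunks, send_chunks)
    return recv_chunks

def average_mlp_gradients(mlp):
    # Data parallelism: MLP parameters are replicated on every rank, so their
    # gradients are summed with allreduce and averaged after each backward.
    world = dist.get_world_size()
    for p in mlp.parameters():
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world
```

This pattern is why the paper emphasizes network bandwidth: the alltoall volume grows with batch size and embedding dimension on every iteration, independent of how much compute each accelerator provides.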