“An Efficient 2D Method for Training Super-Large Deep Learning Models”, Qifan Xu, Shenggui Li, Chaoyu Gong, Yang You (2021-04-12):

Huge neural network models have shown unprecedented performance in real-world applications. However, due to memory constraints, model parallelism must be used to host large models that would otherwise not fit into the memory of a single device.

In this work, we propose Optimus, a highly efficient and scalable 2D-partition paradigm of model parallelism that facilitates the training of infinitely large language models. In Optimus, activations are partitioned and distributed among devices, further reducing redundancy.
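The 2D partitioning the abstract describes can be illustrated with a toy simulation (a hedged sketch, not the Optimus code itself: the grid size, helper names, and use of a serial loop in place of real collective communication are all illustrative assumptions). Both the activation matrix and the weight matrix are split into q×q blocks, so no device ever holds a full copy of either operand, and the product is accumulated SUMMA-style over q broadcast steps:

```python
# Hedged sketch: simulate SUMMA-style 2D-parallel matrix multiply on a
# logical 2x2 device grid. Each "device" (i, j) owns only one b x b block
# of the activations A and the weights B -- the redundancy reduction that
# 2D partitioning provides over 1D schemes, where activations are replicated.

def matmul(X, Y):
    # plain serial matrix multiply on nested lists (reference result)
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def block(M, i, j, b):
    # extract the b x b block at grid position (i, j)
    return [row[j * b:(j + 1) * b] for row in M[i * b:(i + 1) * b]]

q, n = 2, 4          # q x q device grid, n x n global matrices
b = n // q           # per-device block size
A = [[i * n + j + 1 for j in range(n)] for i in range(n)]   # "activations"
B = [[(i + j) % 3 for j in range(n)] for i in range(n)]     # "weights"

# per-device output accumulators C[i][j], each b x b
C = [[[[0] * b for _ in range(b)] for _ in range(q)] for _ in range(q)]

# SUMMA step k: block column k of A is broadcast along grid rows, block
# row k of B along grid columns; every device adds a local partial product.
for k in range(q):
    for i in range(q):
        for j in range(q):
            part = matmul(block(A, i, k, b), block(B, k, j, b))
            for r in range(b):
                for c in range(b):
                    C[i][j][r][c] += part[r][c]

# reassemble the distributed blocks and check against the serial product
C_full = [[C[i // b][j // b][i % b][j % b] for j in range(n)] for i in range(n)]
assert C_full == matmul(A, B)
```

The memory point is visible even in this toy: with a q×q grid, each device stores 1/q² of each matrix, whereas a 1D (Megatron-style) split leaves at least one operand replicated on every device.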

On 64 GPUs of TACC Frontera, Optimus achieves 1.48× speedup for training, 1.78× speedup for inference, and 8× increase in maximum batch size over Megatron. Optimus surpasses Megatron in scaling efficiency by a great margin.

The code is available on GitHub.