"Interlocking Backpropagation: Improving Depthwise Model-Parallelism", Aidan N. Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, Jeff Dean, Yarin Gal (2020-10-08):

The number of parameters in state-of-the-art neural networks has increased drastically in recent years. This surge of interest in large-scale neural networks has motivated the development of new distributed training strategies that enable such models. One such strategy is model-parallel distributed training. Unfortunately, model-parallelism suffers from poor resource utilisation, wasting much of the available compute.

In this work, we improve upon recent developments in an idealised model-parallel optimisation setting: local learning. Motivated by poor resource utilisation, we introduce a class of intermediary strategies between local and global learning, referred to as interlocking backpropagation. These strategies preserve many of the compute-efficiency advantages of local optimisation, while recovering much of the task performance achieved by global optimisation. We assess our strategies on both image-classification ResNets and Transformer language models, finding that our strategy consistently outperforms local learning in terms of task performance, and outperforms global learning in training efficiency.
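The spectrum from local to global learning can be viewed as a rule for how far the gradient from each module's loss is allowed to flow backward through the depthwise-split network before being stopped. The sketch below is a hypothetical illustration of that idea only, not the paper's actual algorithm; the function name and the overlap parameter `k` are our own notation:

```python
def modules_reached_by_loss(i: int, k: int) -> list[int]:
    """Indices of modules updated by the local loss attached to module i,
    when its gradient may flow back through at most k earlier modules
    before being stopped (detached). k is a hypothetical overlap knob."""
    return list(range(max(0, i - k), i + 1))

n = 4  # hypothetical depthwise split into 4 module groups

# Local learning: each module's loss updates only that module (k = 0).
local = [modules_reached_by_loss(i, 0) for i in range(n)]

# Global learning: a single loss after the last module updates everything.
global_ = modules_reached_by_loss(n - 1, n - 1)

# One interlocking scheme: each local loss also reaches the preceding
# module (k = 1), so gradient signal overlaps across neighbouring groups.
interlocking = [modules_reached_by_loss(i, 1) for i in range(n)]
```

With `k = 0` no module ever sees a training signal from later layers (local learning); with a full-depth `k` every loss behaves like global backpropagation; intermediate values interlock neighbouring groups while still letting earlier devices start their backward pass without waiting for the final loss.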