“Wide Neural Networks Forget Less Catastrophically”, 2021-10-21:
A primary focus area in continual learning research is alleviating the “catastrophic forgetting” problem in neural networks by designing new algorithms that are more robust to distribution shifts. While recent progress in the continual learning literature is encouraging, our understanding of which properties of neural networks contribute to catastrophic forgetting is still limited.
To address this, instead of focusing on continual learning algorithms, in this work we focus on the model itself and study the impact of the “width” of the neural network architecture on catastrophic forgetting, showing that width has a surprisingly strong effect on forgetting. To explain this effect, we study the learning dynamics of the network from various perspectives, such as gradient orthogonality, sparsity, and the lazy training regime.
We provide potential explanations that are consistent with the empirical results across different architectures and continual learning benchmarks.
…Figure 6a shows the norm of the gradients of layer 1 for different MLP depths. It can be seen from the figure that the gradient norm on the earlier layers increases with depth. For example, when the network is trained on task 5, the gradient norm on layer 1 for an 8-layer network is almost 3× that of the 2-layer network. In contrast, as depicted in Figure 6b, increasing the width has a minimal or even decreasing effect on the gradient norm. We note here that the deep learning community has developed numerous strategies to avoid exploding gradients, and that studying those extensively is not the purpose here. We use the exploding-gradient analysis to understand the negative effect of depth in our experiments.
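The layer-1 gradient-norm measurement above can be sketched as follows. This is a minimal NumPy illustration (not the paper's setup): a randomly initialized tanh MLP with hand-written backpropagation, from which we read off the L2 norm of the first layer's weight gradient as a function of depth; the function name, widths, and batch size are all assumptions for illustration.

```python
import numpy as np

def grad_norm_layer1(depth, width=16, seed=0):
    """Build a `depth`-layer tanh MLP with random weights, run one
    forward/backward pass on random data, and return the L2 norm of
    the gradient w.r.t. the first layer's weights.

    Hypothetical helper for illustration only -- not the paper's code.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((8, width))   # batch of 8 random inputs
    y = rng.standard_normal((8, 1))       # random regression targets
    # depth-1 hidden weight matrices plus a final linear readout
    Ws = [rng.standard_normal((width, width)) / np.sqrt(width)
          for _ in range(depth - 1)]
    Ws.append(rng.standard_normal((width, 1)) / np.sqrt(width))

    # forward pass, caching activations for backprop
    acts = [x]
    for W in Ws[:-1]:
        acts.append(np.tanh(acts[-1] @ W))
    out = acts[-1] @ Ws[-1]

    # backward pass for the MSE loss mean((out - y)^2)
    delta = 2 * (out - y) / len(y)        # dL/d(out)
    grads = [None] * depth
    grads[-1] = acts[-1].T @ delta
    for i in range(depth - 2, -1, -1):
        # tanh'(z) = 1 - tanh(z)^2, and acts[i+1] = tanh(z_i)
        delta = (delta @ Ws[i + 1].T) * (1 - acts[i + 1] ** 2)
        grads[i] = acts[i].T @ delta
    return float(np.linalg.norm(grads[0]))
```

Comparing `grad_norm_layer1(2)` against `grad_norm_layer1(8)` over training steps (rather than at random initialization, as here) is the kind of measurement Figure 6a reports.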
In conclusion, we have observed that wider networks have sparser gradients that are more orthogonal across tasks. In addition, the training dynamics of wide networks become more similar to the lazy training regime. Finally, the gradient norm in wide, shallow models does not grow as fast as in deeper, thinner models.
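The two diagnostics in this conclusion, cross-task gradient orthogonality and gradient sparsity, can be computed with simple vector statistics. A minimal sketch, assuming gradients are available as flattened NumPy arrays (the function names and the sparsity threshold `eps` are assumptions for illustration):

```python
import numpy as np

def cosine_similarity(g1, g2):
    """Cosine of the angle between two flattened gradient vectors;
    values near 0 indicate near-orthogonal task gradients."""
    g1, g2 = g1.ravel(), g2.ravel()
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

def gradient_sparsity(g, eps=1e-3):
    """Fraction of gradient entries with magnitude below eps
    (a simple proxy for how sparse the gradient is)."""
    return float(np.mean(np.abs(g) < eps))
```

For example, `cosine_similarity(grad_task1, grad_task2)` close to zero for a wide network, together with a higher `gradient_sparsity` value, would match the pattern described above.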