“Loss of Plasticity in Deep Continual Learning (Continual Backpropagation)”, Shibhansh Dohare, J. Fernando Hernandez-Garcia, Parash Rahman, Richard S. Sutton, A. Rupam Mahmood (2023-06-23):

[more evidence that large neural nets solve continual learning] Modern deep-learning systems are specialized to problem settings in which training occurs once and then never again, as opposed to continual-learning settings in which training occurs continually. It is well known that when deep-learning systems are applied in a continual-learning setting, they may fail to remember earlier examples. More fundamental, but less well known, is that they may also lose their ability to learn on new examples, a phenomenon called loss of plasticity.

We provide direct demonstrations of loss of plasticity using the MNIST and ImageNet datasets repurposed for continual learning as sequences of tasks. In ImageNet, binary classification performance dropped from 89% accuracy on an early task down to 77%, about the level of a linear network, on the 2000th task.
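For concreteness, here is a minimal sketch of how a class-labelled dataset can be turned into a long sequence of binary-classification tasks. The pairing protocol (random pairs of classes per task) is an illustrative assumption, not necessarily the paper's exact construction:

```python
# Illustrative sketch: turn a set of class labels into a long sequence of
# binary-classification tasks (hypothetical protocol: random class pairs per task).
import random

def make_task_sequence(class_ids, num_tasks, seed=0):
    rng = random.Random(seed)
    return [tuple(rng.sample(class_ids, 2)) for _ in range(num_tasks)]

# e.g. 2,000 binary tasks drawn from the 1,000 ImageNet classes
tasks = make_task_sequence(list(range(1000)), num_tasks=2000)
```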

Loss of plasticity occurred with a wide range of deep network architectures, optimizers, activation functions, batch normalization, and dropout, but was substantially eased by L2-regularization, particularly when combined with weight perturbation.
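A minimal sketch of what an SGD update combining L2-regularization with a small weight perturbation might look like; the `weight_decay` and `sigma` values are hypothetical hyperparameters, not taken from the paper:

```python
# Minimal sketch (PyTorch): SGD step with L2 regularization plus a small
# Gaussian weight perturbation; weight_decay and sigma are hypothetical values.
import torch

def l2_perturb_sgd_step(model, loss, lr=0.01, weight_decay=1e-4, sigma=1e-3):
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            p -= lr * (p.grad + weight_decay * p)   # gradient step with L2 shrinkage
            p += sigma * torch.randn_like(p)        # small random perturbation of the weights
```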

Further, we introduce a new algorithm—continual backpropagation—which slightly modifies conventional backpropagation to reinitialize a small fraction of less-used units after each example and appears to maintain plasticity indefinitely.
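A rough sketch of this selective-reinitialization idea for one fully-connected hidden layer is below. The utility measure (a running average of |activation| times |outgoing weight|) and the replacement-rate and maturity hyperparameters are simplified assumptions rather than the paper's exact algorithm:

```python
# Rough sketch of selective reinitialization for one hidden layer (simplified;
# the utility measure and hyperparameters are assumptions, not the paper's exact recipe).
import torch

class SelectiveReinit:
    def __init__(self, layer_in, layer_out, decay=0.99, replace_rate=1e-4, maturity=100):
        # layer_in / layer_out: the nn.Linear layers feeding into and out of the hidden units
        self.layer_in, self.layer_out = layer_in, layer_out
        self.decay, self.replace_rate, self.maturity = decay, replace_rate, maturity
        n = layer_in.out_features
        self.util = torch.zeros(n)    # running utility per hidden unit
        self.age = torch.zeros(n)     # steps since last (re)initialization
        self.to_replace = 0.0         # fractional units accumulated for replacement

    @torch.no_grad()
    def step(self, hidden):           # hidden: (batch, n) post-activation outputs of layer_in
        contrib = hidden.abs().mean(0) * self.layer_out.weight.abs().sum(0)
        self.util = self.decay * self.util + (1 - self.decay) * contrib
        self.age += 1
        eligible = self.age > self.maturity
        self.to_replace += self.replace_rate * eligible.sum().item()
        while self.to_replace >= 1.0 and eligible.any():
            masked = torch.where(eligible, self.util, torch.full_like(self.util, float("inf")))
            idx = int(torch.argmin(masked))      # least-used mature unit
            torch.nn.init.kaiming_uniform_(self.layer_in.weight[idx:idx + 1])
            self.layer_in.bias[idx] = 0.0
            self.layer_out.weight[:, idx] = 0.0  # zero outgoing weights so the swap barely disturbs outputs
            self.util[idx], self.age[idx] = 0.0, 0.0
            eligible[idx] = False
            self.to_replace -= 1.0
```

The key design choice is that only a tiny fraction of mature, low-utility units is refreshed after each example, so the network keeps most of what it has learned while regaining untrained capacity.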

…Due to Adam’s robustness to non-stationary losses, one would have expected Adam to show less loss of plasticity than backpropagation; the opposite happens. Adam’s loss of plasticity can be categorized as catastrophic, as performance plummets drastically. Consistent with our previous results, Adam scores poorly on the three measures corresponding to the causes of loss of plasticity. There is a dramatic drop in the effective rank of the network trained with Adam. We also tested Adam with different activation functions on the Slowly-Changing Regression problem and found that loss of plasticity with Adam is usually worse than with SGD.
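One standard definition of effective rank (Roy & Vetterli 2007) is the exponential of the entropy of the normalized singular values of the hidden-feature matrix; whether this is exactly the measure used in the paper is an assumption based on the excerpt alone, but it illustrates the kind of diversity statistic being tracked:

```python
# Sketch: effective rank as the exponential of the entropy of the normalized
# singular values (Roy & Vetterli 2007); assumed here as the diversity measure.
import torch

def effective_rank(features, eps=1e-12):
    # features: (num_examples, num_hidden_units) matrix of activations
    s = torch.linalg.svdvals(features)
    p = s / (s.sum() + eps)                       # normalize singular values into a distribution
    entropy = -(p * torch.log(p + eps)).sum()
    return float(torch.exp(entropy))
```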

Many methods that one might have expected to mitigate loss of plasticity instead substantially worsened it. The loss of plasticity with Adam is particularly dramatic: the network trained with Adam quickly lost almost all of its diversity, as measured by the effective rank. This is an important result for deep reinforcement learning, as Adam is the default optimizer there and reinforcement learning is inherently continual due to the ever-changing policy.