“Self-Expanding Neural Networks”, Rupert Mitchell, Martin Mundt, Kristian Kersting (2023-07-10)⁠:

[cf. Net2Net] The results of training a neural network depend heavily on the architecture chosen, and even a small modification to the network’s size typically requires restarting training. In contrast, we begin training with a small architecture, increase its capacity only as the problem requires, and avoid interfering with previous optimization while doing so.

We thereby introduce a natural-gradient-based approach which intuitively expands both the width and depth of a neural network when this is likely to substantially reduce the hypothetical converged training loss. We prove an upper bound on the “rate” at which neurons are added, and a computationally cheap lower bound on the expansion score.
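The decision rule can be caricatured as follows, under the simplifying assumptions of a diagonal Fisher approximation and a natural-gradient utility of the form η = gᵀF⁻¹g: a proposed neuron is accepted when the ratio of the expanded utility to the current utility exceeds a threshold (mirroring the ∆η′/ηc score in the figures). The function names `expansion_score` and `maybe_expand`, and the threshold value, are illustrative, not from the paper:

```python
import numpy as np

def expansion_score(grad, fisher_diag, extra_grad, extra_fisher_diag):
    """Ratio of natural-gradient utility with vs. without a proposed neuron.

    Under a diagonal Fisher approximation, eta = g^T F^{-1} g reduces to
    sum(g_i^2 / F_ii). A proposal contributes additional coordinates.
    """
    eta_current = np.sum(grad ** 2 / fisher_diag)
    eta_expanded = eta_current + np.sum(extra_grad ** 2 / extra_fisher_diag)
    return eta_expanded / eta_current

def maybe_expand(grad, fisher_diag, proposals, threshold=1.05):
    """Return the best proposal whose normalized score exceeds the threshold.

    proposals maps a name to (extra_grad, extra_fisher_diag) for the
    parameters a new neuron would introduce; returns (None, threshold)
    if no proposal clears the bar.
    """
    best, best_score = None, threshold
    for name, (eg, ef) in proposals.items():
        score = expansion_score(grad, fisher_diag, eg, ef)
        if score > best_score:
            best, best_score = name, score
    return best, best_score
```

In this sketch a neuron in a high-gradient region (large `extra_grad` relative to its Fisher entry) yields a large score and is accepted, while one in an already well-fit region is rejected, matching the accepted/rejected proposals shown in Figure 2.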

We illustrate the benefits of such Self-Expanding Neural Networks in both classification and regression problems, including those where the appropriate architecture size is substantially uncertain a priori.

Figure 2: A single-layer SENN (black, solid) is trained to approximate a target function (red, dashed) via non-linear least-squares regression on samples (blue markers). The locations of existing neurons are shown by vertical lines. The lower figures show ∆η′/ηc as a function of the location and scale of the nonlinearity introduced by a new neuron. Accepted and rejected proposals are marked in red and black respectively. From left to right we see the landscape before and immediately after the 4th neuron is added, before the 5th neuron is added, and at the end of training. SENN adds neurons where they are relevant in order to achieve a good final fit.
Figure 3: Classification is performed with SENN on the half-moons dataset. The normalized layer addition score ∆η′/ηc is shown as a function of optimization steps; the horizontal bar shows the point above which a layer will be added. The score rises over three phases, during which the SENN has zero, one, and then two hidden layers. The respective decision boundary is shown at the beginning and end of each phase. These layer additions allow SENN to represent more complex decision boundaries when required for global expressivity.