“Random Initializations Performing above Chance and How to Find Them”, Frederik Benzing, Simon Schug, Robert Meier, Johannes von Oswald, Yassir Akram, Nicolas Zucchet, Laurence Aitchison, Angelika Steger (2022-09-15):

Neural networks trained with stochastic gradient descent (SGD) starting from different random initializations typically find functionally very similar solutions, raising the question of whether there are meaningful differences between different SGD solutions.

Entezari et al. recently conjectured that despite different initializations, the solutions found by SGD lie in the same loss valley after taking into account the permutation invariance of neural networks. Concretely, they hypothesize that any two solutions found by SGD can be permuted such that the linear interpolation between their parameters forms a path without statistically significant increases in loss.
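The permutation invariance underlying the conjecture can be illustrated concretely: permuting the hidden units of a fully connected layer (and applying the matching inverse permutation to the next layer's input weights) leaves the network's function unchanged, so aligned parameter sets can then be linearly interpolated. The sketch below is illustrative only and is not the authors' algorithm; the network shape and helper names are assumptions:

```python
import numpy as np

# Hypothetical two-layer MLP: y = W2 @ relu(W1 @ x).
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3
W1 = rng.normal(size=(d_hidden, d_in))
W2 = rng.normal(size=(d_out, d_hidden))

# Permute hidden units: reorder rows of W1 and, correspondingly,
# columns of W2. The composed function is unchanged.
perm = rng.permutation(d_hidden)
W1_p = W1[perm, :]
W2_p = W2[:, perm]

relu = lambda z: np.maximum(z, 0)
x = rng.normal(size=d_in)
y = W2 @ relu(W1 @ x)
y_p = W2_p @ relu(W1_p @ x)
assert np.allclose(y, y_p)  # identical outputs after permutation

# Linear interpolation between two aligned parameter sets
# (the path whose loss the conjecture claims does not rise):
def interpolate(theta_a, theta_b, alpha):
    return [(1 - alpha) * a + alpha * b for a, b in zip(theta_a, theta_b)]
```

Testing the hypothesis then amounts to finding the permutation that best aligns two independently trained networks and evaluating the loss along this interpolation path.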

Here, we use a simple but powerful algorithm to find such permutations, which allows us to obtain direct empirical evidence that the hypothesis holds in fully connected networks. Strikingly, we find that two networks already lie in the same loss valley at the time of initialization: averaging their random, but suitably permuted, initializations performs above chance. In contrast, for convolutional architectures, our evidence suggests that the hypothesis does not hold; especially in the large-learning-rate regime, SGD seems to discover diverse modes.