“Noise Transforms Feed-Forward Networks into Sparse Coding Networks”, 2022-09-29:
We find that noise alone induces networks to become top-k/sparse-coding networks. This resolves a difference between biological and artificial neural networks with regard to how sparse they are and how this sparsity is implemented.
A hallmark of biological neural networks, which distinguishes them from their artificial counterparts, is the high degree of sparsity in their activations.
Here, we show that simply injecting symmetric random noise during training on reconstruction or classification tasks eliminates this difference in artificial neural networks with ReLU activation functions: the neurons converge to a sparse coding solution in which only a small fraction are active for any given input. The resulting network learns receptive fields like those of primary visual cortex, and it remains sparse even when the noise is removed in later stages of learning.
[Keywords: sparse coding, sparsity, top-k activation, noise, biologically inspired]
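The abstract describes a simple recipe; below is a minimal PyTorch sketch of one plausible reading, in which zero-mean Gaussian noise is added to the hidden pre-activations of a single-hidden-layer ReLU autoencoder during a reconstruction task. The injection site, architecture, data, and all hyperparameters (`noise_std`, layer sizes, learning rate) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: symmetric noise injection during training of a ReLU
# autoencoder. Assumption: noise enters at the hidden pre-activations;
# the paper's exact injection site may differ.
import torch
import torch.nn as nn

class NoisyReLUAutoencoder(nn.Module):
    def __init__(self, n_in=256, n_hidden=512, noise_std=1.0):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)
        self.noise_std = noise_std  # illustrative value, not from the paper

    def forward(self, x):
        pre = self.enc(x)
        if self.training and self.noise_std > 0:
            # Symmetric (zero-mean Gaussian) noise, only during training.
            pre = pre + self.noise_std * torch.randn_like(pre)
        h = torch.relu(pre)  # ReLU zeroes units not driven above the noise floor
        return self.dec(h), h

model = NoisyReLUAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    # Stand-in random data for illustration; the abstract's V1-like receptive
    # fields suggest the real experiments use image inputs.
    x = torch.randn(64, 256)
    recon, h = model(x)
    loss = loss_fn(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Evaluate with noise off (model.eval() disables injection) and measure the
# fraction of active hidden units per input.
model.eval()
with torch.no_grad():
    _, h = model(torch.randn(64, 256))
    print("active fraction:", (h > 0).float().mean().item())
```

Checking the fraction of nonzero hidden units at evaluation time, with the noise disabled, is one way to verify the abstract's claim that the learned sparsity persists after the noise is removed.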