“Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond”, 2024-10-31:
Deep learning sometimes appears to work in unexpected ways. In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network: a sequence of first-order approximations, one per training step, telescoped into a single empirically operational tool for practical analysis.
Across 3 case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena in the literature, including double descent, grokking (suggesting that grokking may reflect a transition into a measurably benign overfitting regime during training), linear mode connectivity, and the challenges of applying deep learning to tabular data, highlighting that this model lets us construct and extract metrics that help predict and understand the a priori unexpected performance of neural networks.
We also demonstrate that this model provides a pedagogical formalism that isolates components of the training process even in complex contemporary settings, offers a lens to reason about the effects of design choices such as architecture and optimization strategy, and reveals surprising parallels between neural network learning and gradient boosting.
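The telescoping construction described in the abstract (each training step contributes a first-order Taylor expansion of the network output in parameter space, and summing these per-step linearizations from initialization approximates the final trained function) can be sketched in a few lines. The following is a minimal numpy illustration, not the authors' code; the toy network, data, learning rate, and step count are all assumptions chosen for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = sin(2x) with a 1-hidden-layer tanh network.
x = rng.uniform(-1.0, 1.0, 32)
y = np.sin(2.0 * x)

m = 16                          # hidden units
w1 = rng.normal(size=m) * 0.5   # input-to-hidden weights
w2 = rng.normal(size=m) * 0.5   # hidden-to-output weights

def forward(w1, w2, x):
    return np.tanh(np.outer(x, w1)) @ w2   # shape (n,)

lr, steps = 0.02, 300
f0 = forward(w1, w2, x)
telescoped = f0.copy()   # f_0(x) plus accumulated first-order deltas

for _ in range(steps):
    h = np.tanh(np.outer(x, w1))   # (n, m) hidden activations
    dh = 1.0 - h**2                # tanh'
    resid = h @ w2 - y             # current residuals f_t(x) - y

    # Gradients of MSE loss 0.5 * mean(resid^2) w.r.t. parameters.
    g2 = h.T @ resid / len(x)
    g1 = (resid[:, None] * dh * w2 * x[:, None]).mean(axis=0)

    # Per-example gradients of the *output* f_t(x) w.r.t. parameters.
    df_dw2 = h                     # (n, m)
    df_dw1 = dh * w2 * x[:, None]  # (n, m)

    # Telescoping: add the first-order (Taylor) approximation of this
    # step's change in the function, grad_theta f_t(x) . delta_theta_t.
    telescoped += df_dw1 @ (-lr * g1) + df_dw2 @ (-lr * g2)

    w1 -= lr * g1
    w2 -= lr * g2

true_out = forward(w1, w2, x)
gap = np.abs(telescoped - true_out).max()   # linearization error after all steps
```

When `gap` stays small, the trained function decomposes into its initialization plus a sum of per-step linear corrections fit to the running residuals, which is the structural parallel to gradient boosting (a sum of weak learners fit to residuals) that the abstract alludes to.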