"Same Pre-Training Loss, Better Downstream: Implicit Bias Matters for Language Models", Hong Liu, Sang Michael Xie, Zhiyuan Li, Tengyu Ma (2022-10-25):

Language modeling on large-scale datasets leads to impressive performance gains on various downstream language tasks. The validation pre-training loss (or perplexity in autoregressive language modeling) is often used as the evaluation metric when developing language models since the pre-training loss tends to be well-correlated with downstream performance (which is itself difficult to evaluate comprehensively).

Contrary to this conventional wisdom, this paper shows that (1) pre-training loss cannot fully explain downstream performance and (2) the flatness of the model (i.e., of the pre-training loss landscape at the solution) is well-correlated with downstream performance where pre-training loss is not.

On simplified datasets, we identify 3 ways to produce models with the same (statistically optimal) pre-training loss but different downstream performance: continuing pre-training after convergence, increasing the model size, and changing the training algorithm. These experiments demonstrate the existence of an implicit bias of pre-training algorithms/optimizers: among models with the same minimal pre-training loss, they implicitly prefer more transferable ones. Toward understanding this implicit bias, we prove that SGD with standard mini-batch noise implicitly prefers flatter minima in language models, and empirically observe a strong correlation between flatness and downstream performance among models with the same minimal pre-training loss.
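The distinction the abstract draws can be made concrete with a toy flatness proxy. A common (though not necessarily the paper's exact) measure of sharpness is the average loss increase under small Gaussian perturbations of the weights. The sketch below, which assumes simple 1-D quadratic losses as stand-ins for two models, shows how two "models" with identical minimal loss can nonetheless differ sharply under this proxy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "models": both attain the same minimal loss (0 at w = 0),
# but with different curvature a, which stands in for sharpness of the minimum.
def make_loss(a):
    return lambda w: a * w ** 2

sharp_loss = make_loss(10.0)  # sharp minimum (high curvature)
flat_loss = make_loss(0.1)    # flat minimum (low curvature)

def sharpness(loss, w_star, sigma=0.1, n=10_000):
    """Flatness proxy: mean loss increase under Gaussian weight perturbation."""
    eps = rng.normal(0.0, sigma, size=n)
    return np.mean(loss(w_star + eps)) - loss(w_star)

s_sharp = sharpness(sharp_loss, 0.0)
s_flat = sharpness(flat_loss, 0.0)

# Same pre-training loss at the minimum, very different flatness scores.
assert sharp_loss(0.0) == flat_loss(0.0) == 0.0
assert s_sharp > s_flat
```

For a quadratic loss `a * w**2`, the expected proxy value is `a * sigma**2`, so the sharp model scores roughly 100Ɨ higher here; the paper's claim is that, among real language models tied at the optimal pre-training loss, the flatter ones transfer better downstream.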

We also prove in a synthetic language setting that among the models with the minimal pre-training loss, the flattest model transfers to downstream tasks.