“Scale Efficiently: Insights from Pre-Training and Fine-Tuning Transformers”, Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler (2021-09-22):

There remain many open questions pertaining to the scaling behavior of Transformer architectures. These scaling decisions and findings can be critical, as training runs often come with an associated computational cost that has both financial and environmental impact.

The goal of this paper is to present scaling insights from pretraining and finetuning Transformers. While Kaplan et al. (2020) present a comprehensive study of the scaling behavior of Transformer language models, their scope is limited to the upstream (pretraining) loss. Therefore, it remains unclear whether this set of findings transfers to downstream tasks within the context of the pretrain-finetune paradigm.

The key findings of this paper are as follows: (1) we show that aside from model size alone, model shape matters for downstream fine-tuning, (2) scaling protocols operate differently at different compute regions, (3) the widely adopted T5-base and T5-large sizes are Pareto-inefficient.
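As a rough illustration of why "model shape" is a separate axis from model size, one can use the standard approximation N ≈ 12 · n_layers · d_model² for a Transformer's non-embedding parameter count (as in Kaplan et al. 2020): two shapes with very different depth/width trade-offs can have essentially identical parameter counts, yet the paper finds they can fine-tune quite differently. The configurations below are hypothetical examples, not the paper's actual models:

```python
def transformer_params(n_layers: int, d_model: int) -> int:
    """Approximate non-embedding parameter count of a Transformer stack.

    Uses the common estimate from Kaplan et al. 2020: each layer has
    ~4*d^2 attention weights plus ~8*d^2 feed-forward weights
    (assuming d_ff = 4*d_model), i.e. ~12*d^2 per layer.
    """
    return 12 * n_layers * d_model ** 2

# Two hypothetical shapes with identical approximate parameter counts:
wide_shallow = transformer_params(n_layers=6, d_model=1024)
deep_narrow = transformer_params(n_layers=24, d_model=512)
print(wide_shallow, deep_narrow)  # both ~75.5M parameters
```

Under this approximation the two shapes are indistinguishable by parameter count alone, which is why upstream scaling curves that condition only on size cannot capture the shape effects the paper reports downstream.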

To this end, we present improved scaling protocols whereby our redesigned models achieve similar downstream fine-tuning quality while having 50% fewer parameters and training 40% faster compared to the widely adopted T5-base model.

We publicly release over 100 pretrained checkpoints of different T5 configurations to facilitate future research and analysis.

Figure 1: The predictability and unpredictability of pre-training versus fine-tuning. While upstream pre-training performance, measured by negative log-perplexity, scales with model size largely independently of model shape, downstream performance (SuperGLUE (avg) score) does not. This indicates that model shape plays an important role in how models perform on the target task, and that performance is not merely a function of parameter count.

…The overall findings and insights of the paper can be summarized as follows: