"Trained on 100 Million Words and Still in Shape: BERT Meets British National Corpus", 2023-03-17:
While modern masked language models (LMs) are trained on ever larger corpora, we explore the effects of down-scaling training to a modestly sized but representative, well-balanced, and publicly available English text source: the British National Corpus.
We show that pre-training on this carefully curated corpus can achieve better performance than the original BERT model. We argue that this type of corpus has great potential as a language modeling benchmark.
To showcase this potential, we present fair, reproducible, and data-efficient comparative studies of LMs, in which we evaluate several training objectives and model architectures and replicate previous empirical results in a systematic way.
We propose an optimized LM architecture called LTG-BERT.
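As an illustration of the kind of training objective such comparative studies evaluate, here is a minimal sketch of the standard BERT masked-language-modeling corruption step. The 15% selection rate and the 80/10/10 mask/random/keep split follow the original BERT recipe; the function name, its arguments, and the `special_mask` input are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, special_mask, mlm_prob=0.15):
    """BERT-style MLM corruption (a sketch, not the paper's code).

    Selects ~15% of non-special tokens; of those, replaces 80% with [MASK],
    10% with a random token, and keeps 10% unchanged. Returns
    (corrupted_ids, labels), with labels set to -100 at unselected positions
    so they are ignored by the cross-entropy loss.
    """
    labels = input_ids.clone()
    # Sample positions to predict, never touching special tokens ([CLS], [SEP], padding).
    prob = torch.full(input_ids.shape, mlm_prob)
    prob.masked_fill_(special_mask, 0.0)
    selected = torch.bernoulli(prob).bool()
    labels[~selected] = -100  # only selected positions contribute to the loss

    corrupted = input_ids.clone()
    # 80% of selected positions -> [MASK]
    masked = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & selected
    corrupted[masked] = mask_token_id
    # Half of the remainder (10% overall) -> random token; the rest stay unchanged.
    random = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & selected & ~masked
    corrupted[random] = torch.randint(vocab_size, input_ids.shape)[random]
    return corrupted, labels

# Example call, using bert-base-uncased conventions ([CLS]=101, [SEP]=102, [MASK]=103):
ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]])
special = (ids == 101) | (ids == 102)
corrupted, labels = mask_tokens(ids, mask_token_id=103, vocab_size=30522, special_mask=special)
```

Variants of this corruption scheme (e.g. whole-word or span masking) differ mainly in how the `selected` positions are grouped, which is one axis along which training objectives can be compared in a controlled, data-efficient setup like the one described above.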