Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.
…Given the above evidence of overparameterization, it does not come as a surprise that BERT can be efficiently compressed with minimal accuracy loss, which would be highly desirable for real-world applications. Such efforts to date are summarized in Table 1. The main approaches are knowledge distillation, quantization, and pruning… If the ultimate goal of training BERT is compression, Li et al. (2020) recommend training larger models and compressing them heavily rather than compressing smaller models lightly.
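Of the three approaches, knowledge distillation can be illustrated with a minimal sketch: a student is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss, following the formulation commonly attributed to Hinton et al. The function names and logit values below are illustrative, not taken from any of the surveyed systems.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about non-target classes.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) between temperature-softened distributions,
    # scaled by T^2 so gradients keep a comparable magnitude across T.
    p = softmax(teacher_logits, temperature)  # teacher soft labels
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# A student that matches the teacher exactly incurs (near-)zero loss;
# a mismatched student is penalized.
teacher = [3.0, 1.0, 0.2]
aligned_loss = distillation_loss(teacher, teacher)
mismatched_loss = distillation_loss([0.2, 1.0, 3.0], teacher)
```

In practice this soft-label term is combined with a standard cross-entropy loss on the true labels, and distilled BERT students are typically smaller Transformers initialized from a subset of the teacher's layers.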
Table 1: Comparison of BERT compression studies. Compression, performance retention, and inference-time speedup figures are given with respect to BERTbase, unless indicated otherwise. Performance retention is measured as the ratio of the average score achieved by a given model to that of BERTbase. The subscript in the model description reflects the number of layers used.