Bibliography (3):
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
XLNet: Generalized Autoregressive Pretraining for Language Understanding
Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context