Bibliography (3):
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
MegatronLM: Training Billion+ Parameter Language Models Using GPU Model Parallelism
Turing-NLG: A 17-billion-parameter language model by Microsoft