Bibliography (3):
GPT-3: Language Models are Few-Shot Learners
MegatronLM: Training Billion+ Parameter Language Models Using GPU Model Parallelism
MMLU: Measuring Massive Multitask Language Understanding