Bibliography (19):

  1. UL2 20B: An Open Source Unified Language Learner (https://research.google/blog/ul2-20b-an-open-source-unified-language-learner/)

  2. UniLM: Unified Language Model Pre-training for Natural Language Understanding and Generation

  3. MAE: Masked Autoencoders Are Scalable Vision Learners

  4. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

  5. GPT-3: Language Models are Few-Shot Learners

  6. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

  7. google-research/ul2 (https://github.com/google-research/google-research/tree/master/ul2)

  8. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

  9. PaLM: Scaling Language Modeling with Pathways

  10. LaMDA: Language Models for Dialog Applications

  11. Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

  12. LaMDA: Our Breakthrough Conversation Technology

  13. Training Verifiers to Solve Math Word Problems

  14. Are NLP Models really able to Solve Simple Math Word Problems?

  15. A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers

  16. Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems

  17. MAWPS: A Math Word Problem Repository

  18. Self-Consistency Improves Chain-of-Thought Reasoning in Language Models

  19. Wikipedia: ROUGE (metric)