Bibliography (11):

  1. Natural Instructions dataset (Allen Institute for AI): https://instructions.apps.allenai.org/

  2. Cross-Task Generalization via Natural Language Crowdsourcing Instructions

  3. Attention Is All You Need

  4. T0: Multitask Prompted Training Enables Zero-Shot Task Generalization

  5. Emergent Abilities of Large Language Models

  6. Scaling to Very Very Large Corpora for Natural Language Disambiguation

  7. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era

  8. Deep Learning Scaling is Predictable, Empirically

  9. T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

  10. ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization

  11. ROUGE (metric) — Wikipedia