- See Also
- Links
- “Med-PaLM: Large Language Models Encode Clinical Knowledge”, Singhal et al 2022
- “One Embedder, Any Task: Instruction-Finetuned Text Embeddings (INSTRUCTOR)”, Su et al 2022
- “Unnatural Instructions: Tuning Language Models With (Almost) No Human Labor”, Honovich et al 2022
- “BLOOMZ/mT0: Crosslingual Generalization through Multitask Finetuning”, Muennighoff et al 2022
- “Help Me Write a Poem: Instruction Tuning As a Vehicle for Collaborative Poetry Writing (CoPoet)”, Chakrabarty et al 2022
- “FLAN: Scaling Instruction-Finetuned Language Models”, Chung et al 2022
- “Language Models Are Multilingual Chain-of-Thought Reasoners”, Shi et al 2022
- “LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging”, Rosenbaum et al 2022
- “Few-shot Adaptation Works With UnpredicTable Data”, Chan et al 2022
- “RST: ReStructured Pre-training”, Yuan & Liu 2022
- “InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning”, Gupta et al 2022
- “Tk-Instruct: Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks”, Wang et al 2022
- “What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?”, Wang et al 2022
- “Reasoning Like Program Executors”, Pi et al 2022
- “ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization”, Xu et al 2022
- “ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning”, Aribandi et al 2021
- “MetaICL: Learning to Learn In Context”, Min et al 2021
- “T0: Multitask Prompted Training Enables Zero-Shot Task Generalization”, Sanh et al 2021
- “FLAN: Finetuned Language Models Are Zero-Shot Learners”, Wei et al 2021
- “CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP”, Ye et al 2021
- “Cross-Task Generalization via Natural Language Crowdsourcing Instructions”, Mishra et al 2021
- “Muppet: Massive Multi-task Representations With Pre-Finetuning”, Aghajanyan et al 2021
- Miscellaneous
- Link Bibliography
See Also
Links
“Med-PaLM: Large Language Models Encode Clinical Knowledge”, Singhal et al 2022 (2022-12-26)
“One Embedder, Any Task: Instruction-Finetuned Text Embeddings (INSTRUCTOR)”, Su et al 2022 (2022-12-19)
“Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor”, Honovich et al 2022 (2022-12-19)
“BLOOMZ/mT0: Crosslingual Generalization through Multitask Finetuning”, Muennighoff et al 2022 (2022-11-03)
“Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing (CoPoet)”, Chakrabarty et al 2022 (2022-10-25)
“FLAN: Scaling Instruction-Finetuned Language Models”, Chung et al 2022 (2022-10-20)
“Language Models are Multilingual Chain-of-Thought Reasoners”, Shi et al 2022 (2022-10-06)
“LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging”, Rosenbaum et al 2022 (2022-09-20)
“Few-shot Adaptation Works with UnpredicTable Data”, Chan et al 2022 (2022-08-01)
“RST: reStructured Pre-training”, Yuan & Liu 2022 (2022-06-22)
“InstructDial: Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning”, Gupta et al 2022 (2022-05-25)
“Tk-Instruct: Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks”, Wang et al 2022 (2022-04-16)
“What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?”, Wang et al 2022 (2022-04-12)
“Reasoning Like Program Executors”, Pi et al 2022 (2022-01-27)
“ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization”, Xu et al 2022 (2022-01-18)
“ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning”, Aribandi et al 2021 (2021-11-22)
“MetaICL: Learning to Learn In Context”, Min et al 2021 (2021-10-29)
“T0: Multitask Prompted Training Enables Zero-Shot Task Generalization”, Sanh et al 2021 (2021-10-15)
“FLAN: Finetuned Language Models Are Zero-Shot Learners”, Wei et al 2021 (2021-09-03)
“CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP”, Ye et al 2021 (2021-04-18)
“Cross-Task Generalization via Natural Language Crowdsourcing Instructions”, Mishra et al 2021 (2021-04-18)
“Muppet: Massive Multi-task Representations with Pre-Finetuning”, Aghajanyan et al 2021 (2021-01-26)
Miscellaneous
Link Bibliography
- https://arxiv.org/abs/2212.13138#google : “Med-PaLM: Large Language Models Encode Clinical Knowledge”, Singhal et al 2022
- https://arxiv.org/abs/2212.09741 : “One Embedder, Any Task: Instruction-Finetuned Text Embeddings (INSTRUCTOR)”, Su et al 2022
- https://arxiv.org/abs/2210.13669 : “Help Me Write a Poem: Instruction Tuning As a Vehicle for Collaborative Poetry Writing (CoPoet)”, Tuhin Chakrabarty, Vishakh Padmakumar, He He
- https://arxiv.org/abs/2210.11416#google : “FLAN: Scaling Instruction-Finetuned Language Models”, Chung et al 2022
- https://arxiv.org/abs/2210.03057#google : “Language Models Are Multilingual Chain-of-Thought Reasoners”, Shi et al 2022
- https://arxiv.org/abs/2204.07705 : “Tk-Instruct: Benchmarking Generalization via In-Context Instructions on 1,600+ Language Tasks”, Wang et al 2022
- https://arxiv.org/abs/2201.11473#microsoft : “Reasoning Like Program Executors”, Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Yan Gao, Qiang Fu, Jian-Guang Lou, Weizhu Chen
- https://arxiv.org/abs/2201.06910 : “ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization”, Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Yanggang Wang, Haiyu Li, Zhilin Yang