“‘PaLM 2’ Tag”, 2023-02-16
Bibliography for tag ai/nn/transformer/gpt/palm/2, most recent first: 28 annotations & 2 links (parent).
- See Also
- Links
- “Alphabet Q3 Earnings Call: CEO Sundar Pichai’s Remarks”
- “Scalable Watermarking for Identifying Large Language Model Outputs”
- “Inference Scaling for Long-Context Retrieval Augmented Generation”, et al 2024
- “Project Zero: From Naptime to Big Sleep: Using Large Language Models To Catch Vulnerabilities In Real-World Code”
- “Training Language Models to Self-Correct via Reinforcement Learning”, et al 2024
- “On Scalable Oversight With Weak LLMs Judging Strong LLMs”, et al 2024
- “Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?”, et al 2024
- “What Are the Odds? Language Models Are Capable of Probabilistic Reasoning”, et al 2024
- “Can Language Models Use Forecasting Strategies?”, et al 2024
- “Grokked Transformers Are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization”, et al 2024
- “Many-Shot In-Context Learning”, et al 2024
- “Few-Shot Recalibration of Language Models”, et al 2024
- “Long-Form Factuality in Large Language Models”, et al 2024
- “When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method”, et al 2024
- “ReST Meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent”, et al 2023
- “Rich Human Feedback for Text-To-Image Generation”, et al 2023
- “Beyond Human Data: Scaling Self-Training for Problem-Solving With Language Models (ReST^EM)”, et al 2023
- “Universal Self-Consistency for Large Language Model Generation”, et al 2023
- “Instruction-Following Evaluation for Large Language Models”, et al 2023
- “A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models”, et al 2023
- “PAIR: Jailbreaking Black Box Large Language Models in 20 Queries”, et al 2023
- “RLAIF: Scaling Reinforcement Learning from Human Feedback With AI Feedback”, et al 2023
- “Android in the Wild: A Large-Scale Dataset for Android Device Control”, et al 2023
- “Google’s Newest AI Model Uses Nearly 5× More Text Data for Training Than Its Predecessor”, 2023
- “Pretraining Language Models With Human Preferences”, et al 2023
- “Working With AI (Part 2): Code Conversion”
- “How Good Are LLMs at Doing ML on an Unknown Dataset?”
- “What Happened to BERT & T5? On Transformer Encoders, PrefixLM and Denoising Objectives”, 2024
- Sort By Magic
- Miscellaneous
- Bibliography