“‘PaLM’ Tag”, 2022-04-05:
Bibliography for tag
ai/nn/transformer/gpt/palm, most recent first: 2 related tags, 40 annotations, & 28 links (parent).
- See Also
- Links
- “OmegaPRM: Improve Mathematical Reasoning in Language Models by Automated Process Supervision”, et al 2024
- “To Believe or Not to Believe Your LLM”, et al 2024
- “Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-Modal LLMs in Video Analysis”, et al 2024
- “LLMs Achieve Adult Human Performance on Higher-Order Theory of Mind Tasks”, et al 2024
- “VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?”, et al 2024
- “
ArtPrompt: ASCII Art-Based Jailbreak Attacks against Aligned LLMs”, et al 2024- “Beyond Memorization: Violating Privacy Via Inference With Large Language Models”, et al 2023
- “HyperAttention: Long-Context Attention in Near-Linear Time”, et al 2023
- “FreshLLMs: Refreshing Large Language Models With Search Engine Augmentation”, et al 2023
- “How Robust Is Google’s Bard to Adversarial Image Attacks?”, et al 2023
- “Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models”, et al 2023
- “CausalLM Is Not Optimal for In-Context Learning”, et al 2023
- “Simple Synthetic Data Reduces Sycophancy in Large Language Models”, et al 2023
- “Large Language Models Are Few-Shot Health Learners”, et al 2023
- “SeeGULL: A Stereotype Benchmark With Broad Geo-Cultural Coverage Leveraging Generative Models”, et al 2023
- “Q2d: Turning Questions into Dialogs to Teach Models How to Search”, et al 2023
- “Larger Language Models Do In-Context Learning Differently”, et al 2023
- “Characterizing Attribution and Fluency Tradeoffs for Retrieval-Augmented Large Language Models”, et al 2023
- “Interactive-Chain-Prompting (INTERCPT): Ambiguity Resolution for Crosslingual Conditional Generation With Interaction”, et al 2023
- “Memory Augmented Large Language Models Are Computationally Universal”, 2023
- “Med-PaLM: Large Language Models Encode Clinical Knowledge”, et al 2022
- “Character-Aware Models Improve Visual Text Rendering”, et al 2022
- “Efficiently Scaling Transformer Inference”, et al 2022
- “U-PaLM: Transcending Scaling Laws With 0.1% Extra Compute”, et al 2022
- “FLAN: Scaling Instruction-Finetuned Language Models”, et al 2022
- “Large Language Models Can Self-Improve”, et al 2022
- “RARR: Attributed Text Generation via Post-Hoc Research and Revision”, et al 2022
- “Challenging BIG-Bench Tasks (BBH) and Whether Chain-Of-Thought Can Solve Them”, et al 2022
- “Language Models Are Multilingual Chain-Of-Thought Reasoners”, et al 2022
- “ReAct: Synergizing Reasoning and Acting in Language Models”, et al 2022
- “AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model”, et al 2022
- “Inner Monologue: Embodied Reasoning through Planning With Language Models”, et al 2022
- “Solving Quantitative Reasoning Problems With Language Models”, et al 2022
- “Least-To-Most Prompting Enables Complex Reasoning in Large Language Models”, et al 2022
- “Unifying Language Learning Paradigms”, et al 2022
- “PaLM: Scaling Language Modeling With Pathways”, et al 2022
- “Do As I Can, Not As I Say (SayCan): Grounding Language in Robotic Affordances”, et al 2022
- “Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance”, 2022
- “PaLM § Figure 19: [Explaining a Joke / Inference Chaining] Each ‘Input’ Was Independently Prepended With the Same 2-Shot Exemplar Shown at the Top, and ‘Model Output’ Shows the Greedy Decoding Output of PaLM 540B. The Two Exemplar Jokes Are Known Jokes (explanations Written by Authors), While All Evaluated Jokes Were Written by the Authors. Of Course, These Jokes Do Share Abstract Premises With Existing Jokes (wordplay, Reliability, Humorous Analogies, Reversal-Of-Expectations). The Inference Chaining Examples Were Also Written by the Authors.”
- “AI Will Increase the Quantity—And Quality—Of Phishing Scams”
- Sort By Magic
- Miscellaneous
- Bibliography