- See Also
- Gwern
- “CQK Is The First Unused TLA”, Gwern 2023
- Links
- “Connecting the Dots: LLMs Can Infer and Verbalize Latent Structure from Disparate Training Data”, Treutlein et al 2024
- “OlympicArena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI”, Huang et al 2024
- “What Are the Odds? Language Models Are Capable of Probabilistic Reasoning”, Paruchuri et al 2024
- “Probing the Decision Boundaries of In-Context Learning in Large Language Models”, Zhao et al 2024
- “Development Cost of ARC GPT-4o Prototype”, Greenblatt 2024
- “GUI-WORLD: A Dataset for GUI-Oriented Multimodal LLM-Based Agents”, Chen et al 2024
- “Are We Done With MMLU?”, Gema et al 2024
- “Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-Modal LLMs in Video Analysis”, Fu et al 2024
- “Intelligent Go-Explore (IGE): Standing on the Shoulders of Giant Foundation Models”, Lu et al 2024
- “DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches With TikZ”, Belouadi et al 2024
- “Grokked Transformers Are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization”, Wang et al 2024
- “ChatGPT Will Be Able to Talk to You like Scarlett Johansson in Her / Upgrades to ChatGPT’s Voice Mode Bring It Closer to the Vision of a Responsive AI Assistant—And Sam Altman Seems to Know It”, Robison 2024
- “GSM1k: A Careful Examination of Large Language Model Performance on Grade School Arithmetic”, Zhang et al 2024
- “Aligning LLM Agents by Learning Latent Preference from User Edits”, Gao et al 2024
- “Automated Social Science: Language Models As Scientist and Subjects”, Manning et al 2024
- “Enhancing Confidence Expression in Large Language Models Through Learning from Past Experience”, Han et al 2024
- “Do LLMs Play Dice? Exploring Probability Distribution Sampling in Large Language Models for Behavioral Simulation”, Gu et al 2024
- “Is ChatGPT Transforming Academics’ Writing Style?”, Geng & Trotta 2024
- “From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples”, Vacareanu et al 2024
- “Election Workers Are Drowning in Records Requests. AI Chatbots Could Make It Worse: Experts Worry That Election Deniers Could Weaponize Chatbots to Overwhelm and Slow down Local Officials”, Elliott 2024
- “Visualization-Of-Thought Elicits Spatial Reasoning in Large Language Models”, Wu et al 2024
- “FABLES: Evaluating Faithfulness and Content Selection in Book-Length Summarization”, Kim et al 2024
- “Re-Evaluating GPT-4’s Bar Exam Performance”, Martínez 2024
- “A Peter Thiel-Backed AI Startup, Cognition Labs, Seeks $2 Billion Valuation: Funding round Could Increase Startup’s Valuation Nearly Sixfold in a Matter of Weeks, Reflecting AI Frenzy”, Jin 2024
- “Vulnerability Detection With Code Language Models: How Far Are We?”, Ding et al 2024
- “Gold-Medalist Coders Build an AI That Can Do Their Job for Them: A New Startup Called Cognition AI Can Turn a User’s Prompt into a Website or Video Game”, Vance 2024
- “Playing NetHack With LLMs: Potential & Limitations As Zero-Shot Agents (NetPlay)”, Jeurissen et al 2024
- “Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap”, Srivastava et al 2024
- “Tokenization Counts: the Impact of Tokenization on Arithmetic in Frontier LLMs”, Singh & Strouse 2024
- “ArtPrompt: ASCII Art-Based Jailbreak Attacks against Aligned LLMs”, Jiang et al 2024
- “Tasks That Language Models Don’t Learn”, Lee & Lim 2024
- “Using Counterfactual Tasks to Evaluate the Generality of Analogical Reasoning in Large Language Models”, Lewis & Mitchell 2024
- “The Non-Effect of Sampling Temperature on Problem Solving in GPT-3.5/GPT-4”, Renze & Guven 2024
- “Better Call GPT, Comparing Large Language Models Against Lawyers”, Martin et al 2024
- “I Am a Strange Dataset: Metalinguistic Tests for Language Models”, Thrush et al 2024
- “GPT-4-V(ision) Is a Human-Aligned Evaluator for Text-To-3D Generation”, Wu et al 2024
- “Leveraging Large Language Models to Boost Dafny’s Developers Productivity”, Silva et al 2024
- “GPT-4 Passes the Bar Exam”, Katz et al 2024
- “Large Language Models Are Able to Downplay Their Cognitive Abilities to Fit the Persona They Simulate”, Milička et al 2024
- “WaveCoder: Widespread And Versatile Enhanced Instruction Tuning With Refined Data Generation”, Yu et al 2023
- “PRER: Modeling Complex Mathematical Reasoning via Large Language Model Based MathAgent”, Liao et al 2023
- “Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine”, Nori et al 2023
- “GPQA: A Graduate-Level Google-Proof Q&A Benchmark”, Rein et al 2023
- 42irrationalist @ "2023-11-19"
- “Llamas Know What GPTs Don’t Show: Surrogate Models for Confidence Estimation”, Shrivastava et al 2023
- “Comparing Humans, GPT-4, and GPT-4-V On Abstraction and Reasoning Tasks”, Mitchell et al 2023
- “The Impact of Large Language Models on Scientific Discovery: a Preliminary Study Using GPT-4”, AI4Science & Quantum 2023
- “Accuracy of a Vision-Language Model on Challenging Medical Cases”, Buckley et al 2023
- “Large Language Models Can Strategically Deceive Their Users When Put Under Pressure”, Scheurer et al 2023
- “Augmenting Large Language Models With Chemistry Tools”, Bran et al 2023
- “FANToM: A Benchmark for Stress-Testing Machine Theory of Mind in Interactions”, Kim et al 2023
- “Branch-Solve-Merge Improves Large Language Model Evaluation and Generation”, Saha et al 2023
- “Eureka: Human-Level Reward Design via Coding Large Language Models”, Ma et al 2023
- “Set-Of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4-V”, Yang et al 2023
- “Large Language Model Prediction Capabilities: Evidence from a Real-World Forecasting Tournament”, Schoenegger & Park 2023
- “Data Contamination Through the Lens of Time”, Roberts et al 2023
- “Can GPT Models Be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”, Callanan et al 2023
- “Beyond Memorization: Violating Privacy Via Inference With Large Language Models”, Staab et al 2023
- “SWE-Bench: Can Language Models Resolve Real-World GitHub Issues?”, Jimenez et al 2023
- “Can a Computer Outfake a Human [personality]?”, Phillips & Robie 2023
- “Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models”, Zhou et al 2023
- “FreshLLMs: Refreshing Large Language Models With Search Engine Augmentation”, Vu et al 2023
- “Low-Resource Languages Jailbreak GPT-4”, Yong et al 2023
- “An Evolutionary Model of Personality Traits Related to Cooperative Behavior Using a Large Language Model”, Suzuki & Arita 2023
- “UltraFeedback: Boosting Language Models With High-Quality Feedback”, Cui et al 2023
- “MTOB: A Benchmark for Learning to Translate a New Language from One Grammar Book”, Tanzer et al 2023
- “Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve”, McCoy et al 2023
- “The Cambridge Law Corpus: A Corpus for Legal AI Research”, Östling et al 2023
- “The Reversal Curse: LLMs Trained on "A Is B" Fail to Learn "B Is A"”, Berglund et al 2023
- “From Sparse to Dense: GPT-4 Summarization With Chain of Density (CoD) Prompting”, Adams et al 2023
- “Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models”, Heiding et al 2023
- “ExpeL: LLM Agents Are Experiential Learners”, Zhao et al 2023
- “LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models”, Guha et al 2023
- “Solving Challenging Math Word Problems Using GPT-4 Code Interpreter With Code-Based Self-Verification”, Zhou et al 2023
- “OpenAI Cribbed Our Tax Example, But Can GPT-4 Really Do Tax?”, Blair-Stanek et al 2023
- “Testing GPT-4 With Wolfram Alpha and Code Interpreter Plug-Ins on Math and Science Problems”, Davis & Aaronson 2023
- “The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain”, Moskvichev et al 2023
- “I’m a Screenwriter. These AI Jokes Give Me Nightmares”, Rich 2023
- “A LLM Assisted Exploitation of AI-Guardian”, Carlini 2023
- “OpenAI Worries About What Its Chatbot Will Say About People’s Faces: An Advanced Version of ChatGPT Can Analyze Images and Is Already Helping the Blind. But Its Ability to Put a Name to a Face Is One Reason the Public Doesn’t Have Access to It”, Hill 2023
- “GPT-4, an Artificial Intelligence Large Language Model, Exhibits High Levels of Accuracy on Dermatology Specialty Certificate Exam Questions”, Shetty et al 2023
- “Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events”, Gu et al 2023
- “Explaining Competitive-Level Programming Solutions Using LLMs”, Li et al 2023
- “Hoodwinked: Deception and Cooperation in a Text-Based Game for Language Models”, O’Gara 2023
- “LeanDojo: Theorem Proving With Retrieval-Augmented Language Models”, Yang et al 2023
- “ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews”, D’Arcy et al 2023
- “Evaluating Superhuman Models With Consistency Checks”, Fluri et al 2023
- “Evaluating the Robustness of Text-To-Image Diffusion Models against Real-World Attacks”, Gao et al 2023
- “ChessGPT: Bridging Policy Learning and Language Modeling”, Feng et al 2023
- “Large Language Models As Tax Attorneys: A Case Study in Legal Capabilities Emergence”, Nay et al 2023
- “Can Large Language Models Democratize Access to Dual-Use Biotechnology?”, Soice et al 2023
- “Let’s Verify Step by Step”, Lightman et al 2023
- “GPT4GEO: How a Language Model Sees the World’s Geography”, Roberts et al 2023
- “LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-Based Representations”, Xu et al 2023
- “Learning to Generate Novel Scientific Directions With Contextualized Literature-Based Discovery”, Wang et al 2023
- “WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia”, Semnani et al 2023
- “How Language Model Hallucinations Can Snowball”, Zhang et al 2023
- “C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models”, Huang et al 2023
- “Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns”, Hazell 2023
- “Boosting Theory-Of-Mind Performance in Large Language Models via Prompting”, Moghaddam & Honey 2023
- “Today Was the First Day That I Could Definitively Say That GPT-4 Has Saved Me a Substantial Amount of Tedious Work”, Tao 2023
- “Humans in Humans Out: On GPT Converging Toward Common Sense in Both Success and Failure”, Koralus & Wang-Maścianica 2023
- “Advances in Apparent Conceptual Physics Reasoning in GPT-4”, West 2023
- “Performance of ChatGPT on Free-Response, Clinical Reasoning Exams”, Strong et al 2023
- “Reflexion: Language Agents With Verbal Reinforcement Learning”, Shinn et al 2023
- “How Well Do Large Language Models Perform in Arithmetic Tasks?”, Yuan et al 2023
- “GPT-4 Technical Report § Limitations: Calibration”, OpenAI 2023 (page 12)
- “Salesforce Announces Einstein GPT, the World’s First Generative AI for CRM”, Salesforce 2023
- “Large Language Models Are State-Of-The-Art Evaluators of Translation Quality”, Kocmi & Federmann 2023
- “Not What You’ve Signed up For: Compromising Real-World LLM-Integrated Applications With Indirect Prompt Injection”, Greshake et al 2023
- “Harvey, Which Uses AI to Answer Legal Questions, Lands Cash from OpenAI”, Wiggers 2022
- “Trading Off Compute in Training and Inference”
- “Connecting the Dots: LLMs Can Infer & Verbalize Latent Structure from Training Data”
- “Language Models Model Us”
- “AI Will Increase the Quantity—And Quality—Of Phishing Scams”
- Sort By Magic
- Miscellaneous
- Link Bibliography
See Also
Gwern
“CQK Is The First Unused TLA”, Gwern 2023
Links
“Connecting the Dots: LLMs Can Infer and Verbalize Latent Structure from Disparate Training Data”, Treutlein et al 2024
Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data
“OlympicArena: Benchmarking Multi-Discipline Cognitive Reasoning for Superintelligent AI”, Huang et al 2024
OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI
“What Are the Odds? Language Models Are Capable of Probabilistic Reasoning”, Paruchuri et al 2024
What Are the Odds? Language Models Are Capable of Probabilistic Reasoning
“Probing the Decision Boundaries of In-Context Learning in Large Language Models”, Zhao et al 2024
Probing the Decision Boundaries of In-context Learning in Large Language Models
“Development Cost of ARC GPT-4o Prototype”, Greenblatt 2024
“GUI-WORLD: A Dataset for GUI-Oriented Multimodal LLM-Based Agents”, Chen et al 2024
GUI-WORLD: A Dataset for GUI-oriented Multimodal LLM-based Agents
“Are We Done With MMLU?”, Gema et al 2024
“Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-Modal LLMs in Video Analysis”, Fu et al 2024
Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
“Intelligent Go-Explore (IGE): Standing on the Shoulders of Giant Foundation Models”, Lu et al 2024
Intelligent Go-Explore (IGE): Standing on the Shoulders of Giant Foundation Models
“DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches With TikZ”, Belouadi et al 2024
DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ
“Grokked Transformers Are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization”, Wang et al 2024
Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
“ChatGPT Will Be Able to Talk to You like Scarlett Johansson in Her / Upgrades to ChatGPT’s Voice Mode Bring It Closer to the Vision of a Responsive AI Assistant—And Sam Altman Seems to Know It”, Robison 2024
“GSM1k: A Careful Examination of Large Language Model Performance on Grade School Arithmetic”, Zhang et al 2024
GSM1k: A Careful Examination of Large Language Model Performance on Grade School Arithmetic
“Aligning LLM Agents by Learning Latent Preference from User Edits”, Gao et al 2024
Aligning LLM Agents by Learning Latent Preference from User Edits
“Automated Social Science: Language Models As Scientist and Subjects”, Manning et al 2024
Automated Social Science: Language Models as Scientist and Subjects
“Enhancing Confidence Expression in Large Language Models Through Learning from Past Experience”, Han et al 2024
Enhancing Confidence Expression in Large Language Models Through Learning from Past Experience
“Do LLMs Play Dice? Exploring Probability Distribution Sampling in Large Language Models for Behavioral Simulation”, Gu et al 2024
“Is ChatGPT Transforming Academics’ Writing Style?”, Geng & Trotta 2024
“From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples”, Vacareanu et al 2024
“Election Workers Are Drowning in Records Requests. AI Chatbots Could Make It Worse: Experts Worry That Election Deniers Could Weaponize Chatbots to Overwhelm and Slow down Local Officials”, Elliott 2024
“Visualization-Of-Thought Elicits Spatial Reasoning in Large Language Models”, Wu et al 2024
Visualization-of-Thought Elicits Spatial Reasoning in Large Language Models
“FABLES: Evaluating Faithfulness and Content Selection in Book-Length Summarization”, Kim et al 2024
FABLES: Evaluating faithfulness and content selection in book-length summarization
“Re-Evaluating GPT-4’s Bar Exam Performance”, Martínez 2024
“A Peter Thiel-Backed AI Startup, Cognition Labs, Seeks $2 Billion Valuation: Funding round Could Increase Startup’s Valuation Nearly Sixfold in a Matter of Weeks, Reflecting AI Frenzy”, Jin 2024
“Vulnerability Detection With Code Language Models: How Far Are We?”, Ding et al 2024
Vulnerability Detection with Code Language Models: How Far Are We?
“Gold-Medalist Coders Build an AI That Can Do Their Job for Them: A New Startup Called Cognition AI Can Turn a User’s Prompt into a Website or Video Game”, Vance 2024
“Playing NetHack With LLMs: Potential & Limitations As Zero-Shot Agents (NetPlay)”, Jeurissen et al 2024
Playing NetHack with LLMs: Potential & Limitations as Zero-Shot Agents (NetPlay)
“Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap”, Srivastava et al 2024
Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap
“Tokenization Counts: the Impact of Tokenization on Arithmetic in Frontier LLMs”, Singh & Strouse 2024
Tokenization counts: the impact of tokenization on arithmetic in frontier LLMs
“ArtPrompt: ASCII Art-Based Jailbreak Attacks against Aligned LLMs”, Jiang et al 2024
ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs
“Tasks That Language Models Don’t Learn”, Lee & Lim 2024
“Using Counterfactual Tasks to Evaluate the Generality of Analogical Reasoning in Large Language Models”, Lewis & Mitchell 2024
“The Non-Effect of Sampling Temperature on Problem Solving in GPT-3.5/GPT-4”, Renze & Guven 2024
The Non-Effect of Sampling Temperature on Problem Solving in GPT-3.5/GPT-4
“Better Call GPT, Comparing Large Language Models Against Lawyers”, Martin et al 2024
Better Call GPT, Comparing Large Language Models Against Lawyers
“I Am a Strange Dataset: Metalinguistic Tests for Language Models”, Thrush et al 2024
I am a Strange Dataset: Metalinguistic Tests for Language Models
“GPT-4-V(ision) Is a Human-Aligned Evaluator for Text-To-3D Generation”, Wu et al 2024
GPT-4-V(ision) is a Human-Aligned Evaluator for Text-to-3D Generation
“Leveraging Large Language Models to Boost Dafny’s Developers Productivity”, Silva et al 2024
Leveraging Large Language Models to Boost Dafny’s Developers Productivity
“GPT-4 Passes the Bar Exam”, Katz et al 2024
“Large Language Models Are Able to Downplay Their Cognitive Abilities to Fit the Persona They Simulate”, Milička et al 2024
“WaveCoder: Widespread And Versatile Enhanced Instruction Tuning With Refined Data Generation”, Yu et al 2023
WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation
“PRER: Modeling Complex Mathematical Reasoning via Large Language Model Based MathAgent”, Liao et al 2023
PRER: Modeling Complex Mathematical Reasoning via Large Language Model based MathAgent
“Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine”, Nori et al 2023
Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine
“GPQA: A Graduate-Level Google-Proof Q&A Benchmark”, Rein et al 2023
42irrationalist @ "2023-11-19"
“Llamas Know What GPTs Don’t Show: Surrogate Models for Confidence Estimation”, Shrivastava et al 2023
Llamas Know What GPTs Don’t Show: Surrogate Models for Confidence Estimation
“Comparing Humans, GPT-4, and GPT-4-V On Abstraction and Reasoning Tasks”, Mitchell et al 2023
Comparing Humans, GPT-4, and GPT-4-V On Abstraction and Reasoning Tasks
“The Impact of Large Language Models on Scientific Discovery: a Preliminary Study Using GPT-4”, AI4Science & Quantum 2023
The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4
“Accuracy of a Vision-Language Model on Challenging Medical Cases”, Buckley et al 2023
Accuracy of a Vision-Language Model on Challenging Medical Cases
“Large Language Models Can Strategically Deceive Their Users When Put Under Pressure”, Scheurer et al 2023
Large Language Models can Strategically Deceive their Users when Put Under Pressure
“Augmenting Large Language Models With Chemistry Tools”, Bran et al 2023
“FANToM: A Benchmark for Stress-Testing Machine Theory of Mind in Interactions”, Kim et al 2023
FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions
“Branch-Solve-Merge Improves Large Language Model Evaluation and Generation”, Saha et al 2023
Branch-Solve-Merge Improves Large Language Model Evaluation and Generation
“Eureka: Human-Level Reward Design via Coding Large Language Models”, Ma et al 2023
Eureka: Human-Level Reward Design via Coding Large Language Models
“Set-Of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4-V”, Yang et al 2023
Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4-V
“Large Language Model Prediction Capabilities: Evidence from a Real-World Forecasting Tournament”, Schoenegger & Park 2023
Large Language Model Prediction Capabilities: Evidence from a Real-World Forecasting Tournament
“Data Contamination Through the Lens of Time”, Roberts et al 2023
“Can GPT Models Be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”, Callanan et al 2023
Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams
“Beyond Memorization: Violating Privacy Via Inference With Large Language Models”, Staab et al 2023
Beyond Memorization: Violating Privacy Via Inference with Large Language Models
“SWE-Bench: Can Language Models Resolve Real-World GitHub Issues?”, Jimenez et al 2023
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
“Can a Computer Outfake a Human [personality]?”, Phillips & Robie 2023
“Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models”, Zhou et al 2023
Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models
“FreshLLMs: Refreshing Large Language Models With Search Engine Augmentation”, Vu et al 2023
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
“Low-Resource Languages Jailbreak GPT-4”, Yong et al 2023
“An Evolutionary Model of Personality Traits Related to Cooperative Behavior Using a Large Language Model”, Suzuki & Arita 2023
“UltraFeedback: Boosting Language Models With High-Quality Feedback”, Cui et al 2023
UltraFeedback: Boosting Language Models with High-quality Feedback
“MTOB: A Benchmark for Learning to Translate a New Language from One Grammar Book”, Tanzer et al 2023
MTOB: A Benchmark for Learning to Translate a New Language from One Grammar Book
“Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve”, McCoy et al 2023
“The Cambridge Law Corpus: A Corpus for Legal AI Research”, Östling et al 2023
“The Reversal Curse: LLMs Trained on "A Is B" Fail to Learn "B Is A"”, Berglund et al 2023
The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
“From Sparse to Dense: GPT-4 Summarization With Chain of Density (CoD) Prompting”, Adams et al 2023
From Sparse to Dense: GPT-4 Summarization with Chain of Density (CoD) Prompting
“Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models”, Heiding et al 2023
Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models
“ExpeL: LLM Agents Are Experiential Learners”, Zhao et al 2023
“LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models”, Guha et al 2023
LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
“Solving Challenging Math Word Problems Using GPT-4 Code Interpreter With Code-Based Self-Verification”, Zhou et al 2023
“OpenAI Cribbed Our Tax Example, But Can GPT-4 Really Do Tax?”, Blair-Stanek et al 2023
OpenAI Cribbed Our Tax Example, But Can GPT-4 Really Do Tax?
“Testing GPT-4 With Wolfram Alpha and Code Interpreter Plug-Ins on Math and Science Problems”, Davis & Aaronson 2023
Testing GPT-4 with Wolfram Alpha and Code Interpreter plug-ins on math and science problems
“The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain”, Moskvichev et al 2023
The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain
“I’m a Screenwriter. These AI Jokes Give Me Nightmares”, Rich 2023
“A LLM Assisted Exploitation of AI-Guardian”, Carlini 2023
“OpenAI Worries About What Its Chatbot Will Say About People’s Faces: An Advanced Version of ChatGPT Can Analyze Images and Is Already Helping the Blind. But Its Ability to Put a Name to a Face Is One Reason the Public Doesn’t Have Access to It”, Hill 2023
“GPT-4, an Artificial Intelligence Large Language Model, Exhibits High Levels of Accuracy on Dermatology Specialty Certificate Exam Questions”, Shetty et al 2023
“Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events”, Gu et al 2023
“Explaining Competitive-Level Programming Solutions Using LLMs”, Li et al 2023
Explaining Competitive-Level Programming Solutions using LLMs
“Hoodwinked: Deception and Cooperation in a Text-Based Game for Language Models”, O’Gara 2023
Hoodwinked: Deception and Cooperation in a Text-Based Game for Language Models
“LeanDojo: Theorem Proving With Retrieval-Augmented Language Models”, Yang et al 2023
LeanDojo: Theorem Proving with Retrieval-Augmented Language Models
“ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews”, D’Arcy et al 2023
ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews
“Evaluating Superhuman Models With Consistency Checks”, Fluri et al 2023
“Evaluating the Robustness of Text-To-Image Diffusion Models against Real-World Attacks”, Gao et al 2023
Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks
“ChessGPT: Bridging Policy Learning and Language Modeling”, Feng et al 2023
“Large Language Models As Tax Attorneys: A Case Study in Legal Capabilities Emergence”, Nay et al 2023
Large Language Models as Tax Attorneys: A Case Study in Legal Capabilities Emergence
“Can Large Language Models Democratize Access to Dual-Use Biotechnology?”, Soice et al 2023
Can large language models democratize access to dual-use biotechnology?
“Let’s Verify Step by Step”, Lightman et al 2023
“GPT4GEO: How a Language Model Sees the World’s Geography”, Roberts et al 2023
“LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-Based Representations”, Xu et al 2023
“Learning to Generate Novel Scientific Directions With Contextualized Literature-Based Discovery”, Wang et al 2023
Learning to Generate Novel Scientific Directions with Contextualized Literature-based Discovery
“WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia”, Semnani et al 2023
“How Language Model Hallucinations Can Snowball”, Zhang et al 2023
“C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models”, Huang et al 2023
C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models
“Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns”, Hazell 2023
Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns
“Boosting Theory-Of-Mind Performance in Large Language Models via Prompting”, Moghaddam & Honey 2023
Boosting Theory-of-Mind Performance in Large Language Models via Prompting
“Today Was the First Day That I Could Definitively Say That GPT-4 Has Saved Me a Substantial Amount of Tedious Work”, Tao 2023
“Humans in Humans Out: On GPT Converging Toward Common Sense in Both Success and Failure”, Koralus & Wang-Maścianica 2023
Humans in Humans Out: On GPT Converging Toward Common Sense in both Success and Failure
“Advances in Apparent Conceptual Physics Reasoning in GPT-4”, West 2023
“Performance of ChatGPT on Free-Response, Clinical Reasoning Exams”, Strong et al 2023
Performance of ChatGPT on free-response, clinical reasoning exams
“Reflexion: Language Agents With Verbal Reinforcement Learning”, Shinn et al 2023
Reflexion: Language Agents with Verbal Reinforcement Learning
“How Well Do Large Language Models Perform in Arithmetic Tasks?”, Yuan et al 2023
How well do Large Language Models perform in Arithmetic tasks?
“GPT-4 Technical Report § Limitations: Calibration”, OpenAI 2023 (page 12)
“Salesforce Announces Einstein GPT, the World’s First Generative AI for CRM”, Salesforce 2023
Salesforce Announces Einstein GPT, the World’s First Generative AI for CRM
“Large Language Models Are State-Of-The-Art Evaluators of Translation Quality”, Kocmi & Federmann 2023
Large Language Models Are State-of-the-Art Evaluators of Translation Quality
“Not What You’ve Signed up For: Compromising Real-World LLM-Integrated Applications With Indirect Prompt Injection”, Greshake et al 2023
“Harvey, Which Uses AI to Answer Legal Questions, Lands Cash from OpenAI”, Wiggers 2022
Harvey, which uses AI to answer legal questions, lands cash from OpenAI
“Trading Off Compute in Training and Inference”
“Connecting the Dots: LLMs Can Infer & Verbalize Latent Structure from Training Data”
Connecting the Dots: LLMs can Infer & Verbalize Latent Structure from Training Data
“Language Models Model Us”
“AI Will Increase the Quantity—And Quality—Of Phishing Scams”
Sort By Magic
Annotations sorted by machine learning into inferred 'tags'. This provides an alternative way to browse: instead of by date order, one can browse in topic order. The 'sorted' list has been automatically clustered into multiple sections & auto-labeled for easier browsing.
Beginning with the newest annotation, the embedding of each annotation is used to find its nearest-neighbor annotations, chaining them into a progression of topics. For more details, see the link.
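The chaining described above can be sketched as a greedy nearest-neighbor walk over embeddings. This is a minimal illustration, not the site's actual implementation: the real embedding model, similarity metric, and clustering step are assumptions, and the toy 2-D vectors stand in for annotation embeddings.

```python
import math

def nearest_neighbor_order(embeddings, start=0):
    """Greedy chain: from the start annotation, repeatedly hop to the most
    cosine-similar unvisited embedding, yielding a topic progression."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    order = [start]
    unvisited = set(range(len(embeddings))) - {start}
    while unvisited:
        cur = embeddings[order[-1]]
        nxt = max(unvisited, key=lambda i: cosine(cur, embeddings[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Toy 2-D "embeddings": items 0 and 1 point in a shared topic direction,
# item 2 differs, so the chain visits 0 -> 1 -> 2, keeping related items adjacent.
print(nearest_neighbor_order([[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]]))  # [0, 1, 2]
```

A production version would replace the toy vectors with real annotation embeddings and then cut the resulting chain into labeled sections.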
chatbot-ethics
gpt4-performance
jailbreak
Miscellaneous
- /doc/ai/nn/transformer/gpt/codex/2024-03-07-inflection-inflection25benchmarks.svg
- https://betonit.substack.com/p/gpt-4-takes-a-new-midterm-and-gets
- https://blog.matteskridge.com/business/gpt4-and-silicon-valley-bank/2023/03/19/
- https://blog.mentat.ai/benchmarking-gpt-4-turbo-a-cautionary-tale
- https://blog.nawaz.org/posts/2024/Jan/llm-assisted-moderation/
- https://chat.openai.com/share/04add58f-2052-4b60-ae2a-ab708c29088f
- https://chatgpt.com/share/312e82f0-cc5e-47f3-b368-b2c0c0f4ad3f
- https://clarifycapital.com/the-future-of-investment-pitching
- https://cookbook.openai.com/examples/tag_caption_images_with_gpt4v
- https://finedataproducts.com/posts/2024-03-10-tax-scenarios-with-ai/
- https://generallyintelligent.substack.com/p/fine-tuning-mistral-7b-on-magic-the
- https://gist.github.com/Jessime/63f93215faed6f7109c6d62b7fef7fbc
- https://gist.github.com/harryaskham/68a611bef777525991790bca2f2d324d
- https://github.com/E-xyza/Exonerate/blob/master/bench/reports/gpt-bench.md
- https://github.com/jujumilk3/leaked-system-prompts/blob/main/microsoft-bing-chat_20230209.md
- https://github.com/jujumilk3/leaked-system-prompts/blob/main/openai-assistants-api_20231106.md
- https://github.com/jujumilk3/leaked-system-prompts/blob/main/openai-chatgpt-ios_20230614.md
- https://github.com/jujumilk3/leaked-system-prompts/blob/main/openai-chatgpt4-android_20240207.md
- https://github.com/jujumilk3/leaked-system-prompts/blob/main/openai-chatgpt_20221201.md
- https://github.com/kagisearch/llm-chess-puzzles?tab=readme-ov-file#results
- https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2812620
- https://kenkantzer.com/lessons-after-a-half-billion-gpt-tokens/
- https://koenvangilst.nl/blog/keeping-code-complexity-in-check
- https://lemire.me/blog/2023/03/22/can-gpt-pass-my-programming-courses/
- https://matthewbarnett.substack.com/p/gpt-4-takes-bryan-caplans-midterm
- https://mazzzystar.github.io/2023/05/10/LLM-for-individual/
- https://micahflee.com/2023/04/capturing-the-flag-with-gpt-4/
- https://openai.com/blog/function-calling-and-other-api-updates#function-calling
- https://paperswithcode.com/sota/math-word-problem-solving-on-math
- https://pslusarz.github.io/articles/2023/12/22/compare-ocr-tesseract-gpt4-nara-rolls.html
- https://statmodeling.stat.columbia.edu/2023/04/18/chatgpt4-writes-stan-code-so-i-dont-have-to/
- https://statmodeling.stat.columbia.edu/2023/08/20/bob-carpenter-thinks-gpt-4-is-awesome/
- https://terrytao.wordpress.com/about/ai-generated-versions-of-the-ai-anthology-article/
- https://villekuosmanen.medium.com/i-played-chess-against-chatgpt-4-and-lost-c5798a9049ca
- https://www.construction-physics.com/p/could-chatgpt-become-an-architect
- https://www.economist.com/business/2024/02/29/how-businesses-are-actually-using-generative-ai
- https://www.euractiv.com/section/politics/news/albania-to-speed-up-eu-accession-using-chatgpt/
- https://www.geoffreylitt.com/2023/03/25/llm-end-user-programming
- https://www.lesswrong.com/posts/CkhJAxHeyFCg2EcET/are-language-models-good-at-making-predictions
- https://www.lesswrong.com/posts/KSroBnxCHodGmPPJ8/jailbreaking-gpt-4-s-code-interpreter
- https://www.lesswrong.com/posts/doPbyzPgKdjedohud/the-case-for-more-ambitious-language-model-evals
- https://www.oneusefulthing.org/p/it-is-starting-to-get-strange
- https://www.oneusefulthing.org/p/setting-time-on-fire-and-the-temptation
- https://www.reddit.com/r/ChatGPT/comments/12a0ajb/i_gave_gpt4_persistent_memory_and_the_ability_to/
- https://www.reddit.com/r/GPT3/comments/12ez822/neurosemantical_inversitis_prompt_still_works/
- https://www.reddit.com/r/bing/comments/110eagl/the_customer_service_of_the_new_bing_chat_is/
- https://www.reddit.com/r/duolingo/comments/18sx06i/big_layoff_at_duolingo/
- https://www.reddit.com/r/freelanceWriters/comments/12ff5mw/it_happened_to_me_today/
- https://www.reddit.com/r/singularity/comments/1atjz9v/ive_put_a_complex_codebase_into_a_single/
- https://www.supersimple.io/blog/gpt-4-fine-tuning-early-access
- https://www.thebigquestions.com/2023/04/05/gpt-4-fails-economics/
Link Bibliography
- https://arxiv.org/abs/2406.11233: “Probing the Decision Boundaries of In-Context Learning in Large Language Models”
- https://arxiv.org/abs/2405.15143: “Intelligent Go-Explore (IGE): Standing on the Shoulders of Giant Foundation Models”
- https://arxiv.org/abs/2405.15306: “DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches With TikZ”
- https://arxiv.org/abs/2405.15071: “Grokked Transformers Are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization”
- https://www.theverge.com/2024/5/13/24155652/chatgpt-voice-mode-gpt4o-upgrades: “ChatGPT Will Be Able to Talk to You like Scarlett Johansson in Her / Upgrades to ChatGPT’s Voice Mode Bring It Closer to the Vision of a Responsive AI Assistant—And Sam Altman Seems to Know It”
- https://arxiv.org/abs/2405.00332#scale: “GSM1k: A Careful Examination of Large Language Model Performance on Grade School Arithmetic”
- https://arxiv.org/abs/2404.07544: “From Words to Numbers: Your Large Language Model Is Secretly A Capable Regressor When Given In-Context Examples”
- https://www.wired.com/story/ai-chatbots-foia-requests-election-workers/: “Election Workers Are Drowning in Records Requests. AI Chatbots Could Make It Worse: Experts Worry That Election Deniers Could Weaponize Chatbots to Overwhelm and Slow down Local Officials”
- https://link.springer.com/article/10.1007/s10506-024-09396-9: “Re-Evaluating GPT-4’s Bar Exam Performance”
- https://www.wsj.com/tech/ai/a-peter-thiel-backed-ai-startup-cognition-labs-seeks-2-billion-valuation-998fa39d: “A Peter Thiel-Backed AI Startup, Cognition Labs, Seeks $2 Billion Valuation: Funding Round Could Increase Startup’s Valuation Nearly Sixfold in a Matter of Weeks, Reflecting AI Frenzy”
- https://arxiv.org/abs/2403.18624: “Vulnerability Detection With Code Language Models: How Far Are We?”
- https://www.bloomberg.com/news/articles/2024-03-12/cognition-ai-is-a-peter-thiel-backed-coding-assistant: “Gold-Medalist Coders Build an AI That Can Do Their Job for Them: A New Startup Called Cognition AI Can Turn a User’s Prompt into a Website or Video Game”
- https://arxiv.org/abs/2402.19450: “Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap”
- https://arxiv.org/abs/2402.14903: “Tokenization Counts: the Impact of Tokenization on Arithmetic in Frontier LLMs”
- https://arxiv.org/abs/2402.11753: “ArtPrompt: ASCII Art-Based Jailbreak Attacks against Aligned LLMs”
- https://arxiv.org/abs/2402.11349: “Tasks That Language Models Don’t Learn”
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10894685/: “GPT-4 Passes the Bar Exam”
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10936766/: “Large Language Models Are Able to Downplay Their Cognitive Abilities to Fit the Persona They Simulate”
- https://arxiv.org/abs/2312.08926: “PRER: Modeling Complex Mathematical Reasoning via Large Language Model Based MathAgent”
- https://arxiv.org/abs/2311.16452#microsoft: “Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine”
- https://arxiv.org/abs/2311.09247: “Comparing Humans, GPT-4, and GPT-4-V On Abstraction and Reasoning Tasks”
- https://arxiv.org/abs/2310.13014: “Large Language Model Prediction Capabilities: Evidence from a Real-World Forecasting Tournament”
- https://arxiv.org/abs/2310.08678: “Can GPT Models Be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”
- 2023-phillips.pdf: “Can a Computer Outfake a Human [personality]?”
- https://arxiv.org/abs/2310.04406: “Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models”
- https://arxiv.org/abs/2310.03214#google: “FreshLLMs: Refreshing Large Language Models With Search Engine Augmentation”
- https://arxiv.org/abs/2310.01377: “UltraFeedback: Boosting Language Models With High-Quality Feedback”
- https://arxiv.org/abs/2309.12269: “The Cambridge Law Corpus: A Corpus for Legal AI Research”
- https://arxiv.org/abs/2309.12288: “The Reversal Curse: LLMs Trained on "A Is B" Fail to Learn "B Is A"”
- https://arxiv.org/abs/2309.04269: “From Sparse to Dense: GPT-4 Summarization With Chain of Density (CoD) Prompting”
- https://arxiv.org/abs/2308.12287: “Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models”
- https://arxiv.org/abs/2308.07921: “Solving Challenging Math Word Problems Using GPT-4 Code Interpreter With Code-Based Self-Verification”
- https://time.com/6301288/the-ai-jokes-that-give-me-nightmares/: “I’m a Screenwriter. These AI Jokes Give Me Nightmares”
- https://www.nytimes.com/2023/07/18/technology/openai-chatgpt-facial-recognition.html: “OpenAI Worries About What Its Chatbot Will Say About People’s Faces: An Advanced Version of ChatGPT Can Analyze Images and Is Already Helping the Blind. But Its Ability to Put a Name to a Face Is One Reason the Public Doesn’t Have Access to It”
- https://arxiv.org/abs/2307.06439#microsoft: “Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events”
- https://arxiv.org/abs/2308.01404: “Hoodwinked: Deception and Cooperation in a Text-Based Game for Language Models”
- https://arxiv.org/abs/2306.15626: “LeanDojo: Theorem Proving With Retrieval-Augmented Language Models”
- https://arxiv.org/abs/2306.12587: “ARIES: A Corpus of Scientific Paper Edits Made in Response to Peer Reviews”
- https://arxiv.org/abs/2305.20050#openai: “Let’s Verify Step by Step”
- https://arxiv.org/abs/2305.18354: “LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-Based Representations”
- https://arxiv.org/abs/2305.13534: “How Language Model Hallucinations Can Snowball”
- https://arxiv.org/abs/2305.06972: “Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns”
- https://arxiv.org/abs/2304.11490: “Boosting Theory-Of-Mind Performance in Large Language Models via Prompting”
- https://www.medrxiv.org/content/10.1101/2023.03.24.23287731.full: “Performance of ChatGPT on Free-Response, Clinical Reasoning Exams”
- https://arxiv.org/abs/2304.02015#alibaba: “How Well Do Large Language Models Perform in Arithmetic Tasks?”
- https://arxiv.org/pdf/2303.08774#page=12&org=openai: “GPT-4 Technical Report § Limitations: Calibration”
- https://arxiv.org/abs/2302.14520: “Large Language Models Are State-Of-The-Art Evaluators of Translation Quality”
- https://arxiv.org/abs/2302.12173: “Not What You’ve Signed up For: Compromising Real-World LLM-Integrated Applications With Indirect Prompt Injection”
- https://techcrunch.com/2022/11/23/harvey-which-uses-ai-to-answer-legal-questions-lands-cash-from-openai/: “Harvey, Which Uses AI to Answer Legal Questions, Lands Cash from OpenAI”