“‘GPT-3 Nonfiction’ Tag”, 2020-10-06
Bibliography for tag ai/nn/transformer/gpt/3/nonfiction, most recent first: 125 annotations & 48 links (parent).
- See Also
- Links
- “Can LLMs Be Scammed? A Baseline Measurement Study”, et al 2024
- “The Rise of AI-Generated Content in Wikipedia”, et al 2024
- “On Scalable Oversight With Weak LLMs Judging Strong LLMs”, et al 2024
- “APIGen: Automated Pipeline for Generating Verifiable and Diverse Function-Calling Datasets”, et al 2024
- “Connecting the Dots: LLMs Can Infer and Verbalize Latent Structure from Disparate Training Data”, et al 2024
- “Designing a Dashboard for Transparency and Control of Conversational AI”, et al 2024
- “Delving into ChatGPT Usage in Academic Writing through Excess Vocabulary”, et al 2024
- “Do Teachers Spot AI? Evaluating the Detectability of AI-Generated Texts among Student Essays”, et al 2024
- “LLMs Achieve Adult Human Performance on Higher-Order Theory of Mind Tasks”, et al 2024
- “Can Language Models Explain Their Own Classification Behavior?”, et al 2024
- “The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions”, et al 2024
- “FABLES: Evaluating Faithfulness and Content Selection in Book-Length Summarization”, et al 2024
- “Vulnerability Detection With Code Language Models: How Far Are We?”, et al 2024
- “The NSA Warns That US Adversaries Free to Mine Private Data May Have an AI Edge: Gilbert Herrera, Who Leads Research at the National Security Agency, Says Large Language Models Are Incredibly Useful—And a Bit of a Headache—For America’s Intelligence Machine”, 2024
- “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews”, et al 2024
- “Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap”, et al 2024
- “Tokenization Counts: The Impact of Tokenization on Arithmetic in Frontier LLMs”, 2024
- “Who Is AI Replacing? The Impact of Generative AI on Online Freelancing Platforms”, et al 2024
- “ArtPrompt: ASCII Art-Based Jailbreak Attacks against Aligned LLMs”, et al 2024
- “Using Counterfactual Tasks to Evaluate the Generality of Analogical Reasoning in Large Language Models”, 2024
- “The Non-Effect of Sampling Temperature on Problem Solving in GPT-3.5/GPT-4”, 2024
- “I Think, Therefore I Am: Benchmarking Awareness of Large Language Models Using AwareBench”, et al 2024
- “Does Using ChatGPT Result in Human Cognitive Augmentation?”, 2024
- “A Vision Check-Up for Language Models”, et al 2024
- “Large Language Models Play StarCraft II: Benchmarks and A Chain of Summarization Approach”, et al 2023
- “TinyGSM: Achieving >80% on GSM8k With Small Language Models”, et al 2023
- “Universal Self-Consistency for Large Language Model Generation”, et al 2023
- “PEARL: Personalizing Large Language Model Writing Assistants With Generation-Calibrated Retrievers”, et al 2023
- “Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations”, et al 2023
- “InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews”, et al 2023
- “Data Contamination Through the Lens of Time”, et al 2023
- “Can GPT Models Be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on Mock CFA Exams”, et al 2023
- “Large Language Models Can Replicate Cross-Cultural Differences in Personality”, et al 2023
- “Beyond Memorization: Violating Privacy Via Inference With Large Language Models”, et al 2023
- “GeoLLM: Extracting Geospatial Knowledge from Large Language Models”, et al 2023
- “Can a Computer Outfake a Human [Personality]?”, 2023
- “Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models”, et al 2023
- “Using Large Language Models for Qualitative Analysis Can Introduce Serious Bias”, et al 2023
- “MTOB: A Benchmark for Learning to Translate a New Language from One Grammar Book”, et al 2023
- “Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve”, et al 2023
- “The Cambridge Law Corpus: A Corpus for Legal AI Research”, Östling et al 2023
- “Assessing the Nature of Large Language Models: A Caution against Anthropocentrism”, 2023
- “A Boy Saw 17 Doctors over 3 Years for Chronic Pain. ChatGPT Found the Diagnosis”, 2023
- “Taken out of Context: On Measuring Situational Awareness in LLMs”, et al 2023
- “Investigating the Existence of ‘Secret Language’ in Language Models”, et al 2023
- “Are Large Language Models a Threat to Digital Public Goods? Evidence from Activity on Stack Overflow”, Rio-Chanona et al 2023
- “Machine-Assisted Social Psychology Hypothesis Generation”, et al 2023
- “Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events”, et al 2023
- “Unleashing the Emergent Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration”, et al 2023
- “Explaining Competitive-Level Programming Solutions Using LLMs”, et al 2023
- “Lost in the Middle: How Language Models Use Long Contexts”, et al 2023
- “Hoodwinked: Deception and Cooperation in a Text-Based Game for Language Models”, 2023
- “Language Models Are Weak Learners”, et al 2023
- “Understanding Social Reasoning in Language Models With Language Models”, et al 2023
- “Evaluating Superhuman Models With Consistency Checks”, et al 2023
- “Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks”, et al 2023
- “Can Large Language Models Democratize Access to Dual-Use Biotechnology?”, et al 2023
- “Iterative Translation Refinement With Large Language Models”, et al 2023
- “Don’t Want Students to Rely on ChatGPT? Have Them Use It: It’s Easy to Forget How Little Students and Educators Understand Generative AI’s Flaws. Once They Actually Try It Out, They’ll See That It Can’t Replace Them”, 2023
- “The Exciting Potential for ChatGPT in Obstetrics and Gynecology”, et al 2023
- “Do GPTs Produce Less Literal Translations?”, et al 2023
- “The False Promise of Imitating Proprietary LLMs”, et al 2023
- “Learning to Generate Novel Scientific Directions With Contextualized Literature-Based Discovery”, et al 2023
- “How Language Model Hallucinations Can Snowball”, et al 2023
- “LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions”, et al 2023
- “Evaluating Transformer Language Models on Arithmetic Operations Using Number Decomposition”, et al 2023
- “Generative AI at Work”, et al 2023
- “Humans in Humans Out: On GPT Converging Toward Common Sense in Both Success and Failure”, Koralus & Wang-Maścianica 2023
- “Language Models Can Solve Computer Tasks”, et al 2023
- “Performance of ChatGPT on Free-Response, Clinical Reasoning Exams”, et al 2023
- “How Well Do Large Language Models Perform in Arithmetic Tasks?”, et al 2023
- “Larger Language Models Do In-Context Learning Differently”, et al 2023
- “Is ChatGPT a General-Purpose Natural Language Processing Task Solver?”, et al 2023
- “Predicting Consumer Contracts [With GPT-3]”, 2023
- “Use GPT-3 Incorrectly: Reduce Costs 40× and Increase Speed by 5×”, 2023
- “A Judge Just Used ChatGPT to Make a Court Decision: The Case Is the First Time a Court Has Admitted to Using the AI Text Generator’s Answers in a Legal Ruling”, 2023
- “Co-Writing With Opinionated Language Models Affects Users’ Views”, et al 2023
- “The Inside Story of ChatGPT: How OpenAI Founder Sam Altman Built the World’s Hottest Technology With Billions from Microsoft”, 2023
- “How Close Is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection”, et al 2023
- “Can GPT-3 Produce New Ideas? Partially Automating Robin Hanson and Others § If You Never Miss a Plane…”, 2023
- “How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment”, et al 2023
- “GPT-3 Takes the Bar Exam”, Bommarito II & Katz 2022
- “Precise Zero-Shot Dense Retrieval without Relevance Labels”, et al 2022
- “Self-Instruct: Aligning Language Models With Self-Generated Instructions”, et al 2022
- “Emergent Analogical Reasoning in Large Language Models”, et al 2022
- “Harvey, Which Uses AI to Answer Legal Questions, Lands Cash from OpenAI”, 2022
- “LMentry: A Language Model Benchmark of Elementary Language Tasks”, et al 2022
- “Self-Ask: Measuring and Narrowing the Compositionality Gap in Language Models (Bamboogle)”, et al 2022
- “How Persuasive Is AI-Generated Argumentation? An Analysis of the Quality of an Argumentative Text Produced by the GPT-3 AI Text Generator”, 2022
- “Out of One, Many: Using Language Models to Simulate Human Samples”, et al 2022
- “What Does a Platypus Look Like? Generating Customized Prompts for Zero-Shot Image Classification (CuPL)”, et al 2022
- “Using Large Language Models to Simulate Multiple Humans”, et al 2022
- “Limitations of Language Models in Arithmetic and Symbolic Induction”, et al 2022
- “RealTime QA: What’s the Answer Right Now?”, et al 2022
- “GODEL: Large-Scale Pre-Training for Goal-Directed Dialog”, et al 2022
- “Can GPT-3 Write an Academic Paper on Itself, With Minimal Human Input?”, GPT-3 et al 2022 (page 2)
- “NaturalProver: Grounded Mathematical Proof Generation With Language Models”, et al 2022
- “OPT: Open Pre-Trained Transformer Language Models”, et al 2022
- “InstructGPT: Training Language Models to Follow Instructions With Human Feedback”, et al 2022
- “Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?”, et al 2022
- “Impact of Pretraining Term Frequencies on Few-Shot Reasoning”, et al 2022
- “Contracts in the Age of Smart Readers”, 2022
- “Memory-Assisted Prompt Editing to Improve GPT-3 After Deployment”, et al 2022
- “CommonsenseQA 2.0: Exposing the Limits of AI through Gamification”, et al 2022
- “Limits of Using Artificial Intelligence and GPT-3 in Patent Prosecution”, et al 2022
- “What Can a Generative Language Model Answer About a Passage?”, Summers-Stay et al 2021
- “Process for Adapting Language Models to Society (PALMS) With Values-Targeted Datasets”, 2021
- “Scaling Laws for Autoregressive Generative Modeling”, et al 2020
- “GPT-3: Its Nature, Scope, Limits, and Consequences”, 2020
- “MMLU: Measuring Massive Multitask Language Understanding”, et al 2020
- “GPT-3: Language Models Are Few-Shot Learners”, et al 2020
- “Extrapolating to Unnatural Language Processing With GPT-3’s In-Context Learning: The Good, the Bad, and the Mysterious”
- “Janus”
- “Fine-Tuning Is Not Sufficient for Capability Elicitation”
- “Connecting the Dots: LLMs Can Infer & Verbalize Latent Structure from Training Data”
- “Reward Hacking Behavior Can Generalize across Tasks”
- “Who Models the Models That Model Models? An Exploration of GPT-3’s In-Context Model Fitting Ability”
- “GPT-3 Catching Fish in Morse Code”
- “A Robot Wrote This Entire Article. Are You Scared Yet, Human? We Asked GPT-3, OpenAI’s Powerful New Language Generator, to Write an Essay for Us from Scratch. The Assignment? To Convince Us Robots Come in Peace | For More about GPT-3 and How This Essay Was Written and Edited, Please Read Our Editor’s Note Below”
- MelMitchell1
- SRajdev
- bucketofkets
- hamandcheese
- sakun135
- spolu
- Miscellaneous
- Bibliography