“‘GPT-3’ Tag”, 2021-09-08
Bibliography for tag ai/nn/transformer/gpt/3, most recent first: 4 related tags, 27 annotations, & 18 links (parent).
- See Also
- Gwern
- Links
- “Benchmarking the Performance of Large Language Models on the Cerebras Wafer Scale Engine”, et al 2024
- “RAG vs Fine-Tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture”, et al 2024
- “Inside the Chaos at OpenAI: Sam Altman’s Weekend of Shock and Drama Began a Year Ago, With the Release of ChatGPT”, 2023
- “Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation”, et al 2023
- “Does GPT-4 Pass the Turing Test?”, 2023
- “PAIR: Jailbreaking Black Box Large Language Models in 20 Queries”, et al 2023
- “Fine-Tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!”, et al 2023
- “Non-Determinism in GPT-4 Is Caused by Sparse MoE”, 152334H 2023
- “Large Language Models As Superpositions of Cultural Perspectives”, et al 2023
- “AI Is a Lot of Work: As the Technology Becomes Ubiquitous, a Vast Tasker Underclass Is Emerging—And Not Going Anywhere”, 2023
- “I’m Afraid I Can’t Do That: Predicting Prompt Refusal in Black-Box Generative Language Models”, 2023
- “Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4”, et al 2023
- “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models”, et al 2023
- “Why Didn’t DeepMind Build GPT-3?”, 2023
- “OpenAI’s Sam Altman Talks ChatGPT And How Artificial General Intelligence Can ‘Break Capitalism’”, 2023
- “GPT-3 As Knowledge Worker: A Zero-Shot Evaluation of AI CPA Capabilities”, et al 2023
- “Language Models Are Better Than Humans at Next-Token Prediction”, et al 2022
- “HALIE: Evaluating Human-Language Model Interaction”, et al 2022
- “TruthfulQA: Measuring How Models Mimic Human Falsehoods”, et al 2021
- “‘How GPT-3 Is Shaping Our AI Future’ With Sam Altman/Azeem Azhar (The Exponential View), Wednesday 7 October 2020”
- “Scaling Laws for Neural Language Models: Figure 15: Far beyond the Model Sizes We Study Empirically, We Find a Contradiction between Our Equations § Pg17”, 2020 (page 17 org openai)
- “Towards Synthesizing Complex Programs from Input-Output Examples”, et al 2017
- “Genetics of Caffeine Consumption and Responses to Caffeine”, et al 2010
- “Why GPT-3 Matters”, 2024
- “Greg Brockman: OpenAI and AGI”, 2024
- M74108556
- sharifshameem
- Miscellaneous
- Bibliography