Bibliography (168):

  1. GPT-3 Creative Fiction

  2. gpt-3#prompt-programming

  3. GPT-3 Creative Fiction § BPEs

  4. gpt-3#effective-prompt-programming

  5. GPT-3 Creative Fiction § Acrostics

  6. https://arxiv.org/pdf/2005.14165.pdf#page=23

  7. https://x.com/j_erhardt

  8. Winograd-Style Tasks

  9. AI2 Leaderboard

  10. Context Stuffing

  11. Theoretical Limitations of Self-Attention in Neural Sequence Models

  12. Universal Transformers

  13. Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention

  14. I Think ‘GPT-3 Can’t Do Parity Checking’ Isn’t Quite Right. It Can Clearly Pattern Match the Algorithm, Almost Perfectly. It’s Just a Little Mistake-Prone. Here, I Invented a Syntax for Having It Evaluate Parity on Each Pair of Digits. It…almost Gets It Right.

  15. Inspired by an AI Dungeon Example Where Math Is Discussed in Simple Language, I Seem to Be Having Decent Results Here. I Had To… Not Just Say What Parity IS but HOW to Calculate It (‘Count the Number of 1s’) and Then It Sort of Walks Itself through Decently. Tho Kinda Confused

  16. gpt-scrolls/scrolls/rephrase/conceptual-blending.txt at master · maraoz/gpt-scrolls

  17. Experimenting With the Ideas of Linguistic Relativity, Specifically the Weak Version of the Sapir–Whorf Hypothesis. GPT-3 Was Able to Generate New Concepts With Seemingly Relevant Unique Spelling.

  18. Asking GPT-3 How Two Things Are Similar:

  19. Make GPT-3 Complete Coq Files.

  20. GPT-3 ASCII Rabbit / Fish

  21. Asking GPT-3 to Draw ASCII Images Produces the Same Drawings Frequently

  22. https://x.com/repligate/status/1635591172189196289

  23. https://x.com/YitziLitt/status/1632404026657591303

  24. Scarecrow: A Framework for Scrutinizing Machine Text

  25. HTLM: Hyper-Text Pre-Training and Prompting of Language Models

  26. FineWeb: Decanting the Web for the Finest Text Data at Scale

  27. GPT-3 calculating derivatives

  28. EleutherAI

  29. EleutherAI/gpt-neo: An Implementation of Model Parallel GPT-2 and GPT-3-Style Models Using the Mesh-TensorFlow Library.

  30. The Pile: An 800GB Dataset of Diverse Text for Language Modeling

  31. GPT-3 for Fixing OCR Errors:

  32. On Holy Wars and a Plea for Peace

  33. Technology Holy Wars are Coordination Problems

  34. Long Short-Term Memory

  35. clean-pdf.py

  36. https://openai.com/index/whisper/

  37. tla#blind-spot

  38. https://openai.com/index/hello-gpt-4o/

  39. https://x.com/mattshumer_/status/1636512490195501056

  40. All Your Questions Answered

  41. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data

  42. https://aclanthology.org/2020.acl-main.463.pdf#page=13

  43. https://aclanthology.org/2020.acl-main.463.pdf#page=14

  44. GPT-3 Creative Fiction § Dare To Be Stupid?

  45. Two tiny terriers chase very large bear out of California home

  46. Staying Safe Around Bears

  47. Bear Attacks

  48. One Man’s Modus Ponens

  49. GPT-2 and the Nature of Intelligence

  50. https://www.lesswrong.com/posts/L5JSMZQvkBAx9MD5A/to-what-extent-is-gpt-3-capable-of-reasoning?commentId=eq6FTwG2yWuBdPofs

  51. General-Purpose Question-Answering with Macaw

  52. Hydrochloric Acid Poisoning: MEDLINEPlus Medical Encyclopedia

  53. Tests Show That the Popular AI Still Has a Poor Grasp of Reality.

  54. Experiments testing GPT-3’s ability at commonsense reasoning: results

  55. A Robot Wrote This Entire Article. Are You Scared Yet, Human? We Asked GPT-3, OpenAI’s Powerful New Language Generator, to Write an Essay for Us from Scratch. The Assignment? To Convince Us Robots Come in Peace | For More about GPT-3 and How This Essay Was Written and Edited, Please Read Our Editor’s Note Below

  56. https://x.com/GaryMarcus/status/1303318742286311429

  57. Q. Why Doesn’t the API Seem to Have Knowledge about Recent Events? A. The Models’ Training Data Cuts off in October 2019, so They May Not Have Knowledge of Current Events. We Plan to Add More Continuous Training in the Future.

  58. ikreymer/cdx-index-client: A Command-Line Tool for Using the CommonCrawl Index API

  59. Table 2.2: Datasets Used to Train GPT-3. ‘Weight in Training Mix’ Refers to the Fraction of Examples during Training That Are Drawn from a given Dataset, Which We Intentionally Do Not Make Proportional to the Size of the Dataset. As a Result, When We Train for 300 Billion Tokens, Some Datasets Are Seen up to 3.4 times during Training While Other Datasets Are Seen Less Than Once

  60. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence

  61. Talk:pony

  62. Gary Marcus has co-authored a brief critique of GPT-3

  63. ‘Lizardman survey constant’ directory

  64. https://x.com/GaryMarcus/status/1529623114400681984

  65. ‘inner monologue (AI)’ directory

  66. Large Language Models are Zero-Shot Reasoners

  67. https://medium.com/@ElementalCognition/why-does-ai-get-so-confused-by-language-f5f64a9ef6cc

  68. XLNet: Generalized Autoregressive Pretraining for Language Understanding

  69. It’s All about the Prelude Before the Conversation. You Need to Tell It What the AI Is and Is Not Capable Of. It’s Not Trying to Be Right, It’s Trying to Complete What It Thinks the AI Would Do ツ

  70. https://x.com/paraschopra/status/1284905727388028928

  71. Teaching GPT-3 to Identify Nonsense

  72. Giving GPT-3 a Turing Test

  73. GPT-3 Knows Both the Correct and the (Plausible) Incorrect Answer to a Question.

  74. GPT-3 (AI Dungeon 2) Is Also Capable of Formulating Some Really Bad Medical Advice. Although so Far I Only Managed to Make It Lie to Me Only If It’s Accompanied by a True Answer. It Doesn’t Want to Lie When It’s the Only Answer It Must Give. But It’s Capable of Formulating Lies.

  75. GPT-3 Gives Some Interesting True and False Answers to Some Questions. But It’s Important to Note That It Gives Opposite Answers Just As Often; I Cherry-Picked the Most ‘Sensational’ Ones. Usually It Said the Opposite Thing, and It Also Role-Plays Sometimes (e.g. As a Spy)

  76. #gpt3 Has Some Reasonably Impressive Ability Not Only to Detect Nonsense, but to Explain Why Something Is Nonsensical:

  77. Given That GPT-3 Obviously Has a Very Detailed Model of Language and Grammar, I Was Curious to See If It Could Both Correct Grammatical Errors and Explain the Corrections. The Answer Is ‘Yes’, Although It Took More Retries Than I Thought It Would for Explanations:

  78. I found that getting GPT-3 to add its own "internal monologue" in parentheses to be a helpful strategy…

  79. gpt3_openai_monolog_qa.txt

  80. Really like the Idea of Getting GPT-3 to Introspect on Character’s Internal States (Priming in Bold)

  81. How We Accidentally Gave our Bots Their Personalities

  82. AI Dungeon Players Can Now Translate Their Stories into Emojis by Just Clicking a Button.

  83. https://openai.com/blog/chatgpt/

  84. https://x.com/joshua_saxe/status/1602324297648939008

  85. Artificial Neural Networks Today Are Not Conscious, according to Douglas Hofstadter

  86. Gödel, Escher, Bach author Douglas Hofstadter on the state of AI today § What about AI terrifies you?

  87. Chris Froome, First Man to Cycle through the Eurotunnel Crossing the English Channel

  88. Sudanese Man Who Walked through Channel Tunnel Granted UK Asylum

  89. The Vidette 29 August 1978

  90. ‘Walking on Water’ in the Panama Canal

  91. Contra Hofstadter on GPT-3 Nonsense

  92. https://scale.com/blog/chatgpt-vs-claude

  93. Teaching Models to Express Their Uncertainty in Words

  94. Language Models (Mostly) Know What They Know

  95. Verbal Probability Expressions In National Intelligence Estimates: A Comprehensive Analysis Of Trends From The Fifties Through Post-9/11

  96. 2020-henighan-figure31-qandamodelscaling.jpg

  97. CTRL: A Conditional Transformer Language Model For Controllable Generation

  98. Time-Aware Language Models as Temporal Knowledge Bases

  99. Time Vectors: Time is Encoded in the Weights of Finetuned Language Models

  100. Reassuring

  101. This Is a Python Script As Described in XKCD #1263: ‘Reassuring’. It Generates Thousands of Reassuring Parables about Things Humans Are Better Than Computers at Every Second.

  102. GPT-3’s Completion of the Chinese Room Argument from Searle’s Minds, Brains, and Programs (Original Text Is in Bold)

  103. Why Computers Don’t Need to Match Human Intelligence: With continuing advances in machine learning, it makes less and less sense to compare AI to the human mind

  104. https://x.com/raphamilliere/status/1287047986233708546

  105. Are Humans Intelligent? A Salty AI Op-Ed

  106. Philosophers On GPT-3 (Updated With Replies by GPT-3)

  107. Learning to Learn with Feedback and Local Plasticity

  108. OpenAI API Alchemy: Summarization

  109. https://www.reddit.com/r/IncreasinglyVerbose/

  110. Vectors 3.0: Even More Aphorisms and Ten-Second Essays

  111. I fed the Proverbs of Hell to GPT-3…

  112. Epigrams on Programming

  113. https://www.reddit.com/r/MachineLearning/comments/iaitpu/d_knowledge_discovery_with_gpt3/g1onprj/

  114. Umeshisms

  115. epigram#umeshisms

  116. Can GPT-3 produce new ideas? Partially automating Robin Hanson and others § If you never miss a plane…

  117. https://www.reddit.com/r/boardgames/comments/sbkous/does_this_ai_generated_idea_for_a_board_game/

  118. https://boardgamegeek.com/boardgame/37111/battlestar-galactica-the-board-game

  119. https://boardgamegeek.com/boardgame/240980/blood-on-the-clocktower

  120. Coup: Reformation Board Game

  121. https://boardgamegeek.com/boardgame/150376/dead-of-winter-a-crossroads-game

  122. Good Cop Bad Cop Board Game

  123. Secrets Board Game

  124. https://boardgamegeek.com/boardgame/1678/peloponnesian-war-431-404-bc

  125. Unfathomable Board Game

  126. https://boardgamegeek.com/boardgame/24396/war-on-terror-the-boardgame

  127. Who Should We Eat? Board Game

  128. https://x.com/emollick/status/1652040417104240644

  129. How to Write Usefully

  130. Max Tegmark on How a 'Put-Up-Or-Shut-Up' Resolution Led Him to Work on AI and Algorithmic News Selection

  131. Wikipedia Bibliography:

    1. Anagram

    2. Acrostic

    3. Parity function

    4. Geon (psychology)

    5. Coq (software)

    6. ASCII art

    7. Emoticon

    8. WARC (file format)

    9. Template:Which

    10. Snowclone

    11. Top Road, Trenton, New Jersey

    12. Common Crawl

    13. Cooperative principle

    14. David Ferrucci

    15. English Channel

    16. Channel Tunnel

    17. Channel Tunnel § Illegal attempts to cross and deaths

    18. Cycling in the Channel Tunnel

    19. Lanthanum

    20. Pope John VII of Alexandria

    21. Gold

    22. Meitnerium

    23. Oganesson

    24. Textual criticism § Internal evidence

    25. Johann Philipp Reis

    26. Antonio Meucci

    27. Innocenzo Manzetti

    28. Invention of the telephone

    29. AI effect

    30. Douglas Hofstadter

    31. Gary Marcus

    32. Kai-Fu Lee

    33. Alan Perlis

    34. List of Cowboy Bebop episodes

    35. Dwarf Fortress

    36. Mafia (party game)

    37. Betrayal at House on the Hill

    38. Max Tegmark