GPT-3 Creative Fiction
gpt-3#prompt-programming
GPT-3 Creative Fiction § BPEs
gpt-3#effective-prompt-programming
GPT-3 Creative Fiction § Acrostics
https://arxiv.org/pdf/2005.14165.pdf#page=23
https://x.com/j_erhardt
Winograd-Style Tasks
AI2 Leaderboard
Context Stuffing
Theoretical Limitations of Self-Attention in Neural Sequence Models
Universal Transformers
Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention
I Think ‘GPT-3 Can’t Do Parity Checking’ Isn’t Quite Right. It Can Clearly Pattern Match the Algorithm, Almost Perfectly. It’s Just a Little Mistake Prone. Here, I Invented a Syntax for Having It Evaluate Parity on Each Pair of Digits. It…almost Gets It Right.
Inspired by an AI Dungeon Example Where Math Is Discussed in Simple Language, I Seem to Be Having Decent Results Here. I Had To… Not Just Say What Parity IS but HOW to Calculate It (‘Count the Number of 1s’) and Then It Sort of Walks Itself through Decently. Tho Kinda Confused
gpt-scrolls/scrolls/rephrase/conceptual-blending.txt at master · maraoz/gpt-scrolls
Experimenting With the Ideas of Linguistic Relativity, Specifically the Weak Version of the Sapir–Whorf Hypothesis. GPT-3 Was Able to Generate New Concepts With Seemingly Relevant Unique Spelling.
Asking GPT-3 How Two Things Are Similar:
Make GPT-3 Complete Coq Files.
GPT-3 ASCII Rabbit / Fish
Asking GPT-3 to Draw ASCII Images Produces the Same Drawings Frequently
https://x.com/repligate/status/1635591172189196289
https://x.com/YitziLitt/status/1632404026657591303
Scarecrow: A Framework for Scrutinizing Machine Text
HTLM: Hyper-Text Pre-Training and Prompting of Language Models
FineWeb: Decanting the Web for the Finest Text Data at Scale
GPT-3 calculating derivatives
EleutherAI
EleutherAI/gpt-neo: An Implementation of Model-Parallel GPT-2 and GPT-3-Style Models Using the Mesh-TensorFlow Library
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
GPT-3 for Fixing OCR Errors:
On Holy Wars and a Plea for Peace
Technology Holy Wars are Coordination Problems
Long Short-Term Memory
clean-pdf.py
https://openai.com/index/whisper/
tla#blind-spot
https://openai.com/index/hello-gpt-4o/
https://x.com/mattshumer_/status/1636512490195501056
All Your Questions Answered
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
https://aclanthology.org/2020.acl-main.463.pdf#page=13
https://aclanthology.org/2020.acl-main.463.pdf#page=14
GPT-3 Creative Fiction § Dare To Be Stupid?
Two tiny terriers chase very large bear out of California home
Staying Safe Around Bears
Bear Attacks
One Man’s Modus Ponens
GPT-2 and the Nature of Intelligence
https://www.lesswrong.com/posts/L5JSMZQvkBAx9MD5A/to-what-extent-is-gpt-3-capable-of-reasoning?commentId=eq6FTwG2yWuBdPofs
General-Purpose Question-Answering with Macaw
Hydrochloric Acid Poisoning: MedlinePlus Medical Encyclopedia
Tests Show That the Popular AI Still Has a Poor Grasp of Reality.
Experiments testing GPT-3’s ability at commonsense reasoning: results
A Robot Wrote This Entire Article. Are You Scared Yet, Human? We Asked GPT-3, OpenAI’s Powerful New Language Generator, to Write an Essay for Us from Scratch. The Assignment? To Convince Us Robots Come in Peace | For More about GPT-3 and How This Essay Was Written and Edited, Please Read Our Editor’s Note Below
https://x.com/GaryMarcus/status/1303318742286311429
Q. Why Doesn’t the API Seem to Have Knowledge about Recent Events? A. The Models’ Training Data Cuts Off in October 2019, so They May Not Have Knowledge of Current Events. We Plan to Add More Continuous Training in the Future.
ikreymer/cdx-index-client: A Command-Line Tool for Using the CommonCrawl Index API
Table 2.2: Datasets Used to Train GPT-3. ‘Weight in Training Mix’ Refers to the Fraction of Examples during Training That Are Drawn from a given Dataset, Which We Intentionally Do Not Make Proportional to the Size of the Dataset. As a Result, When We Train for 300 Billion Tokens, Some Datasets Are Seen up to 3.4 times during Training While Other Datasets Are Seen Less Than Once
The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
Talk:pony
Gary Marcus has co-authored a brief critique of GPT-3
‘Lizardman survey constant’ directory
https://x.com/GaryMarcus/status/1529623114400681984
‘inner monologue (AI)’ directory
Large Language Models are Zero-Shot Reasoners
https://medium.com/@ElementalCognition/why-does-ai-get-so-confused-by-language-f5f64a9ef6cc
XLNet: Generalized Autoregressive Pretraining for Language Understanding
It’s All about the Prelude Before the Conversation. You Need to Tell It What the AI Is and Is Not Capable Of. It’s Not Trying to Be Right, It’s Trying to Complete What It Thinks the AI Would Do ツ
https://x.com/paraschopra/status/1284905727388028928
Teaching GPT-3 to Identify Nonsense
Giving GPT-3 a Turing Test
GPT-3 Knows Both the Correct and the (Plausible) Incorrect Answer to a Question.
GPT-3 (AI Dungeon 2) Is Also Capable of Formulating Some Really Bad Medical Advice. Although so Far I Only Managed to Make It Lie to Me Only If It’s Accompanied by a True Answer. It Doesn’t Want to Lie When It’s the Only Answer It Must Give. But It’s Capable of Formulating Lies.
GPT-3 Gives Some Interesting True and False Answers to Some Questions. But It’s Important to Note That It Gives Opposite Answers Just As Often; I Cherry-Picked the Most ‘Sensational’ Ones. Usually It Said the Opposite Thing, and It Also Role-Plays Sometimes (e.g. As a Spy)
#gpt3 Has Some Reasonably Impressive Ability Not Only to Detect Nonsense, but to Explain Why Something Is Nonsensical:
Given That GPT-3 Obviously Has a Very Detailed Model of Language and Grammar, I Was Curious to See If It Could Both Correct Grammatical Errors and Explain the Corrections. The Answer Is ‘Yes’, Although It Took More Retries Than I Thought It Would for Explanations:
I found that getting GPT-3 to add its own "internal monologue" in parentheses is a helpful strategy…
gpt3_openai_monolog_qa.txt
Really like the Idea of Getting GPT-3 to Introspect on Character’s Internal States (Priming in Bold)
How We Accidentally Gave our Bots Their Personalities
AI Dungeon Players Can Now Translate Their Stories into Emojis by Just Clicking a Button.
https://openai.com/blog/chatgpt/
https://x.com/joshua_saxe/status/1602324297648939008
Artificial Neural Networks Today Are Not Conscious, According to Douglas Hofstadter
Gödel, Escher, Bach author Douglas Hofstadter on the state of AI today § What about AI terrifies you?
Chris Froome, First Man to Cycle through the Eurotunnel Crossing the English Channel
Sudanese Man Who Walked through Channel Tunnel Granted UK Asylum
The Vidette 29 August 1978
‘Walking on Water’ in the Panama Canal
Contra Hofstadter on GPT-3 Nonsense
https://scale.com/blog/chatgpt-vs-claude
Teaching Models to Express Their Uncertainty in Words
Language Models (Mostly) Know What They Know
Verbal Probability Expressions In National Intelligence Estimates: A Comprehensive Analysis Of Trends From The Fifties Through Post-9/11
2020-henighan-figure31-qandamodelscaling.jpg
CTRL: A Conditional Transformer Language Model For Controllable Generation
Time-Aware Language Models as Temporal Knowledge Bases
Time Vectors: Time is Encoded in the Weights of Finetuned Language Models
Reassuring
This Is a Python Script As Described in XKCD #1263: ‘Reassuring’. It Generates Thousands of Reassuring Parables about Things Humans Are Better Than Computers at Every Second.
GPT-3’s Completion of the Chinese Room Argument from Searle’s Minds, Brains, and Programs (Original Text Is in Bold)
Why Computers Don’t Need to Match Human Intelligence: With continuing advances in machine learning, it makes less and less sense to compare AI to the human mind
https://x.com/raphamilliere/status/1287047986233708546
Are Humans Intelligent? A Salty AI Op-Ed
Philosophers On GPT-3 (Updated With Replies by GPT-3)
Learning to Learn with Feedback and Local Plasticity
OpenAI API Alchemy: Summarization
https://www.reddit.com/r/IncreasinglyVerbose/
Vectors 3.0: Even More Aphorisms and Ten-Second Essays
I fed the Proverbs of Hell to GPT-3…
Epigrams on Programming
https://www.reddit.com/r/MachineLearning/comments/iaitpu/d_knowledge_discovery_with_gpt3/g1onprj/
Umeshisms
epigram#umeshisms
Can GPT-3 produce new ideas? Partially automating Robin Hanson and others § If you never miss a plane…
https://www.reddit.com/r/boardgames/comments/sbkous/does_this_ai_generated_idea_for_a_board_game/
https://boardgamegeek.com/boardgame/37111/battlestar-galactica-the-board-game
https://boardgamegeek.com/boardgame/240980/blood-on-the-clocktower
Coup: Reformation Board Game
https://boardgamegeek.com/boardgame/150376/dead-of-winter-a-crossroads-game
Good Cop Bad Cop Board Game
Secrets Board Game
https://boardgamegeek.com/boardgame/1678/peloponnesian-war-431-404-bc
Unfathomable Board Game
https://boardgamegeek.com/boardgame/24396/war-on-terror-the-boardgame
Who Should We Eat? Board Game
https://x.com/emollick/status/1652040417104240644
How to Write Usefully
Max Tegmark on How a 'Put-Up-Or-Shut-Up' Resolution Led Him to Work on AI and Algorithmic News Selection