GPT-1: Improving Language Understanding with Unsupervised Learning
gpt-2#training-gpt-2-poetry-prefix
Update: Upgrading to 1.5B GPT-2, and adding 22 new subreddit-bots
GPT-3 paper § Figure F.1: Four uncurated completions from a context suggesting the model compose a poem in the style of Wallace Stevens with the title ‘Shadows on the Way’
AI Dungeon: Dragon Model Upgrade—You Can Now Play AI Dungeon With One of the Most Powerful AI Models in the World.
I've Been Testing the Largest of @OpenAI's Models With AI Dungeon and Been Constantly Impressed at How Interesting and Dynamic the Characters Are, like This Queen, Long Thought to Be Dead, Hiding from Enemies and Not Happy about Me Prying into Her Personal Life.
‘AI|Writer’: An AI | Channels Project by @AndrewMayne Using the OpenAI API; ‘AI|Writer’ Is an Experiment Using Artificial Intelligence to Create Simulated Hypothetical Correspondence With Famous Personalities, Both Real and Fictitious
Hi @ID_AA_Carmack, This Is My Attempt to Learn How to Move General AI Forward. I Used OpenAI’s GPT-3 Beta API to Incarnate a Version of You from the Future. I Am Shocked at GPT-3’s Responses, Especially How It Introduced You. All of the Bold Text Is 100% Generated by the Model
‘Simplify: Simple, Easy-To-Understand Explanations for Everything’, Chris Lu
Introducing AI Dungeon Translate: AI Dungeon Players Can Now Translate Their Stories into Emojis by Just Clicking a Button. [ 🤔 💯 🤷‍♂️ 🤔 🤔 🤔 💯]
OpenAI API Alchemy: Turn a Script into a Novel (and Vice Versa)
Say Goodbye to Painful Email Reading and Writing: Magic Email Is Your AI-Powered Email Assistant That Summarises Your Emails and Generates Professional Emails from Brief One-Line Descriptions. Get through All of Your Emails 5x Faster so You Can Free up More Time for Your Important Work.
I Made a Fully Functioning Search Engine on top of GPT-3. For Any Arbitrary Query, It Returns the Exact Answer AND the Corresponding URL. Look at the Entire Video. It’s MIND BLOWINGLY Good.
Interactive Decomposition of Forecasting Questions Using GPT-3. All Questions Auto-Generated. Part of Our Work on Tools for Thought @oughtinc.
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Unlike That OTHER Guy Who Just Wrote Silly Things and Lied to Pass Them off As the Work of an AI, I Actually DID Get the GPT-3 Language Model to Generate New Seinfeld Scripts. Behold: 4 New Puffy Shirt Episodes. (The First 5 Lines Are Canon, the Rest New)
This Is the OpenAI API. It Makes Spookily Good Twitter Bots. 13⁄10 Would Retweet
Expert judgment on markers to deter inadvertent human intrusion into the Waste Isolation Pilot Plant
GPT-3: An AI That's Eerily Good at Writing Almost Anything
A Wild Adventure With GPT-3: Featuring Indian Mythology and Neruda
Love Letters, Written by a Toaster. The Poetic Power of Artificial Intelligence (GPT-3)
An Essay about Artificial Intelligence, Emotional Intelligence, and Finding an Ending
GPT-3 Generated These Color Scales, given Some Existing Scales and a Hue Name (or Emoji‽) As a Prompt. Let That Sink In.
Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color
Shared understanding of color among sighted and blind adults
I Just Built a Functioning React App by Describing What I Wanted to GPT-3. I’m Still in Awe.
I Built a Todo List App Simply by Describing It to GPT-3. It Generated the React Code for a Fully Functioning App within Seconds. I’m Becoming More Impressed and Aware of Its Capabilities Every Single Day.
I Gave GPT-3 Access to Chrome With the Objective ‘Please Buy Me AirPods’...It Successfully Made It to the Product Page, but Got Sidetracked With Walmart’s Privacy Policy. Since Even a Simplified DOM Is Far Too Large for a Single Prompt, Multiple Prompts Are given Different Chunks of the DOM, Each Generating Their Own ‘Interaction’. Another Prompt Then Takes All the Proposed Interactions and Selects the Best One, Sort of like a Tournament Bracket. For More Complex Web Pages, the Time It Takes to Generate an Action Scales at 𝒪(log n) With the Size of the DOM—Really Fast! It Also Gets around Token Limits, so You Could Technically Process an Infinitely Large DOM!
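The chunk-then-tournament scheme described in the entry above is easy to sketch. The following is a minimal, hypothetical reconstruction (helper names and prompt wording are mine, and the legacy `openai.Completion` endpoint is assumed), not the author's actual code:

```python
# Hypothetical reconstruction of the chunked-DOM prompting scheme:
# propose one interaction per DOM chunk, then reduce the proposals
# pairwise, tournament-style, until a single action remains.
import openai

def complete(prompt: str) -> str:
    """One GPT-3 call (davinci was the largest engine available)."""
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=64,
        temperature=0.0, stop=["\n"])
    return resp["choices"][0]["text"].strip()

def propose_action(dom_chunk: str, objective: str) -> str:
    """Each chunk of the simplified DOM gets its own prompt."""
    return complete(
        f"Objective: {objective}\nDOM fragment:\n{dom_chunk}\n"
        "Best single interaction (click/type) for this fragment:")

def pick_better(a: str, b: str, objective: str) -> str:
    """Pairwise comparison; repeated rounds form the 'tournament bracket'."""
    verdict = complete(
        f"Objective: {objective}\nCandidate A: {a}\nCandidate B: {b}\n"
        "Which candidate better advances the objective? Answer A or B:")
    return a if verdict.startswith("A") else b

def choose_action(dom: str, objective: str, chunk_size: int = 2000) -> str:
    chunks = [dom[i:i + chunk_size] for i in range(0, len(dom), chunk_size)]
    candidates = [propose_action(c, objective) for c in chunks]
    while len(candidates) > 1:  # each round halves the pool: O(log n) rounds
        winners = [pick_better(a, b, objective)
                   for a, b in zip(candidates[0::2], candidates[1::2])]
        if len(candidates) % 2:  # odd candidate out gets a bye
            winners.append(candidates[-1])
        candidates = winners
    return candidates[0]
```

Run serially as written, the proposal step still costs one API call per chunk; the 𝒪(log n) wall-clock scaling presumably comes from issuing each round's calls concurrently.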
First Work With #GPT3, I Asked It to Draw an Image. I Gave It Seed SVG Code and Asked It to Generate an SVG Code by Itself. Turns out It Drew Something Resembling a Floppy Disk.
GPT-3 Does The Work™️ on Generating SVG Charts, With a Quick Web App I Built With @billyjeanbillyj. With a Short Sentence Describing What You Want to Plot, It’s Able to Generate Charts With Titles, Labels and Legends from about a Dozen Primed Examples. It Works by Compiling the Sentences to Vega-Lite (@vega_vis) by @arvindsatya1, @kanitw, @domoritz, and Jeffrey Heer. Vega-Lite Is a High-Level Grammar of Interactive Graphics Built for Exploratory Data Visualization.
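The compile-to-Vega-Lite step is a straightforward few-shot prompt. A minimal sketch, assuming the legacy `openai.Completion` endpoint; the two primed examples below are illustrative stand-ins, not the demo's actual dozen:

```python
# Sketch: few-shot 'compilation' of an English chart description into a
# Vega-Lite spec fragment (no "data" block; the web app would attach that
# before rendering with the Vega-Lite runtime).
import json
import openai

FEW_SHOT = """\
Description: a bar chart of population by country
Spec: {"mark": "bar", "encoding": {"x": {"field": "country", "type": "nominal"}, "y": {"field": "population", "type": "quantitative"}}}
Description: a line chart of price over time
Spec: {"mark": "line", "encoding": {"x": {"field": "date", "type": "temporal"}, "y": {"field": "price", "type": "quantitative"}}}
"""

def to_vega_lite(description: str) -> dict:
    resp = openai.Completion.create(
        engine="davinci",
        prompt=FEW_SHOT + f"Description: {description}\nSpec: ",
        max_tokens=200, temperature=0.0, stop=["\n"])
    return json.loads(resp["choices"][0]["text"])  # one-line JSON spec

print(to_vega_lite("a scatter plot of height against weight"))
```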
Starting the Day With a Chart Building Demo. Primed GPT-3 With Chart.js Scripts to Generate the Below.
After Many Hours of Retraining My Brain to Operate in This "Priming" Approach, I Also Now Have a Sick GPT-3 Demo: English to LaTeX Equations! I’m Simultaneously Impressed by Its Coherence and Amused by Its Brittleness—Watch Me Test the Fundamental Theorem of Calculus.
GPT-3 Does The Work™ on Some Business Analyst SQL Queries given Quite a Few Examples from (https://techbeamers.com/sql-query-questions-answers-for-practice/). What’s Wildest Is That It Knows a Few Functions like SUBSTR given No Examples in That Syntax. More to Come Re: GPT-3 for Automating Data Analytics Tasks.
Automating My Job With GPT-3: Using GPT-3 Instruct to Generate Database-Ready SQL to Answer Business Questions
Who Models the Models That Model Models? An Exploration of GPT-3’s In-Context Model Fitting Ability
This Changes Everything. 🤯 With GPT-3, I Built a Figma Plugin to Design for You. I Call It ‘Designer’
https://web.archive.org/web/20200727092603/https://spronkoid.github.io/recycling/Recyclingisascam.html
https://bramses.notion.site/ERB-of-History-GPT-3-Bot-784e99b7fea0462f95489d74a568c4ad
I Was Thinking of Using #gpt3 to Generate 200 Word RPGs (Tiny Complete Games) but I’m Getting Quite Distracted Watching It *play* 200 Word RPG Challenge Entries. It Didn’t Account for the Tokens but It Got the General Idea without Any Example Gameplay in the Prompt.
Turns out #GPT3 Can Do Vision Too 😉 Built an Ingredient Parser: Take a Pic of Any Nutrition Label (Google to Extract Text), and GPT-3 Will Identify Ingredients, Find an Emoji, Determine If It’s Unhealthy, and Give a Definition 🤯
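The pipeline described above decomposes cleanly into OCR plus a structuring prompt. A rough sketch, assuming the google-cloud-vision client for the OCR step; the prompt wording is a guess at the demo's, which presumably primed GPT-3 with worked examples:

```python
# Sketch of the ingredient-parser pipeline: OCR the label photo with
# Google Cloud Vision, then hand the raw text to GPT-3 to structure.
import openai
from google.cloud import vision

def ocr(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    return response.text_annotations[0].description  # full extracted text

def parse_ingredients(label_text: str) -> str:
    prompt = (
        "Nutrition label text:\n" + label_text + "\n\n"
        "For each ingredient, give: name, an emoji, healthy/unhealthy, "
        "and a one-line definition.\n")
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=300, temperature=0.3)
    return resp["choices"][0]["text"]

print(parse_ingredients(ocr("label.jpg")))
```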
Image GPT (iGPT): We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples
Decision Transformer: Reinforcement Learning via Sequence Modeling
The Aleph: Borgean Fantastic Hyperreality Revisited by GPT-3
_Passages from the Life of a Philosopher_ (1864), Ch. 5 ‘Difference Engine No. 1’
Mechanical Sympathy: Understanding the Hardware Makes You a Better Developer
scaling-hypothesis#meta-learning
https://gptprompts.wikidot.com/linguistics:word-in-context
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Calibrate Before Use: Improving Few-Shot Performance of Language Models
I Asked GPT-3 about Xinjiang and It Broke...The Pro-CCP Responses Seem to Have Worse English, like including ‘the’ in ‘the Stability Maintenance’. Unnecessary Articles Are a Tic of ESL Speakers. The Topic Seems to Prompt GPT to Draw from Either Western or Chinese State Media Sources, With the Politics That Come With It.
Codex: Evaluating Large Language Models Trained on Code: Figure 14: When the Prompt Includes Subtle Bugs, Codex Tends to Produce Worse Code Than It Is Capable of Producing. This Gap Increases With Model Size. Including an Instruction to Write Correct Code Helps a Little but Does Not Fix the Problem. Even With No Examples in the Context, Codex Produces Substantially Worse Code Than It Is Capable Of.
Adversarial Reprogramming of Text Classification Neural Networks
Deep Learning: Classics and Trends: Language Models Are Few-Shot Learners
A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation
Trading Off Diversity and Quality in Natural Language Generation
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
https://web.media.mit.edu/~minsky/papers/Why%20programming%20is--.html
Teaching GPT-3 to do a brute force 'for loop' checking answers also seems to work
I found that getting GPT-3 to add its own "internal monologue" in parentheses to be a helpful strategy…
How to Dramatically Improve the Reasoning Ability of GPT-3
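The three entries above describe the same family of tricks: make the model show its work before answering. A minimal sketch combining the brute-force 'for loop' check with a parenthetical internal monologue; the prompt text is illustrative, not quoted from those posts, and the legacy `openai.Completion` endpoint is assumed:

```python
# Sketch: prime GPT-3 to enumerate and check candidates inside a
# parenthetical monologue before committing to an answer.
import openai

PROMPT = """\
Q: Which of 31, 33, 35, 37 is prime?
(Let me check each one. 31: not divisible by 2, 3, or 5, so prime.
33 = 3 * 11, not prime. 35 = 5 * 7, not prime. 37: not divisible by
2, 3, or 5, so prime.)
A: 31 and 37

Q: Which of 49, 51, 53 is prime?
("""

resp = openai.Completion.create(
    engine="davinci", prompt=PROMPT, max_tokens=150,
    temperature=0.0, stop=["\n\n"])
print(resp["choices"][0]["text"])  # monologue, then "A: 53"
```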
https://www.reddit.com/r/AIDungeon/comments/i1qhg0/the_dragon_ai_just_got_worse/
I’ve Noticed a Number of People Using AI Dungeon to Test GPT-3’s Abilities. While It’s a Great Way to See How GPT-3 Can Power an Interesting Application, It’s a Poor Test of GPT-3’s Abilities in General. The First Generation of Any Custom Prompt Is Actually GPT-2.
The ‘AI Dungeons’ Dragon Model Is Heavily Path Dependent (testing GPT-3 on Ethics)
Efficient Attention: Breaking The Quadratic Transformer Bottleneck
T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
GPT-2 Preference Learning for Music Generation § Optimization by Backprop, Not Blackbox
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Co-Writing Screenplays and Theatre Scripts with Language Models (Dramatron): An Evaluation by Industry Professionals
Scaling Language Models: Methods, Analysis & Insights from Training Gopher § Table A40: Conversations Can Create the Illusion of Creativity
https://gist.github.com/moyix/ca4091f16f0b5011bfa8f3f97f705a0d
https://wordcraft-writers-workshop.appspot.com/stories/diana-hamilton
Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio
Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing (CoPoet)
I Think I Have Had Enough of These Jokes. Dear GPT-3 I Command You to Generate All Possible Jokes of This Type. GPT-3: Your Wish Is My Command:
Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens
CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3
DALL·E 2: Hierarchical Text-Conditional Image Generation with CLIP Latents § 7. Limitations and Risks
What’s AGI, and Why Are AI Experts Skeptical? ChatGPT and other bots have revived conversations on artificial general intelligence. Scientists say algorithms won’t surpass you any time soon
GPT-3 vs Water Cooler Trivia participants: A Human vs Robot Showdown
There Once Was a Really Bad Poet, It Was Automated but You Didn’t Know It
https://www.reddit.com/r/slatestarcodex/comments/1201v68/10word_quote_a_short_and_simple_failure_mode_of/jdjsx43/
LMentry: A Language Model Benchmark of Elementary Language Tasks
Math: OpenAI API Can Do Some Math out of the Gate, but Most Math, It Seems, It Has to Learn. Many Times, the Numbers That It Spits out Are Just Random. However, including Different Priming Prompts Can Result in Decent Results.
Analysing Mathematical Reasoning Abilities of Neural Models
Generative Language Modeling for Automated Theorem Proving
Investigating the Limitations of the Transformers with Simple Arithmetic Tasks
Deep-speare: A Joint Neural Model of Poetic Language, Meter and Rhyme
You’re Right, Spaces Make All the Difference! Copycat Is Toast! (Except for the Last One :-) (GPT-3 Output in Red).
It Just so Happens I Am Watching a 5-Year-Old Right Now. Here Are the Results! / / Q: If Abc Goes to Abd, What Does Pqr Go To? / A: S / / Q: If Abc Goes to Abd, What Does Ppqqrr Go To? / A: Ss / / Q: If Abc Goes to Abd, What Does Mrrjjj Go To? / A: Kkk / Q: If Abc Goes to Abd, What Does Xyz Go To? / A: Now I Know My ABCs, next Time Won’t You Sing With Me! / / Q: If Axbxcx Goes to Abc, What Does Xpxqxr Go To? / A: S / / Hope This Enlightens Someone
Generative Language Modeling for Automated Theorem Proving § Experiments
BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance
Tokens Are Definitely Shorter Than English, but the Performance Is Even Worse. Getting It to Explain Its Thinking, It Clearly Can’t Tell at All Which Sentences/Words Sound the Same, Which Is Odd, Since Homonyms Tend to Have the Same Letters in Russian...On the Other Hand, the Strength of the Model Is Definitely Not As Good Outside of English.
Human: Did You Know That There Is No Country in Africa That Starts With the Lett...
Unigram LM: Byte Pair Encoding is Suboptimal for Language Model Pretraining
CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization
ByT5: Towards a token-free future with pre-trained byte-to-byte models
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
The Value Equivalence Principle for Model-Based Reinforcement Learning
Humans Who Are Not Concentrating Are Not General Intelligences
https://www.theintrinsicperspective.com/p/the-banality-of-chatgpt
Playing #chess With GPT-3. Built Using Chess.js, Chessboard.js and @OpenAI’s GPT-3. White Is Me, Black Is GPT-3. GPT-3 Went for the Capture First and Did a Castling Move. Amazing!
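The original demo wired Chess.js and Chessboard.js to the API in the browser; a rough Python equivalent of the core loop, using python-chess to reject illegal completions (move choices and prompt format are assumptions, not the demo's code):

```python
# Sketch: play GPT-3 at chess by feeding it PGN movetext and retrying
# until it samples a legal move.
import chess
import openai

board = chess.Board()
moves: list[str] = []  # SAN moves so far

def pgn_movetext(moves: list[str]) -> str:
    out = []
    for i, san in enumerate(moves):
        if i % 2 == 0:
            out.append(f"{i // 2 + 1}.")  # move number before White's move
        out.append(san)
    return " ".join(out)

def gpt3_reply(moves: list[str]) -> str:
    resp = openai.Completion.create(
        engine="davinci",
        prompt=pgn_movetext(moves) + " ",  # GPT-3 continues the game score
        max_tokens=8, temperature=0.5)
    return resp["choices"][0]["text"].split()[0].rstrip(".")

# White (here hard-coded; the demo takes it from the board UI) plays,
# GPT-3 answers as Black, retrying until the move is legal.
for human_san in ["e4", "Nf3"]:
    moves.append(human_san)
    board.push_san(human_san)
    while True:
        san = gpt3_reply(moves)
        try:
            board.push_san(san)  # raises ValueError on an illegal move
            moves.append(san)
            break
        except ValueError:
            pass  # illegal or unparsable; sample again
print(pgn_movetext(moves))
```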
On the Sizes of OpenAI API Models: ...Ada, Babbage, Curie and Davinci Line up Closely With 350M, 1.3B, 6.7B, and 175B Respectively.
https://www.reddit.com/r/GPT3/comments/ukbba5/the_rickrollian_language_of_william_shakespeare/
https://www.reddit.com/r/mlscaling/comments/pa4h0c/ai_can_write_in_english_now_its_learning_other/ha36d60/
https://www.reddit.com/r/GPT3/comments/v8xsy9/artificial_neural_networks_are_making_strides/ibv9nhm/
ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models
https://slatestarscratchpad.tumblr.com/post/621298010168705024/slatestarscratchpad-the-ai-projects-ive-found
CorentinJ/Real-Time-Voice-Cloning: Clone a Voice in 5 Seconds to Generate Arbitrary Speech in Real-Time
Rosebud AI: Build Games at the Speed of Thought. AI Powered Game Development
I Used @OpenAI #GPT3 to Convert Sentences to a Gentler and Non-Confrontational Tone. The Initial Four Input/output Pairs Are Training Examples, and Then I Tested It With Three New Inputs:
To Be Fair, You Have To Have a Very High IQ to Understand Rick and Morty
https://www.reddit.com/r/rational/comments/poixjd/review_the_fall_of_doc_future/hcy7owh/
Back From Yet Another Globetrotting Adventure, Indiana Jones Checks His Mail And Discovers That His Bid For Tenure Has Been Denied
epigram#less-known-mi6-licenses
Jukebox: We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We’re releasing the model weights and code, along with a tool to explore the generated samples.
Ode on Intimations of Immortality from Recollections of Early Childhood by William Wordsworth
https://papergains.co/pdfs/Transformer_Poetry-978-1-7341647-0-1.pdf#page=3
Poetry Will Not Optimize, or What Is Literature to AI? § pg7
Nevermore. / Made With @midjourney / @images_ai ✨ / #AIart #aiartcommunity #artwork #Artists / #artist #AIartwork #generativeart #art
https://www.reddit.com/r/promptengineers/comments/thxnsx/from_gpt3s_new_edit_mode_it_can_fill_in_acrostic/
Acrostic Poem Examples: Learn to Make Your Own Name or Word Poetry With These Acrostic Poem Examples and a Handy Template
https://www.lesswrong.com/posts/W3DbNmuMJLWRtE5ny/predictions-for-gpt-n#J22o3qPeYSpc2M2ib
https://www.reddit.com/r/MachineLearning/comments/1135tir/d_glm_130b_chineseenglish_bilingual_model/
The First Sally (A), Or, Trurl’s Electronic Bard § Love And Tensor Algebra
https://tvtropes.org/pmwiki/pmwiki.php/Platform/FimfictionDotNet
On the New Forcers of Conscience under the Long Parliament
https://www.reddit.com/r/GPT3/comments/ith31k/have_bad_analogies_been_tried_with_gpt3_some/
https://towardsdatascience.com/gpt-3-creative-potential-of-nlp-d5ccae16c1ab
https://www.lesswrong.com/posts/Mzrs4MSi58ujBLbBG/you-can-probably-amplify-gpt3-directly
OpenAI’s Latest Breakthrough Is Astonishingly Powerful, but Still Fighting Its Flaws
https://www.reddit.com/r/slatestarcodex/comments/hrx2id/a_collection_of_amazing_things_gpt3_has_done/fy7jl0y/
GPT-3: Using Fiction to Demonstrate How Prompts Impact Output Quality
https://medium.com/@marcinkraszewski/gpt-3-project-ideas-with-code-5940c275bc41
How I Used GPT-3 to Hit Hacker News Front Page 5 times in 3 Weeks
TLDR: I Go from Wanting a Machine Learning Model to Getting That Trained Model, without Actually Having a Dataset.
https://www.lesswrong.com/posts/4JeAoTrAuByXGw6zm/updated-how-does-gpt2-s-training-corpus-capture-internet
Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study
Extrapolating to Unnatural Language Processing With GPT-3’s In-Context Learning: The Good, the Bad, and the Mysterious
https://www.reddit.com/r/slatestarcodex/comments/hfouw5/gpt3_for_creative_fiction_poetry_dialogue_puns/
https://www.reddit.com/r/MediaSynthesis/comments/hfoulh/gpt3_for_creative_fiction_poetry_dialogue_puns/
https://www.reddit.com/r/HPMOR/comments/hgw2zq/gpt3_neural_net_completions_of_mor_chapter_16/
https://www.reddit.com/r/SubSimulatorGPT2Meta/comments/hl0x18/gwerns_post_on_gpt3_has_some_gold/