gpt-2#gpt-2-1-5b
GPT-3: Language Models are Few-Shot Learners
GPT-1: Improving Language Understanding with Unsupervised Learning
Better Language Models and Their Implications
gpt-2#training-gpt-2-poetry-prefix
gpt-2#gpt-2-345m
twdne#text
GPT-2 Folk Music
GPT-2 Preference Learning for Music Generation
A Very Unlikely Chess Game
Update: Upgrading to 1.5B GPT-2, and adding 22 new subreddit-bots
GPT-3 paper § Figure F.1: Four uncurated completions from a context suggesting the model compose a poem in the style of Wallace Stevens with the title ‘Shadows on the Way’
GPT-3 Github JSON Dump Reformatted to Readable HTML
OpenAI API Beta homepage
AI Dungeon 2
AI Dungeon 2: Dragon Model Upgrade—You Can Now Play AI Dungeon With One of the Most Powerful AI Models in the World.
I’ve Been Testing the Largest of @OpenAI’s Models With AI Dungeon and Been Constantly Impressed at How Interesting and Dynamic the Characters Are, like This Queen, Long Thought to Be Dead, Hiding from Enemies and Not Happy about Me Prying into Her Personal Life.
Excel_tabulate_v3_biz on Vimeo
https://cdn.openai.com/API/English_Bash_Python.mp4
The AI Channels Project
‘AI|Writer’: an AI|Channels Project by @AndrewMayne Using the OpenAI API; ‘AI|Writer’ Is an Experiment Using Artificial Intelligence to Create Simulated Hypothetical Correspondence With Famous Personalities, Both Real and Fictitious
Hi @ID_AA_Carmack, This Is My Attempt to Learn How to Move General AI Forward. I Used OpenAI’s GPT-3 Beta API to Incarnate a Version of You from the Future. I Am Shocked at GPT-3’s Responses, Especially How It Introduced You. All of the Bold Text Is 100% Generated by the Model
OpenAI API Alchemy: Summarization
‘Simplify: Simple, Easy-To-Understand Explanations for Everything’, Chris Lu
https://x.com/Wattenberger/status/1412480516268437512
Introducing AI Dungeon Translate: AI Dungeon Players Can Now Translate Their Stories into Emojis by Just Clicking a Button. [🤔 💯 🤷‍♂️ 🤔 🤔 🤔 💯]
OpenAI API Alchemy: Emoji Storytelling 🤖
Multimodal Few-Shot Learning with Frozen Language Models
OpenAI API Alchemy: Turn a Script into a Novel (And vice Versa)
Say Goodbye to Painful Email Reading and Writing: Magic Email Is Your AI-Powered Email Assistant That Summarises Your Emails and Generates Professional Emails from Brief One-Line Descriptions. Get through All of Your Emails 5x Faster so You Can Free up More Time for Your Important Work.
https://x.com/michaeltefula/status/1285505897108832257
OpenAI API Alchemy: Smart Formatting and Code Creation
I Made a Fully Functioning Search Engine on top of GPT-3. For Any Arbitrary Query, It Returns the Exact Answer AND the Corresponding URL. Look at the Entire Video. It’s MIND BLOWINGLY Good.
Interactive Decomposition of Forecasting Questions Using GPT-3. All Questions Auto-Generated. Part of Our Work on Tools for Thought @oughtinc.
ETHICS: Aligning AI With Shared Human Values
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
https://x.com/jephjacques/status/1279537349974732800
https://x.com/StarTrekAI
Unlike That other Guy Who Just Wrote Silly Things and Lied to Pass Them off As the Work of an AI, I Actually did Get the GPT-3 Language Model to Generate New Seinfeld Scripts. Behold: 4 New Puffy Shirt Episodes. (The First 5 Lines Are Canon, the Rest New)
gpt-3-experiments/examples at master
This Is the OpenAI API. It Makes Spookily Good Twitter Bots. 13⁄10 Would Retweet
A 10,000 Year Warning
Expert judgment on markers to deter inadvertent human intrusion into the Waste Isolation Pilot Plant
Fiction by Neil Gaiman and Terry Pratchett by GPT-3
GPT-3: An AI That's Eerily Good at Writing Almost Anything
Elon Musk By Dr. Seuss (GPT-3)
A Wild Adventure With GPT-3: Featuring Indian Mythology and Neruda
Apropos of nothing
https://www.youtube.com/watch?v=7Y5KsN6ehvk
Love Letters, Written by a Toaster. The Poetic Power of Artificial Intelligence (GPT-3)
Singular: Possible futures of the singularity
An Essay about Artificial Intelligence, Emotional Intelligence, and Finding an Ending
https://x.com/danielbigham/status/1295864369713209351
AI Am I? (The New Aesthetic)
https://x.com/TomerUllman/status/1363851329463087109
https://www.reddit.com/r/aigreentext/
Greentext Stories
GPT-3 Generated These Color Scales, given Some Existing Scales and a Hue Name (Or Emoji‽) As a Prompt. Let That Sink In.
Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color
Shared understanding of color among sighted and blind adults
https://x.com/sharifshameem/status/1282676454690451457
I Just Built a functioning React App by Describing What I Wanted to GPT-3. I’m Still in Awe.
I Built a Todo List App Simply by Describing It to GPT-3. It Generated the React Code for a Fully Functioning App within Seconds. I’m Becoming More Impressed and Aware of Its Capabilities Every Single Day.
I Gave GPT-3 Access to Chrome With the Objective ‘Please Buy Me AirPods’…It Successfully Made It to the Product Page, but Got Sidetracked With Walmart’s Privacy Policy. Since Even a Simplified DOM Is Far Too Large for a Single Prompt, Multiple Prompts Are given Different Chunks of the DOM, Each Generating Their Own ‘Interaction’. Another Prompt Then Takes All the Proposed Interactions and Selects the Best One, Sort of like a Tournament Bracket. For More Complex Web Pages, the Time It Takes to Generate an Action Scales at 𝒪(log n) With the Size of the DOM—Really Fast! It Also Gets around Token Limits, so You Could Technically Process an Infinitely Large DOM!
First Work With GPT-3, I Asked It to Draw an Image. I Gave It Seed SVG Code and Asked It to Generate an SVG Code by Itself. Turns out It Drew Something Resembling a Floppy Disk.
GPT-3 Does The Work™️ on Generating SVG Charts, With a Quick Web App I Built With @billyjeanbillyj. With a Short Sentence Describing What You Want to Plot, It’s Able to Generate Charts With Titles, Labels and Legends from about a Dozen Primed Examples. It Works by Compiling the Sentences to Vega-Lite (@vega_vis) by @arvindsatya1, @kanitw, @domoritz, and Jeffrey Heer. Vega Is a High-Level Grammar of Interactive Graphics Built for Exploratory Data Visualization.
Starting the Day With a Chart Building Demo. Primed GPT-3 With Chart.js Scripts to Generate the Below.
After Many Hours of Retraining My Brain to Operate in This "Priming" Approach, I Also Now Have a Sick GPT-3 Demo: English to LaTeX Equations! I’m Simultaneously Impressed by Its Coherence and Amused by Its Brittleness—Watch Me Test the Fundamental Theorem of Calculus.
GPT-3 Does The Work™ on Some Business Analyst SQL Queries given Quite a Few Examples from (https://techbeamers.com/sql-query-questions-answers-for-practice/). What’s Wildest Is That It Knows a Few Functions like SUBSTR given No Examples in That Syntax. More to Come Re: GPT-3 for Automating Data Analytics Tasks.
Automating My Job With GPT-3: Using GPT-3 Instruct to Generate Database-Ready SQL to Answer Business Questions
Who Models the Models That Model Models? An Exploration of GPT-3’s In-Context Model Fitting Ability
https://www.autoregex.xyz/
This Changes Everything. 🤯 With GPT-3, I Built a Figma Plugin to Design for You. I Call It ‘Designer’
https://web.archive.org/web/20200727092603/https://spronkoid.github.io/recycling/Recyclingisascam.html
https://bramses.notion.site/ERB-of-History-GPT-3-Bot-784e99b7fea0462f95489d74a568c4ad
Design a Role-Playing Game Using 200 Words or Less.
I Was Thinking of Using GPT-3 to Generate ‘200 Word RPGs’ (Tiny Complete Games) but I’m Getting Quite Distracted Watching It play 200 Word RPG Challenge Entries. It Didn’t Account for the Tokens but It Got the General Idea without Any Example Gameplay in the Prompt.
Recommendations For Anything You Want
Predictability and Surprise in Large Generative Models
Turns out GPT-3 Can Do Vision Too 😉 Built an Ingredient Parser: Take a Pic of Any Nutrition Label (Google to Extract Text), and GPT-3 Will Identify Ingredients, Find an Emoji, Determine If It’s Unhealthy, and Give a Definition 🤯
The Best Kept Secret about OpenAI’s GPT-3 – @AndrewMayne
Image GPT (iGPT): We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples
The Scaling Hypothesis
Evolution as Backstop for Reinforcement Learning
Decision Transformer: Reinforcement Learning via Sequence Modeling
The Aleph: Borgean Fantastic Hyperreality Revisited by GPT-3
The Unreasonable Effectiveness of Data
RNN Metadata for Mimicking Author Style
Crowdsourcing The Best GPT-2-1.5b Poetry
Passages from the Life of a Philosopher § Chapter 5: ‘Difference Engine No. 1’
GPT-2 Neural Network Poetry
Mechanical Sympathy: Understanding the Hardware Makes You a Better Developer
scaling-hypothesis#meta-learning
https://x.com/karpathy/status/1273788774422441984
https://gptprompts.wikidot.com/linguistics:word-in-context
How Many Data Points is a Prompt Worth?
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
How Can We Know What Language Models Know?
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Calibrate Before Use: Improving Few-Shot Performance of Language Models
Technology Forecasting: The Garden of Forking Paths
‘Lizardman survey constant’ directory
Sample #1353
I Asked GPT-3 about Xinjiang and It Broke…The Pro-CCP Responses Seem to Have Worse English, like including ‘the’ in ‘the Stability Maintenance’. Unnecessary Articles Are a Tic of ESL Speakers. The Topic Seems to Prompt GPT to Draw from Either Western or Chinese State Media Sources, With the Politics That Come With It.
Codex: Evaluating Large Language Models Trained on Code: Figure 14: When the Prompt Includes Subtle Bugs, Codex Tends to Produce Worse Code Than It Is Capable of Producing. This Gap Increases With Model Size. Including an Instruction to Write Correct Code Helps a Little but Does Not Fix the Problem. Even With No Examples in the Context, Codex Produces Substantially Worse Code Than It Is Capable Of.
Surprisingly Turing-Complete
Adversarial Reprogramming of Neural Networks
Adversarial Reprogramming of Text Classification Neural Networks
Deep Learning: Classics and Trends: Language Models Are Few-Shot Learners
A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation
Trading Off Diversity and Quality in Natural Language Generation
Scarecrow: A Framework for Scrutinizing Machine Text
The Curious Case of Neural Text Degeneration
Towards a Human-like Open-Domain Chatbot
Language GANs Falling Short
Six Challenges for Neural Machine Translation
Analyzing Uncertainty in Neural Machine Translation
Mamba: Linear-Time Sequence Modeling with Selective State Spaces
https://x.com/mayfer/status/1732269798934106133
https://web.media.mit.edu/~minsky/papers/Why%20programming%20is--.html
‘inner monologue (AI)’ directory
Seems to work
Teaching GPT-3 to do a brute force 'for loop' checking answers also seems to work
Program Synthesis with Large Language Models
I found that getting GPT-3 to add its own "internal monologue" in parentheses to be a helpful strategy…
How to Dramatically Improve the Reasoning Ability of GPT-3
Teaching GPT-3 to Identify Nonsense
GPT-J-6B: 6B JAX-Based Transformer
https://www.reddit.com/r/AIDungeon/comments/i1qhg0/the_dragon_ai_just_got_worse/
I’ve Noticed a Number of People Using AI Dungeon to Test GPT-3’s Abilities. While It’s a Great Way to See How GPT-3 Can Power an Interesting Application, It’s a Poor Test of GPT-3’s Abilities in General. The First Generation of Any Custom Prompt Is Actually GPT-2.
https://x.com/nickwalton00/status/1289970219855708160
Controlling GPT-3 With Logit Bias
Evaluating Different Fewshot Description Prompts on GPT-3
The ‘AI Dungeons’ Dragon Model Is Heavily Path Dependent (Testing GPT-3 on Ethics)
Aurora / AuroraPurgatio
gpt-2#improvements
‘self-attention’ directory
T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
GPT-2 Preference Learning for Music Generation § Optimization by Backprop, Not Blackbox
Progressive Generation of Long Text
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
Co-Writing Screenplays and Theatre Scripts with Language Models (Dramatron): An Evaluation by Industry Professionals
Scaling Language Models: Methods, Analysis & Insights from Training Gopher § Table A40: Conversations Can Create the Illusion of Creativity
Announcing GPT-NeoX-20B
https://gist.github.com/moyix/ca4091f16f0b5011bfa8f3f97f705a0d
LaMDA: Language Models for Dialog Applications
https://wordcraft-writers-workshop.appspot.com/stories/diana-hamilton
Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio
Help me write a poem: Instruction Tuning as a Vehicle for Collaborative Poetry Writing (CoPoet)
I Have a Joke but It’s GPT-3 Generated.
I Think I Have Had Enough of These Jokes. Dear GPT-3 I Command You to Generate All Possible Jokes of This Type. GPT-3: Your Wish Is My Command:
https://x.com/wowitsmrinal/status/1287175391040290816
Models In a Spelling Bee: Language Models Implicitly Learn the Character Composition of Tokens
CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3
DALL·E 2: Hierarchical Text-Conditional Image Generation with CLIP Latents § 7. Limitations and Risks
Character-Aware Models Improve Visual Text Rendering
What’s AGI, and Why Are AI Experts Skeptical? ChatGPT and other bots have revived conversations on artificial general intelligence. Scientists say algorithms won’t surpass you any time soon
GPT-3 vs Water Cooler Trivia participants: A Human vs Robot Showdown
There Once Was a Really Bad Poet, It Was Automated but You Didn’t Know It
https://x.com/zswitten/status/1390045960663797764
https://www.reddit.com/r/slatestarcodex/comments/1201v68/10word_quote_a_short_and_simple_failure_mode_of/jdjsx43/
LMentry: A Language Model Benchmark of Elementary Language Tasks
https://amistrongeryet.substack.com/p/can-ai-do-my-job
https://amistrongeryet.substack.com/p/gpt-4-capabilities
BPE Blues
BPE Blues+
GPT-2 Folk Music § Spaceless Model
Commas vs Integers
Math: OpenAI API Can Do Some Math out of the Gate, but Most Math It Seems It Has to Learn. Many Times, the Numbers That It Spits out Are Just Random. However, including Different Priming Prompts Can Result in Decent Results.
Analysing Mathematical Reasoning Abilities of Neural Models
Vincent-163/transformer-arithmetic
Generative Language Modeling for Automated Theorem Proving
Investigating the Limitations of the Transformers with Simple Arithmetic Tasks
Deep-speare: A Joint Neural Model of Poetic Language, Meter and Rhyme
You’re Right, Spaces Make All the Difference! Copycat Is Toast! (Except for the Last One :-) (GPT-3 Output in Red).
Can GPT-3 Make Analogies?
https://x.com/SteveMoraco/status/1293302692832411649
https://x.com/nutanc/status/1293387692755939331
It Just so Happens I Am Watching a 5-Year-Old Right Now. Here Are the Results! / / Q: If Abc Goes to Abd, What Does Pqr Go To? / A: S / / Q: If Abc Goes to Abd, What Does Ppqqrr Go To? / A: Ss / / Q: If Abc Goes to Abd, What Does Mrrjjj Go To? / A: Kkk / Q: If Abc Goes to Abd, What Does Xyz Go To? / A: Now I Know My ABCs, next Time Won’t You Sing With Me! / / Q: If Axbxcx Goes to Abc, What Does Xpxqxr Go To? / A: S / / Hope This Enlightens Someone
Generative Language Modeling for Automated Theorem Proving § Experiments
BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized Model Performance
On Seeing Through and Unseeing: The Hacker Mindset
Tokens Are Definitely Shorter Than English, but the Performance Even Worse. Getting It to Explain Its Thinking, It Clearly Can’t Tell at All Which Sentences/words Sound the Same, Which Is Odd, Since Homonyms Tend to Have the Same Letters in Russian…On the Other Hand Strength of the Model Definitely Not As Good outside of English.
Human: Did You Know That There Is No Country in Africa That Starts With the Lett...
The Bitter Lesson
BPE-Dropout: Simple and Effective Subword Regularization
Unigram LM: Byte Pair Encoding is Suboptimal for Language Model Pretraining
https://ndingwall.github.io/blog/tokenization
CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation
Charformer: Fast Character Transformers via Gradient-based Subword Tokenization
ByT5: Towards a token-free future with pre-trained byte-to-byte models
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
Towards End-to-End In-Image Neural Machine Translation
PIXEL: Language Modeling with Pixels
Perceiver: General Perception with Iterative Attention
One Big Net For Everything
face#faq
The Value Equivalence Principle for Model-Based Reinforcement Learning
RL agents Implicitly Learning Human Preferences
What Is The Morning Writing Effect?
Revealing Persona Biases in Dialogue Systems
The Basic AI Drives
The Scaling Hypothesis § It From Byte
EleutherAI Discord Server
https://platform.openai.com/terms-of-use
https://openai.com/index/gpt-4v-system-card/
https://openai.com/index/hello-gpt-4o/
Inverse Scaling Prize: Second Round Winners
How ‘Honest’ Is GPT-3?
epigram#tom-swifties
Humans Who Are Not Concentrating Are Not General Intelligences
Better Babblers
Using GPT-3 to Explain Jokes
Computing Machinery And Intelligence
GPT-3: Its Nature, Scope, Limits, and Consequences
https://www.theintrinsicperspective.com/p/the-banality-of-chatgpt
Prothalamion by Edmund Spenser
Shakespeare's Sonnets
Playing #chess With GPT-3. Built Using Chess.js, Chessboard.js and @OpenAI’s GPT-3. White Is Me, Black Is GPT-3. GPT-3 Went for the Capture First and Did a Castling Move. Amazing!
On the Sizes of OpenAI API Models: ...Ada, Babbage, Curie and Davinci Line up Closely With 350M, 1.3B, 6.7B, and 175B Respectively.
Swifties 3: The Race Is Not To The Swifty
Navy Seal Copypasta
TIFU by trying to make a salad in the microwave
https://www.reddit.com/r/NavySealCopypasta/
https://www.reddit.com/r/GPT3/comments/ukbba5/the_rickrollian_language_of_william_shakespeare/
https://www.reddit.com/r/mlscaling/comments/pa4h0c/ai_can_write_in_english_now_its_learning_other/ha36d60/
https://www.reddit.com/r/GPT3/comments/v8xsy9/artificial_neural_networks_are_making_strides/ibv9nhm/
https://x.com/MagicRealismBot/status/1273659023926022144
410 Deleted by Author
ChatGPT is fun, but it is not funny! Humor is still challenging Large Language Models
https://slatestarscratchpad.tumblr.com/post/621298010168705024/slatestarscratchpad-the-ai-projects-ive-found
Delivering Real-Time AI in the Palm of Your Hand
Politeness Transfer: A Tag and Generate Approach
rnn-metadata#success
https://x.com/MalenaOhl/status/1298816889569914881
CorentinJ/Real-Time-Voice-Cloning: Clone a Voice in 5 Seconds to Generate Arbitrary Speech in Real-Time
Rosebud AI: Build Games at the Speed of Thought. AI Powered Game Development
I Used @OpenAI GPT-3 to Convert Sentences to a Gentler and Non-Confrontational Tone. The Initial Four Input/output Pairs Are Training Examples, and Then I Tested It With Three New Inputs:
Apparently ‘what ho’ is a corruption of…
https://x.com/balzarot/status/1278213982663426048
But for Me, It Was Tuesday
To Be Fair, You Have To Have a Very High IQ to Understand Rick and Morty
Tendies Stories
https://www.reddit.com/r/rational/comments/poixjd/review_the_fall_of_doc_future/hcy7owh/
https://x.com/allgebrah/status/1282438217401339907
https://x.com/allgebrah/status/1282483394484502534
Taking the Hobbits to Isengard
They’re Taking the Hobbits to Isengard
The Ents’ Marching Song
Back From Yet Another Globetrotting Adventure, Indiana Jones Checks His Mail And Discovers That His Bid For Tenure Has Been Denied
epigram#less-known-mi6-licenses
Jukebox: We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. We’re releasing the model weights and code, along with a tool to explore the generated samples.
‘The Universe Is a Glitch’ (AI-Driven Music Video)
Artbreeder
Meditations on Moloch
[All in green went my love riding]
R-P-O-P-H-E-S-S-A-G-R (Poem + Analysis)
https://x.com/flantz/status/1286380760585375744
Greece, IV
A Psalm of Life by Henry Wadsworth Longfellow
Ode on Intimations of Immortality from Recollections of Early Childhood by William Wordsworth
Don Juan (Byron, Unsourced)/Canto the Third
https://papergains.co/pdfs/Transformer_Poetry-978-1-7341647-0-1.pdf#page=3
https://x.com/OwainEvans_UK/status/1292190171237175298
Poetry Will Not Optimize, or What Is Literature to AI? § Pg7
Nevermore. / Made With @midjourney / @images_ai ✨ / #AIart #aiartcommunity #artwork #Artists / #artist #AIartwork #generativeart #art
https://x.com/midjourney
‘CLIP’ directory
‘diffusion model’ directory
https://x.com/zoink/status/1289076947629125632
Part 1: AI that writes—GPT-3: a big step forward
https://sevensecularsermons.org/about/
https://openai.com/blog/gpt-3-edit-insert/
https://www.reddit.com/r/promptengineers/comments/thxnsx/from_gpt3s_new_edit_mode_it_can_fill_in_acrostic/
Acrostic Poem Examples: Learn to Make Your Own Name or Word Poetry With These Acrostic Poem Examples and a Handy Template
https://www.lesswrong.com/posts/W3DbNmuMJLWRtE5ny/predictions-for-gpt-n?commentId=J22o3qPeYSpc2M2ib
21st Century Chinese Poetry
https://www.reddit.com/r/MachineLearning/comments/1135tir/d_glm_130b_chineseenglish_bilingual_model/
‘instruct-tuning LLMs’ directory
Deep reinforcement learning from human preferences
The First Sally (A), Or, Trurl’s Electronic Bard
The First Sally (A), Or, Trurl’s Electronic Bard § Love And Tensor Algebra
Seduced, Shaggy Samson Snored: The Fictional Machine That Generated Poems, and the Real People Who Had to Translate Them
https://x.com/emollick/status/1626316207229132800
Looking for Grammar in All the Right Places
Interpreting GPT: the Logit Lens
Steve Omohundro on GPT-3
Dare To Be Stupid
https://tvtropes.org/pmwiki/pmwiki.php/Platform/FimfictionDotNet
Friendship Is Optimal
AI Writes My Little Pony Fanfiction (GPT-3)
Harry Potter and the Methods of Rationality
https://hpmor.com/chapter/16
http://www.simpsoncrazy.com/scripts/last-exit
This Waifu Does Not Exist § GPT-3
On the New Forcers of Conscience under the Long Parliament
https://www.reddit.com/r/GPT3/comments/ith31k/have_bad_analogies_been_tried_with_gpt3_some/
Why GPT-3 Matters
Building AGI Using Language Models
https://towardsdatascience.com/gpt-3-creative-potential-of-nlp-d5ccae16c1ab
https://www.lesswrong.com/posts/Mzrs4MSi58ujBLbBG/you-can-probably-amplify-gpt3-directly
Machinamenta: Regarding GPT-3's Faculties
Are we in an AI overhang?
OpenAI’s Latest Breakthrough Is Astonishingly Powerful, but Still Fighting Its Flaws
Computers Are Getting Closer to Passing the Turing Test
https://www.reddit.com/r/slatestarcodex/comments/hrx2id/a_collection_of_amazing_things_gpt3_has_done/fy7jl0y/
https://x.com/nikillinit/status/1289281944421711878
Starting a Business Around GPT-3 Is a Bad Idea
Laws of Tech: Commoditize Your Complement
https://www.patreon.com/posts/39864473
GPT-3: Using Fiction to Demonstrate How Prompts Impact Output Quality
https://medium.com/@marcinkraszewski/gpt-3-project-ideas-with-code-5940c275bc41
Context Stuffing
How I Used GPT-3 to Hit Hacker News Front Page 5 times in 3 Weeks
TLDR: I Go from Wanting a Machine Learning Model to Getting That Trained Model, without Actually Having a Dataset.
Want To Reduce Labeling Cost? GPT-3 Can Help
https://www.lesswrong.com/posts/4JeAoTrAuByXGw6zm/updated-how-does-gpt2-s-training-corpus-capture-internet
Thefirstaibook
The Arcadian Cantos: A Poem without an Author, 1st Draft
Generative Models are Unsupervised Predictors of Page Quality: A Colossal-Scale Study
MMLU: Measuring Massive Multitask Language Understanding
What Grades Can AI Get In College?
Musings on Typicality
Can GPT-3 Pass a Writer’s Turing Test?
https://x.com/julianharris/status/1421008325785890825
Computers Learning Humor Is No Joke
Extrapolating to Unnatural Language Processing With GPT-3’s In-Context Learning: The Good, the Bad, and the Mysterious
Pen.el
Exploring GPT-3
Post-History Is Written by the Martyrs
https://x.com/goodside
https://www.reddit.com/r/slatestarcodex/comments/hfouw5/gpt3_for_creative_fiction_poetry_dialogue_puns/
https://www.reddit.com/r/MediaSynthesis/comments/hfoulh/gpt3_for_creative_fiction_poetry_dialogue_puns/
https://www.reddit.com/r/HPMOR/comments/hgw2zq/gpt3_neural_net_completions_of_mor_chapter_16/
https://www.reddit.com/r/SubSimulatorGPT2Meta/comments/hl0x18/gwerns_post_on_gpt3_has_some_gold/
GPT-3 Fiction Samples
https://news.ycombinator.com/item?id=23722635
https://news.ycombinator.com/item?id=35633316