- Gwern.net newsletter (Substack subscription page)
- February 2021 News
- ‘newsletter’ directory
- Changelog
- Gwern Branwen Creating Essays on Gwern.net
- 2021-03-28-gwern-gwernnet-annotations-mobilepopins-darkmode.png
- 2021-04-01-gwern-gwernnet-annotations-popups-recursivewikipediapopups.png
- Multimodal Neurons in Artificial Neural Networks [CLIP]
- CLIP: Connecting Text and Images: We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision. CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognized, similar to the ‘zero-shot’ capabilities of GPT-2 and GPT-3
- The new CLIP adversarial examples are partially from the use-mention distinction: CLIP was trained to predict which caption from a list matches an image. It makes sense that a picture of an apple with a large ‘iPod’ label would be captioned with ‘iPod’, not ‘Granny Smith’!
- Apple or iPod? Easy Fix for Adversarial Textual Attacks on OpenAI’s CLIP Model!
- 2021-radford-clip-figure4-promptengineering.png
- Pixels Still Beat Text: Attacking the OpenAI CLIP Model With Text Patches and Adversarial Pixel Perturbations
- Evolving Reinforcement Learning Algorithms
- Waymo Simulated Driving Behavior in Reconstructed Fatal Crashes within an Autonomous Vehicle Operating Domain
- Replaying real life: how the Waymo Driver avoids fatal human crashes
- Debugging Reinforcement Learning Systems
- My Reinforcement Learning Learnings
- Systems that defy detailed understanding § Deep Reinforcement Learning
- ML Scaling subreddit
- SEER: Self-supervised Pretraining of Visual Features in the Wild
- Self-Supervised Learning: The Dark Matter of Intelligence
- Learning from videos to understand the world
- Contrasting Contrastive Self-Supervised Representation Learning Models
- Understanding Robustness of Transformers for Image Classification
- Vision Transformer: An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale
- https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf#page=41
- ChinAI #137: Year 3 of ChinAI: Reflections on the newsworthiness of machine translation
- The 5-Second Level
- GPT-3: Language Models are Few-Shot Learners
- DALL·E 1: Creating Images from Text: We’ve trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language
- GPT-3 Powers the Next Generation of Apps: Over 300 applications are delivering GPT-3–powered search, conversation, text completion, and other advanced AI features through our API
- A mathematical theory of semantic development in deep neural networks
- The Shape of Learning Curves: a Review: 6. Ill-Behaved Learning Curves: 6.1. Phase Transitions
- An early cell shape transition drives evolutionary expansion of the human forebrain
- Scientists discover why the human brain is so big: Molecular switch makes human organ three times larger than great apes’, study finds
- The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost
- Brainiacs, not birdbrains: Crows possess higher intelligence long thought a primarily human attribute
- Behavioral and Neuronal Representation of Numerosity Zero in the Crow
- GWAS in almost 195,000 individuals identifies 50 previously unidentified genetic loci for eye color
- Why Do Wealthy Parents Have Wealthy Children?
- Nothing in evolution makes sense except in the light of parasites
- Before a Disastrous Blight, the American Chestnut Was a Keystone Species in Eastern Forests. Could Genetic Engineering Help Bring It Back?
- Broad cross-national public support for accelerated COVID-19 vaccine trial designs
- Crystal Prison Zone: I Tried to Report Scientific Misconduct. How Did It Go?
- Crystal Prison Zone: Smell You Later
- The Revolution in Classic Tetris: How a younger generation used the Internet to master the falling blocks
- The Effectiveness of Unreasonable Small Groups
- Magic, Explanations, and Evil: The Origins and Design of Witches and Sorcerers [and replies]
- Self-blinding citizen science to explore psychedelic microdosing
- Positive expectations predict improved mental-health outcomes linked to psychedelic microdosing
- LSD microdosing RCT
- Placebo effects in cognitive training
- https://www.reddit.com/r/electronic_cigarette/comments/lkhewr/usa_vape_mail_ban_newssales_megathread/
- Can You Ever Be Too Smart for Your Own Good? Linear and Nonlinear Effects of Cognitive Ability
- Behavioral scientists and laypeople misestimate societal effects of COVID-19
- Training Working Memory for 2 Years—No Evidence of Latent Transfer to Intelligence
- Real-time dialogue between experimenters and dreamers during REM sleep
- Leroy’s elusive little people: A systematic review on lilliputian hallucinations
- A Group of Orca Outcasts Is Now Dominating an Entire Sea: Killer whales that feast on seals and hunt in small packs are thriving while their widely beloved siblings are dying out
- Estimation of the total saliva volume produced per day in five-year-old children
- The Aesthetic-Usability Effect
- They Might Never Tell You It’s Broken
- The Third User, or, Exactly Why Apple Keeps Doing Foolish Things
- Cameras and Lenses
- Lights and Shadows
- Large Batch Simulation for Deep Reinforcement Learning
- Computer Optimization: Your Computer Is Faster Than You Think
- The Incredible Boxes of Hock Wah Yeo
- Stone Walls That Stay Built: A Master Waller Shares How to Dry-Lay Stone Walls That Hold Their Ground for Centuries
- The Use and Misuse of Income Data and Extreme Poverty in the United States
- Lizardman Constant in Surveys
- Is economics performative? Option theory and the construction of derivatives markets
- Whitewood under Siege
- Coping with mortality: responses of monkeys and great apes to collapsed, inanimate and dead conspecifics