“‘Neural Net’ Tag”,2019-08-31
Bibliography for tag ai/nn, most recent first: 76 related tags, 343 annotations, & 38 links (parent).
- See Also
- Gwern
- “2019 News”, 2019
- “Research Ideas”, 2017
- “The Neural Net Tank Urban Legend”, 2011
- “Surprisingly Turing-Complete”, 2012
- “Evolution As Backstop for Reinforcement Learning”, 2018
- “ARPA and SCI: Surfing AI”, 2018
- “Computer Optimization: Your Computer Is Faster Than You Think”, 2021
- “Timing Technology: Lessons From The Media Lab”, 2012
- Links
- “Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating World”, et al 2024
- “Why Concepts Are (probably) Vectors”, et al 2024
- “Robin Hanson: Prediction Markets, the Future of Civilization, and Polymathy—#66 § Opposition to DL”, 2024
- “Memorization in Machine Learning: A Survey of Results”, et al 2024
- “Simultaneous Linear Connectivity of Neural Networks modulo Permutation”, et al 2024
- “The Boundary of Neural Network Trainability Is Fractal”, Sohl-Dickstein 2024
- “Tweets to Citations: Unveiling the Impact of Social Media Influencers on AI Research Visibility”, et al 2024
- “Outliers With Opposing Signals Have an Outsized Effect on Neural Network Optimization”, 2023
- “Proving Linear Mode Connectivity of Neural Networks via Optimal Transport”, et al 2023
- “How Deep Is the Brain? The Shallow Brain Hypothesis”, et al 2023
- “Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture”, et al 2023
- “Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition”, et al 2023
- “Efficient Video and Audio Processing With Loihi 2”, et al 2023
- “Latent State Models of Training Dynamics”, et al 2023
- “Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity”, et al 2023
- “Combining Human Expertise With Artificial Intelligence: Experimental Evidence from Radiology”, et al 2023
- “The Architecture of a Biologically Plausible Language Organ”, 2023
- “Adam Accumulation to Reduce Memory Footprints of Both Activations and Gradients for Large-Scale DNN Training”, et al 2023
- “Protecting Society from AI Misuse: When Are Restrictions on Capabilities Warranted?”, 2023
- “Symbolic Discovery of Optimization Algorithms”, et al 2023
- “The Forward-Forward Algorithm: Some Preliminary Investigations”, 2022
- “Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability”, et al 2022
- “Do Current Multi-Task Optimization Methods in Deep Learning Even Help?”, et al 2022
- “Selective Neutralization and Deterring of Cockroaches With Laser Automated by Machine Vision”, et al 2022
- “Git Re-Basin: Merging Models modulo Permutation Symmetries”, et al 2022
- “Learning With Differentiable Algorithms”, 2022
- “Normalized Activation Function: Toward Better Convergence”, 2022
- “Bugs in the Data: How ImageNet Misrepresents Biodiversity”, 2022
- “The Value of Out-Of-Distribution Data”, et al 2022
- “AniWho: A Quick and Accurate Way to Classify Anime Character Faces in Images”, et al 2022
- “Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training”, et al 2022
- “Adaptive Gradient Methods at the Edge of Stability”, et al 2022
- “Learning With Combinatorial Optimization Layers: a Probabilistic Approach”, et al 2022
- “What Do We Maximize in Self-Supervised Learning?”, Shwartz-Ziv et al 2022
- “Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit”, et al 2022
- “High-Performing Neural Network Models of Visual Cortex Benefit from High Latent Dimensionality”, 2022
- “Perceptein: A Synthetic Protein-Level Neural Network in Mammalian Cells”, et al 2022
- “Predicting Word Learning in Children from the Performance of Computer Vision Systems”, et al 2022
- “Wav2Vec-Aug: Improved Self-Supervised Training With Limited Data”, et al 2022
- “The Slingshot Mechanism: An Empirical Study of Adaptive Optimizers and the Grokking Phenomenon”, et al 2022
- “An Improved One Millisecond Mobile Backbone”, et al 2022
- “Greedy Bayesian Posterior Approximation With Deep Ensembles”, 2022
- “Generating Scientific Claims for Zero-Shot Scientific Fact Checking”, et al 2022
- “Deep Lexical Hypothesis: Identifying Personality Structure in Natural Language”, 2022
- “Gradients without Backpropagation”, et al 2022
- “Towards Scaling Difference Target Propagation by Learning Backprop Targets”, et al 2022
- “M5 Accuracy Competition: Results, Findings, and Conclusions”, et al 2022
- “Formal Analysis of Art: Proxy Learning of Visual Concepts from Style Through Language Models”, et al 2022
- “Silent Bugs in Deep Learning Frameworks: An Empirical Study of Keras and TensorFlow”, et al 2021
- “Artificial Intelligence ‘Sees’ Split Electrons”, 2021
- “Pushing the Frontiers of Density Functionals by Solving the Fractional Electron Problem”, et al 2021
- “Word Golf”, 2021
- “Deep Learning Enables Genetic Analysis of the Human Thoracic Aorta”, et al 2021
- “Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks”, et al 2021
- “Achieving Human Parity on Visual Question Answering”, et al 2021
- “BC-Z: Zero-Shot Task Generalization With Robotic Imitation Learning”, et al 2021
- “Learning in High Dimension Always Amounts to Extrapolation”, et al 2021
- “The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks”, et al 2021
- “The Structure of Genotype-Phenotype Maps Makes Fitness Landscapes Navigable”, et al 2021
- “Deep Neural Networks and Tabular Data: A Survey”, et al 2021
- “Learning through Atypical “Phase Transitions” in Overparameterized Neural Networks”, et al 2021
- “RAFT: A Real-World Few-Shot Text Classification Benchmark”, et al 2021
- “PPT: Pre-Trained Prompt Tuning for Few-Shot Learning”, et al 2021
- “DART: Differentiable Prompt Makes Pre-Trained Language Models Better Few-Shot Learners”, et al 2021
- “ETA Prediction With Graph Neural Networks in Google Maps”, Derrow-Pinion et al 2021
- “Predictive Coding: a Theoretical and Experimental Review”, et al 2021
- “A Connectivity-Constrained Computational Account of Topographic Organization in Primate High-Level Visual Cortex”, et al 2021
- “A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers”, et al 2021
- “Coarse-To-Fine Q-Attention: Efficient Learning for Visual Robotic Manipulation via Discretisation”, et al 2021
- “Randomness In Neural Network Training: Characterizing The Impact of Tooling”, et al 2021
- “Revisiting Deep Learning Models for Tabular Data”, et al 2021
- “BEiT: BERT Pre-Training of Image Transformers”, et al 2021
- “Revisiting Model Stitching to Compare Neural Representations”, et al 2021
- “Artificial Intelligence in China’s Revolution in Military Affairs”, 2021
- “The Geometry of Concept Learning”, et al 2021
- “VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning”, et al 2021
- “The Modern Mathematics of Deep Learning”, et al 2021
- “Understanding by Understanding Not: Modeling Negation in Language Models”, et al 2021
- “Entailment As Few-Shot Learner”, et al 2021
- “PAWS: Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments With Support Samples”, et al 2021
- “Epistemic Autonomy: Self-Supervised Learning in the Mammalian Hippocampus”, Santos-Pata et al 2021
- “Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization”, et al 2021
- “Contrasting Contrastive Self-Supervised Representation Learning Models”, et al 2021
- “Characterizing and Improving the Robustness of Self-Supervised Learning through Background Augmentations”, et al 2021
- “GWAS in Almost 195,000 Individuals Identifies 50 Previously Unidentified Genetic Loci for Eye Color”, et al 2021
- “BERTese: Learning to Speak to BERT”, et al 2021
- “Predictive Coding Can Do Exact Backpropagation on Any Neural Network”, et al 2021
- “Barlow Twins: Self-Supervised Learning via Redundancy Reduction”, et al 2021
- “WIT: Wikipedia-Based Image Text Dataset for Multimodal Multilingual Machine Learning”, et al 2021
- “The Inverse Variance–flatness Relation in Stochastic Gradient Descent Is Critical for Finding Flat Minima”, 2021
- “Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability”, et al 2021
- “Rip Van Winkle’s Razor: A Simple Estimate of Overfit to Test Data”, 2021
- “Image Completion via Inference in Deep Generative Models”, et al 2021
- “Contrastive Learning Inverts the Data Generating Process”, et al 2021
- “DirectPred: Understanding Self-Supervised Learning Dynamics without Contrastive Pairs”, et al 2021
- “MLGO: a Machine Learning Guided Compiler Optimizations Framework”, et al 2021
- “Facial Recognition Technology Can Expose Political Orientation from Naturalistic Facial Images”, 2021
- “Solving Mixed Integer Programs Using Neural Networks”, et al 2020
- “Sixteen Facial Expressions Occur in Similar Contexts Worldwide”, 2020
- “PiRank: Learning To Rank via Differentiable Sorting”, et al 2020
- “Real-Time Synthesis of Imagined Speech Processes from Minimally Invasive Recordings of Neural Activity”, et al 2020
- “Generalization Bounds for Deep Learning”, Valle-Pérez 2020
- “Selective Eye-Gaze Augmentation To Enhance Imitation Learning In Atari Games”, et al 2020
- “SimSiam: Exploring Simple Siamese Representation Learning”, 2020
- “Recent Advances in Neurotechnologies With Broad Potential for Neuroscience Research”, Vázquez-Guardado et al 2020
- “Voting for Authorship Attribution Applied to Dark Web Data”, 2020
- “Twenty Years Beyond the Turing Test: Moving Beyond the Human Judges Too”, Hernández-Orallo 2020
- “Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding”, et al 2020
- “Do Wide and Deep Networks Learn the Same Things? Uncovering How Neural Network Representations Vary With Width and Depth”, et al 2020
- “Guys and Dolls”, 2020
- “Open-Domain Question Answering Goes Conversational via Question Rewriting”, et al 2020
- “Digital Voicing of Silent Speech”, 2020
- “Rank-Smoothed Pairwise Learning In Perceptual Quality Assessment”, et al 2020
- “Implicit Gradient Regularization”, 2020
- “Large Associative Memory Problem in Neurobiology and Machine Learning”, 2020
- “AdapterHub: A Framework for Adapting Transformers”, et al 2020
- “Identifying Regulatory Elements via Deep Learning”, et al 2020
- “Is SGD a Bayesian Sampler? Well, Almost”, et al 2020
- “Bootstrap Your Own Latent (BYOL): A New Approach to Self-Supervised Learning”, et al 2020
- “SCAN: Learning to Classify Images without Labels”, et al 2020
- “Politeness Transfer: A Tag and Generate Approach”, et al 2020
- “Supervised Contrastive Learning”, et al 2020
- “Backpropagation and the Brain”, et al 2020
- “Can You Put It All Together: Evaluating Conversational Agents’ Ability to Blend Skills”, et al 2020
- “Topology of Deep Neural Networks”, et al 2020
- “Improved Baselines With Momentum Contrastive Learning”, et al 2020
- “The Large Learning Rate Phase of Deep Learning: the Catapult Mechanism”, et al 2020
- “Fast Differentiable Sorting and Ranking”, et al 2020
- “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence”, 2020
- “Quantifying Independently Reproducible Machine Learning”, 2020
- “The Secret History of Facial Recognition: Sixty Years Ago, a Sharecropper’s Son Invented a Technology to Identify Faces. Then the Record of His Role All but Vanished. Who Was Woody Bledsoe, and Who Was He Working For?”, 2020
- “Can the Brain Do Backpropagation? -Exact Implementation of Backpropagation in Predictive Coding Networks”, et al 2020
- “Learning Neural Activations”, 2019
- “2019 AI Alignment Literature Review and Charity Comparison”, 2019
- “Libri-Light: A Benchmark for ASR With Limited or No Supervision”, et al 2019
- “Connecting Vision and Language With Localized Narratives”, Pont-Tuset et al 2019
- “12-In-1: Multi-Task Vision and Language Representation Learning”, et al 2019
- “A Deep Learning Framework for Neuroscience”, et al 2019
- “Machine Learning for Scent: Learning Generalizable Perceptual Representations of Small Molecules”, Sanchez-Lengeling et al 2019
- “KuroNet: Pre-Modern Japanese Kuzushiji Character Recognition With Deep Learning”, et al 2019
- “Approximate Inference in Discrete Distributions With Monte Carlo Tree Search and Value Functions”, et al 2019
- “Best Practices for the Human Evaluation of Automatically Generated Text”, et al 2019
- “RandAugment: Practical Automated Data Augmentation With a Reduced Search Space”, et al 2019
- “Large-Scale Pretraining for Neural Machine Translation With Tens of Billions of Sentence Pairs”, et al 2019
- “ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations”, et al 2019
- “Engineering a Less Artificial Intelligence”, et al 2019
- “Neural Networks Are a Priori Biased towards Boolean Functions With Low Entropy”, et al 2019
- “Simple, Scalable Adaptation for Neural Machine Translation”, et al 2019
- “Emergent Tool Use From Multi-Agent Autocurricula”, et al 2019
- “A Step Toward Quantifying Independently Reproducible Machine Learning Research”, 2019
- “Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform”, et al 2019b
- “Can One Concurrently Record Electrical Spikes from Every Neuron in a Mammalian Brain?”, et al 2019
- “Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges”, et al 2019
- “Deep Set Prediction Networks”, et al 2019
- “Optimizing Color for Camouflage and Visibility Using Deep Learning: the Effects of the Environment and the Observer’s Visual System”, et al 2019
- “Speech2Face: Learning the Face Behind a Voice”, et al 2019
- “SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems”, et al 2019
- “Universal Quantum Control through Deep Reinforcement Learning”, et al 2019
- “Analysing Mathematical Reasoning Abilities of Neural Models”, et al 2019
- “Reinforcement Learning for Recommender Systems: A Case Study on Youtube”, 2019
- “Stochastic Optimization of Sorting Networks via Continuous Relaxations”, et al 2019
- “Surprises in High-Dimensional Ridgeless Least Squares Interpolation”, et al 2019
- “DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs”, et al 2019
- “Theories of Error Back-Propagation in the Brain”, 2019
- “A Replication Study: Machine Learning Models Are Capable of Predicting Sexual Orientation From Facial Images”, 2019
- “Unmasking Clever Hans Predictors and Assessing What Machines Really Learn”, et al 2019
- “What Makes a Good Conversation? How Controllable Attributes Affect Human Judgments”, et al 2019
- “The Evolved Transformer”, et al 2019
- “Forecasting Transformative AI: An Expert Survey”, et al 2019
- “Human Few-Shot Learning of Compositional Instructions”, et al 2019
- “Evaluation and Accurate Diagnoses of Pediatric Diseases Using Artificial Intelligence”, et al 2019
- “Why Is There No Successful Whole Brain Simulation (Yet)?”, 2019
- “High-Performance Medicine: the Convergence of Human and Artificial Intelligence”, 2019
- “Identifying Facial Phenotypes of Genetic Disorders Using Deep Learning”, et al 2019
- “Reinventing the Wheel: Discovering the Optimal Rolling Shape With PyTorch”, 2019
- “An Empirical Study of Example Forgetting during Deep Neural Network Learning”, et al 2018
- “CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge”, et al 2018
- “Depth With Nonlinearity Creates No Bad Local Minima in ResNets”, 2018
- “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding”, et al 2018
- “Interpretable Textual Neuron Representations for NLP”, et al 2018
- “Searching for Efficient Multi-Scale Architectures for Dense Image Prediction”, et al 2018
- “Machine Learning to Predict Osteoporotic Fracture Risk from Genotypes”, et al 2018
- “Accelerated Reinforcement Learning for Sentence Generation by Vocabulary Prediction”, 2018
- “Searching Toward Pareto-Optimal Device-Aware Neural Architectures”, et al 2018
- “A Study of Reinforcement Learning for Neural Machine Translation”, et al 2018
- “Modeling Visual Context Is Key to Augmenting Object Detection Datasets”, et al 2018
- “Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search”, et al 2018
- “Automatically Composing Representation Transformations As a Means for Generalization”, et al 2018
- “Differentiable Learning-To-Normalize via Switchable Normalization”, et al 2018
- “On the Spectral Bias of Neural Networks”, et al 2018
- “Neural Tangent Kernel: Convergence and Generalization in Neural Networks”, et al 2018
- “Meta-Learning Transferable Active Learning Policies by Deep Reinforcement Learning”, et al 2018
- “Do CIFAR-10 Classifiers Generalize to CIFAR-10?”, et al 2018
- “Zero-Shot Dual Machine Translation”, et al 2018
- “Do Better ImageNet Models Transfer Better?”, et al 2018
- “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding”, et al 2018
- “Adafactor: Adaptive Learning Rates With Sublinear Memory Cost”, 2018
- “Averaging Weights Leads to Wider Optima and Better Generalization”, et al 2018
- “SentEval: An Evaluation Toolkit for Universal Sentence Representations”, 2018
- “Think You Have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge”, et al 2018
- “Analyzing Uncertainty in Neural Machine Translation”, et al 2018
- “End-To-End Deep Image Reconstruction from Human Brain Activity”, et al 2018
- “Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari”, et al 2018
- “SignSGD: Compressed Optimization for Non-Convex Problems”, et al 2018
- “Differentiable Dynamic Programming for Structured Prediction and Attention”, 2018
- “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction”, et al 2018
- “Semantic Projection: Recovering Human Knowledge of Multiple, Distinct Object Features from Word Embeddings”, et al 2018
- “Panoptic Segmentation”, et al 2018
- “Clinically Applicable Deep Learning for Diagnosis and Referral in Retinal Disease”, et al 2018
- “Prediction of Cardiovascular Risk Factors from Retinal Fundus Photographs via Deep Learning”, et al 2018
- “Three-Dimensional Visualization and a Deep-Learning Model Reveal Complex Fungal Parasite Networks in Behaviorally Manipulated Ants”, et al 2017
- “Decoupled Weight Decay Regularization”, 2017
- “Automatic Differentiation in PyTorch”, et al 2017
- “Rethinking Generalization Requires Revisiting Old Ideas: Statistical Mechanics Approaches and Complex Learning Behavior”, 2017
- “Mixup: Beyond Empirical Risk Minimization”, et al 2017
- “Malware Detection by Eating a Whole EXE”, et al 2017
- “AlphaGo Zero: Mastering the Game of Go without Human Knowledge”, et al 2017
- “Swish: Searching for Activation Functions”, et al 2017
- “Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates”, 2017
- “Cut, Paste and Learn: Surprisingly Easy Synthesis for Instance Detection”, et al 2017
- “Emergence of Locomotion Behaviors in Rich Environments”, et al 2017
- “The Persistence and Transience of Memory”, 2017
- “Verb Physics: Relative Physical Knowledge of Actions and Objects”, 2017
- “Driver Identification Using Automobile Sensor Data from a Single Turn”, et al 2017
- “StreetStyle: Exploring World-Wide Clothing Styles from Millions of Photos”, et al 2017
- “Deep Voice 2: Multi-Speaker Neural Text-To-Speech”, et al 2017
- “WebVision Challenge: Visual Learning and Understanding With Web Data”, et al 2017
- “Inferring and Executing Programs for Visual Reasoning”, et al 2017
- “Visual Attribute Transfer through Deep Image Analogy”, et al 2017
- “On Weight Initialization in Deep Neural Networks”, 2017
- “A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference”, et al 2017
- “RACE: Large-Scale ReAding Comprehension Dataset From Examinations”, et al 2017
- “Data-Efficient Deep Reinforcement Learning for Dexterous Manipulation”, et al 2017
- “Prototypical Networks for Few-Shot Learning”, et al 2017
- “Meta Networks”, 2017
- “Understanding Synthetic Gradients and Decoupled Neural Interfaces”, et al 2017
- “Adaptive Neural Networks for Efficient Inference”, et al 2017
- “Deep Voice: Real-Time Neural Text-To-Speech”, et al 2017
- “Machine Learning Predicts Laboratory Earthquakes”, et al 2017
- “Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks”, et al 2017
- “Dermatologist-Level Classification of Skin Cancer With Deep Neural Networks”, et al 2017
- “Child Machines”, 2017
- “Machine Learning for Systems and Systems for Machine Learning”, 2017
- “Feedback Networks”, et al 2016
- “CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning”, et al 2016
- “Towards Information-Seeking Agents”, et al 2016
- “Spatially Adaptive Computation Time for Residual Networks”, et al 2016
- “Deep Learning Reinvents the Hearing Aid: Finally, Wearers of Hearing Aids Can Pick out a Voice in a Crowded Room”, 2016b
- “MS MARCO: A Human Generated MAchine Reading COmprehension Dataset”, et al 2016
- “Learning to Reinforcement Learn”, et al 2016
- “Lip Reading Sentences in the Wild”, et al 2016
- “Could a Neuroscientist Understand a Microprocessor?”, 2016
- “A Neural Network Playground”, 2016
- “Homotopy Analysis for Tensor PCA”, et al 2016
- “Why Does Deep and Cheap Learning Work so Well?”, et al 2016
- “SGDR: Stochastic Gradient Descent With Warm Restarts”, 2016
- “Concrete Problems in AI Safety”, et al 2016
- “SQuAD: 100,000+ Questions for Machine Comprehension of Text”, et al 2016
- “Matching Networks for One Shot Learning”, et al 2016
- “Convolutional Sketch Inversion”, et al 2016
- “Unifying Count-Based Exploration and Intrinsic Motivation”, et al 2016
- “Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks”, et al 2016
- “Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity”, et al 2016
- “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier”, et al 2016
- “Mastering the Game of Go With Deep Neural Networks and Tree Search”, et al 2016
- “Learning to Compose Neural Networks for Question Answering”, et al 2016
- “How a Japanese Cucumber Farmer Is Using Deep Learning and TensorFlow”, 2016
- “Random Gradient-Free Minimization of Convex Functions”, 2015
- “Data-Dependent Initializations of Convolutional Neural Networks”, et al 2015
- “Online Batch Selection for Faster Training of Neural Networks”, 2015
- “Neural Module Networks”, et al 2015
- “Deep DPG (DDPG): Continuous Control With Deep Reinforcement Learning”, et al 2015
- “A Neural Algorithm of Artistic Style”, et al 2015
- “VQA: Visual Question Answering”, et al 2015
- “Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks”, et al 2015
- “Probabilistic Line Searches for Stochastic Optimization”, 2015
- “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification”, et al 2015
- Neural Networks and Deep Learning, 2015
- “Neural Networks and Deep Learning § Ch6 Deep Learning”, 2015
- “Qualitatively Characterizing Neural Network Optimization Problems”, et al 2014
- “Freeze-Thaw Bayesian Optimization”, et al 2014
- “Microsoft COCO: Common Objects in Context”, et al 2014
- “Deep Learning in Neural Networks: An Overview”, 2014
- “Neural Networks, Manifolds, and Topology”, 2014
- “Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks”, et al 2013
- “Distributed Representations of Words and Phrases and Their Compositionality”, et al 2013
- “Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science”, 2013
- “Deep Gaussian Processes”, 2012
- “Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting”, et al 2012
- “HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent”, et al 2011
- “Large-Scale Deep Unsupervised Learning Using Graphics Processors”, et al 2009
- “A Free Energy Principle for the Brain”, et al 2006
- “Understanding the Nature of the General Factor of Intelligence: The Role of Individual Differences in Neural Plasticity As an Explanatory Mechanism”, 2002
- “Starfish § Bulrushes”, 1999
- “Exponentiated Gradient versus Gradient Descent for Linear Predictors”, 1997
- “Optimality in Biological and Artificial Networks?”, 1997
- “A Sociological Study of the Official History of the Perceptrons Controversy”, 1996
- “Turing Patterns in CNNs, I: Once over Lightly”, et al 1995
- “Learning and Generalization in a Two-Layer Neural Network: The Role of the Vapnik-Chervonenkis Dimension”, 1994
- “A Sociological Study of the Official History of the Perceptrons Controversy [1993]”, 1993
- “The Statistical Mechanics of Learning a Rule”, et al 1993
- “On Learning the Past Tenses of English Verbs”, Rumelhart & McClelland 1993
- “Statistical Mechanics of Learning from Examples”, et al 1992
- “Memorization Without Generalization in a Multilayered Neural Network”, et al 1992
- “Symbolic and Neural Learning Algorithms: An Experimental Comparison”, et al 1991
- “Backpropagation Learning For Multilayer Feed-Forward Neural Networks Using The Conjugate Gradient Method”, et al 1991
- “Artificial Neural Networks, Back Propagation, and the Kelley-Bryson Gradient Procedure”, 1990
- “Exhaustive Learning”, et al 1990
- “International Joint Conference on Neural Networks, January 15–19, 1990: Volume 1: Theory Track, Neural and Cognitive Sciences Track”, 1990
- “International Joint Conference on Neural Networks, January 15–19, 1990: Volume 2: Applications Track”, 1990
- “Explanatory Coherence”, 1989
- “Parallel Distributed Processing: Implications for Cognition and Development”, 1989
- “Cellular Neural Networks: Theory”, 1988b
- “Cellular Neural Networks: Applications”, 1988
- “The Brain As Template”, 1988
- “Observation of Phase Transitions in Spreading Activation Networks”, et al 1987
- “Learning Representations by Backpropagating Errors”, et al 1986b
- “Storing Infinite Numbers of Patterns in a Spin-Glass Model of Neural Networks”, et al 1985
- “Toward An Interactive Model Of Reading”, 1985
- “Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences”, 1974
- “Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms”, 1962
- “Speculations on Perceptrons and Other Automata”, 1959
- “Pandemonium: A Paradigm for Learning”, 1959
- “Some AI Koans § http://www.catb.org/esr/jargon/html/koans.html#id3141241”, 2024
- “Some AI Koans”, 2024
- “The Age of Em, A Book”, 2024
- “
gsutil Config: Obtain Credentials and Create Configuration File”, 2024- “Why Momentum Really Works”
- “Code for Reproducing Results in “Glow: Generative Flow With Invertible 1×1 Convolutions””
- “Differentiable Finite State Machines”
- “About Sam Greydanus”, 2024
- “Contrastive Representation Learning”
- “The Internet’s AI Slop Problem Is Only Going to Get Worse”
- “Glow: Better Reversible Generative Models”
- “Differentiable Programming from Scratch”
- “Deep Reinforcement Learning Doesn’t Work Yet”
- “[Commonsense Media Survey on US Generative Media Use]”
- “Gourmand Cat Fence”
- “Simple versus Short: Higher-Order Degeneracy and Error-Correction”
- “Inferring Neural Activity Before Plasticity As a Foundation for Learning beyond Backpropagation”
- “Reddit: Reinforcement Learning Subreddit”, 2024
- “AI and the Indian Election”, 2024
- “Lip Reading Sentences in the Wild [Video]”
- Wikipedia
- Miscellaneous
- Bibliography