“CNN” Tag (2019-09-10)
Bibliography for tag ai/nn/cnn, most recent first: 4 related tags, 376 annotations, & 29 links (parent).
- See Also
- Gwern
- Links
- “Convolutional Differentiable Logic Gate Networks”, et al 2024
- “MaskBit: Embedding-Free Image Generation via Bit Tokens”, et al 2024
- “Quantum Convolutional Neural Networks Are (Effectively) Classically Simulable”, et al 2024
- “Three-Dimension Animation Character Design Based on Probability Genetic Algorithm”, 2024
- “Investigating Learning-Independent Abstract Reasoning in Artificial Neural Networks”, 2024
- “Grokfast: Accelerated Grokking by Amplifying Slow Gradients”, et al 2024
- “A Rotation and a Translation Suffice: Fooling CNNs With Simple Transformations”, et al 2024
- “Neural Networks Learn Statistics of Increasing Complexity”, et al 2024
- “Machine Learning Reveals the Control Mechanics of an Insect Wing Hinge”, et al 2024
- “Supplementary Materials for Grounded Language Acquisition through the Eyes and Ears of a Single Child”, 2024
- “Grounded Language Acquisition through the Eyes and Ears of a Single Child”, et al 2024
- “Machine Learning As a Tool for Hypothesis Generation”, 2024
- “Multi Visual Feature Fusion Based Fog Visibility Estimation for Expressway Surveillance Using Deep Learning Network”, et al 2023
- “Auditing the Inference Processes of Medical-Image Classifiers by Leveraging Generative AI and the Expertise of Physicians”, et al 2023
- “Development of Deep Ensembles to Screen for Autism and Symptom Severity Using Retinal Photographs”, et al 2023
- “May the Noise Be With You: Adversarial Training without Adversarial Examples”, et al 2023
- “Are Vision Transformers More Data Hungry Than Newborn Visual Systems?”, et al 2023
- “UniRepLKNet: A Universal Perception Large-Kernel ConvNet for Audio, Video, Point Cloud, Time-Series and Image Recognition”, et al 2023
- “The Possibility of Making $138,000 from Shredded Banknote Pieces Using Computer Vision”, 2023
- “ConvNets Match Vision Transformers at Scale”, et al 2023
- “Interpret Vision Transformers As ConvNets With Dynamic Convolutions”, et al 2023
- “Artificial Intelligence-Supported Screen Reading versus Standard Double Reading in the Mammography Screening With Artificial Intelligence Trial (MASAI): a Clinical Safety Analysis of a Randomised, Controlled, Non-Inferiority, Single-Blinded, Screening Accuracy Study”, et al 2023
- “Hand-Drawn Anime Line Drawing Colorization of Faces With Texture Details”, et al 2023
- “High-Quality Synthetic Character Image Extraction via Distortion Recognition”, et al 2023
- “Loss of Plasticity in Deep Continual Learning (Continual Backpropagation)”, et al 2023
- “Neural Networks Trained With SGD Learn Distributions of Increasing Complexity”, et al 2023
- “Rosetta Neurons: Mining the Common Units in a Model Zoo”, et al 2023
- “Improving Neural Network Representations Using Human Similarity Judgments”, et al 2023
- “U-Net CNN in APL: Exploring Zero-Framework, Zero-Library Machine Learning”, Hsu & 2023
- “VanillaNet: the Power of Minimalism in Deep Learning”, et al 2023
- “Multi-Label Classification in Anime Illustrations Based on Hierarchical Attribute Relationships”, et al 2023
- “ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification”, et al 2023
- “Hierarchical Multi-Label Attribute Classification With Graph Convolutional Networks on Anime Illustration”, et al 2023
- “Loss Landscapes Are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent”, et al 2023
- “Adding Conditional Control to Text-To-Image Diffusion Models”, et al 2023
- “Pruning Compact ConvNets for Efficient Inference”, et al 2023
- “Does Progress on ImageNet Transfer to Real-World Datasets?”, et al 2023
- “EarSpy: Spying Caller Speech and Identity through Tiny Vibrations of Smartphone Ear Speakers”, et al 2022
- “Pretraining Without Attention”, et al 2022
- “What Do Vision Transformers Learn? A Visual Exploration”, et al 2022
- “A 64-Core Mixed-Signal In-Memory Compute Chip Based on Phase-Change Memory for Deep Neural Network Inference”, et al 2022
- “Simulated Automated Facial Recognition Systems As Decision-Aids in Forensic Face Matching Tasks”, 2022
- “Interpreting Neural Networks through the Polytope Lens”, et al 2022
- “Predicting Sex, Age, General Cognition and Mental Health With Machine Learning on Brain Structural Connectomes”, et al 2022
- “The Power of Ensembles for Active Learning in Image Classification”, et al 2022
- “GCN-Based Multi-Modal Multi-Label Attribute Classification in Anime Illustration Using Domain-Specific Semantic Features”, et al 2022
- “The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes”, et al 2022
- “Understanding the Covariance Structure of Convolutional Filters”, et al 2022
- “VICRegL: Self-Supervised Learning of Local Visual Features”, et al 2022
- “Omnigrok: Grokking Beyond Algorithmic Data”, et al 2022
- “
g.pt: Learning to Learn With Generative Models of Neural Network Checkpoints”, et al 2022- “FastSiam: Resource-Efficient Self-Supervised Learning on a Single GPU”, et al 2022
- “Evaluation of Transfer Learning Methods for Detecting Alzheimer’s Disease With Brain MRI”, et al 2022
- “Reassessing Hierarchical Correspondences between Brain and Deep Networks through Direct Interface”, 2022
- “Timesweeper: Accurately Identifying Selective Sweeps Using Population Genomic Time Series”, 2022
- “RHO-LOSS: Prioritized Training on Points That Are Learnable, Worth Learning, and Not Yet Learnt”, et al 2022
- “BigVGAN: A Universal Neural Vocoder With Large-Scale Training”, et al 2022
- “Studying Growth With Neural Cellular Automata”, 2022
- “Continual Pre-Training Mitigates Forgetting in Language and Vision”, et al 2022
- “Scaling Up Your Kernels to 31×31: Revisiting Large Kernel Design in CNNs (RepLKNet)”, et al 2022
- “Democratizing Contrastive Language-Image Pre-Training: A CLIP Benchmark of Data, Model, and Supervision”, et al 2022
- “Variational Autoencoders Without the Variation”, et al 2022
- “On the Effectiveness of Dataset Watermarking in Adversarial Settings”, 2022
- “General Cyclical Training of Neural Networks”, 2022
- “Approximating CNNs With Bag-Of-Local-Features Models Works Surprisingly Well on ImageNet”, et al 2022
- “Variational Neural Cellular Automata”, et al 2022
- “ConvMixer: Patches Are All You Need?”, 2022
- “HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning”, et al 2022
- “ConvNeXt: A ConvNet for the 2020s”, et al 2022
- “An Empirical Investigation of the Role of Pre-Training in Lifelong Learning”, et al 2021
- “Noether Networks: Meta-Learning Useful Conserved Quantities”, et al 2021
- “AugMax: Adversarial Composition of Random Augmentations for Robust Training”, et al 2021
- “The Efficiency Misnomer”, et al 2021
- “Logical Activation Functions: Logit-Space Equivalents of Probabilistic Boolean Operators”, et al 2021
- “Evaluating Loss Functions for Illustration Super-Resolution Neural Networks”, 2021
- “TWIST: Self-Supervised Learning by Estimating Twin Class Distributions”, et al 2021
- “Deep Learning Models of Cognitive Processes Constrained by Human Brain Connectomes”, et al 2021
- “Decoupled Contrastive Learning”, et al 2021
- “Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-Training Paradigm (DeCLIP)”, et al 2021
- “Mining for Strong Gravitational Lenses With Self-Supervised Learning”, et al 2021
- “
THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks”, 2021- “A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP”, et al 2021
- “Predicting Phenotypes from Genetic, Environment, Management, and Historical Data Using CNNs”, et al 2021
- “Do Vision Transformers See Like Convolutional Neural Networks?”, et al 2021
- “Dataset Distillation With Infinitely Wide Convolutional Networks”, et al 2021
- “Neuroprosthesis for Decoding Speech in a Paralyzed Person With Anarthria”, et al 2021
- “Graph Jigsaw Learning for Cartoon Face Recognition”, et al 2021
- “Prediction Depth: Deep Learning Through the Lens of Example Difficulty”, et al 2021
- “Revisiting the Calibration of Modern Neural Networks”, et al 2021
- “Partial Success in Closing the Gap between Human and Machine Vision”, et al 2021
- “CoAtNet: Marrying Convolution and Attention for All Data Sizes”, et al 2021
- “Effect of Pre-Training Scale on Intra/Inter-Domain Full and Few-Shot Transfer Learning for Natural and Medical X-Ray Chest Images”, 2021
- “Embracing New Techniques in Deep Learning for Estimating Image Memorability”, 2021
- “Predicting Sex from Retinal Fundus Photographs Using Automated Deep Learning”, et al 2021
- “Rethinking and Improving the Robustness of Image Style Transfer”, et al 2021
- “Rip Van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis”, 2021
- “The Surprising Impact of Mask-Head Architecture on Novel Class Segmentation”, et al 2021
- “Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks”, et al 2021
- “ConViT: Improving Vision Transformers With Soft Convolutional Inductive Biases”, d’ et al 2021
- “Learning from Videos to Understand the World”, et al 2021
- “Fast and Accurate Model Scaling”, et al 2021
- “Transfer of Fully Convolutional Policy-Value Networks Between Games and Game Variants”, et al 2021
- “Momentum Residual Neural Networks”, et al 2021
- “Hiding Data Hiding”, et al 2021
- “Explaining Neural Scaling Laws”, et al 2021
- “NFNet: High-Performance Large-Scale Image Recognition Without Normalization”, et al 2021
- “Brain2Pix: Fully Convolutional Naturalistic Video Reconstruction from Brain Activity”, et al 2021
- “Words As a Window: Using Word Embeddings to Explore the Learned Representations of Convolutional Neural Networks”, et al 2021
- “E(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials”, et al 2021
- “Meta Pseudo Labels”, et al 2021
- “Is MLP-Mixer a CNN in Disguise? As Part of This Blog Post, We Look at the MLP Mixer Architecture in Detail and Also Understand Why It Is Not Considered Convolution Free.”
- “Converting Tabular Data into Images for Deep Learning With Convolutional Neural Networks”, et al 2021
- “Taming Transformers for High-Resolution Image Synthesis”, et al 2020
- “Ensemble Learning of Convolutional Neural Network, Support Vector Machine, and Best Linear Unbiased Predictor for Brain Age Prediction: ARAMIS Contribution to the Predictive Analytics Competition 2019 Challenge”, Couvy- et al 2020
- “Scaling down Deep Learning”, 2020
- “Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images”, 2020
- “Understanding RL Vision: With Diverse Environments, We Can Analyze, Diagnose and Edit Deep Reinforcement Learning Models Using Attribution”, et al 2020
- “Fourier Neural Operator for Parametric Partial Differential Equations”, et al 2020
- “Deep Learning-Based Classification of the Polar Emotions of ‘Moe’-Style Cartoon Pictures”, et al 2020b
- “Sharpness-Aware Minimization (SAM) for Efficiently Improving Generalization”, et al 2020
- “Demonstrating That Dataset Domains Are Largely Linearly Separable in the Feature Space of Common CNNs”, 2020
- “Optimal Peanut Butter and Banana Sandwiches”, 2020
- “Accuracy and Performance Comparison of Video Action Recognition Approaches”, et al 2020
- “A Digital Biomarker of Diabetes from Smartphone-Based Vascular Signals”, et al 2020
- “SSD vs. YOLO for Detection of Outdoor Urban Advertising Panels under Multiple Variabilities”, et al 2020
- “On Robustness and Transferability of Convolutional Neural Networks”, et al 2020
- “NVAE: A Deep Hierarchical Variational Autoencoder”, 2020
- “CoCoNuT: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair”, et al 2020
- “The Many Faces of Robustness: A Critical Analysis of Out-Of-Distribution Generalization”, et al 2020
- “SimCLRv2: Big Self-Supervised Models Are Strong Semi-Supervised Learners”, et al 2020
- “FBNetV3: Joint Architecture-Recipe Search Using Predictor Pretraining”, et al 2020
- “Danny Hernandez on Forecasting and the Drivers of AI Progress”, et al 2020
- “Measuring the Algorithmic Efficiency of Neural Networks”, 2020
- “AI and Efficiency: We’re Releasing an Analysis Showing That Since 2012 the Amount of Compute Needed to Train a Neural Net to the Same Performance on ImageNet Classification Has Been Decreasing by a Factor of 2 Every 16 Months”, 2020
- “Reinforcement Learning With Augmented Data”, et al 2020
- “YOLOv4: Optimal Speed and Accuracy of Object Detection”, et al 2020
- “Scaling Laws from the Data Manifold Dimension”, 2020
- “Shortcut Learning in Deep Neural Networks”, et al 2020
- “Evolving Normalization-Activation Layers”, et al 2020
- “Conditional Convolutions for Instance Segmentation”, et al 2020
- “Train-By-Reconnect: Decoupling Locations of Weights from Their Values (LaPerm)”, 2020
- “Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited”, et al 2020
- “Do We Need Zero Training Loss After Achieving Zero Training Error?”, et al 2020
- “Bayesian Deep Learning and a Probabilistic Perspective of Generalization”, 2020
- “A Simple Framework for Contrastive Learning of Visual Representations”, et al 2020
- “Growing Neural Cellular Automata: Differentiable Model of Morphogenesis”, et al 2020
- “Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving”, et al 2020
- “First-In-Human Evaluation of a Hand-Held Automated Venipuncture Device for Rapid Venous Blood Draws”, et al 2020
- “ImageNet-A: Natural Adversarial Examples”, et al 2020
- “Deep-Eyes: Fully Automatic Anime Character Colorization With Painting of Details on Empty Pupils”, et al 2020
- “CARN: Convolutional Anchored Regression Network for Fast and Accurate Single Image Super-Resolution”, 2020
- “The Importance of Deconstruction”, 2020
- “Big Transfer (BiT): General Visual Representation Learning”, et al 2019
- “Linear Mode Connectivity and the Lottery Ticket Hypothesis”, et al 2019
- “Dynamic Convolution: Attention over Convolution Kernels”, et al 2019
- “Deep Double Descent: We Show That the Double Descent Phenomenon Occurs in CNNs, ResNets, and Transformers: Performance First Improves, Then Gets Worse, and Then Improves Again With Increasing Model Size, Data Size, or Training Time”, et al 2019
- “Fantastic Generalization Measures and Where to Find Them”, et al 2019
- “Anonymous Market Product Classification Based on Deep Learning”, et al 2019b
- “The Origins and Prevalence of Texture Bias in Convolutional Neural Networks”, et al 2019
- “How Machine Learning Can Help Unlock the World of Ancient Japan”, 2019
- “SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning”, et al 2019
- “Self-Training With Noisy Student Improves ImageNet Classification”, et al 2019
- “Taxonomy of Real Faults in Deep Learning Systems”, et al 2019
- “On the Measure of Intelligence”, 2019
- “DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames”, et al 2019
- “Accelerating Deep Learning by Focusing on the Biggest Losers”, et al 2019
- “ANIL: Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML”, et al 2019
- “ObjectNet: A Large-Scale Bias-Controlled Dataset for Pushing the Limits of Object Recognition Models”, et al 2019
- “CAR: Learned Image Downscaling for Upscaling Using Content Adaptive Resampler”, 2019
- “Finding the Needle in the Haystack With Convolutions: on the Benefits of Architectural Bias”, d’ et al 2019
- “Intriguing Properties of Adversarial Training at Scale”, 2019
- “Adversarial Robustness As a Prior for Learned Representations”, et al 2019
- “Human-Level Performance in 3D Multiplayer Games With Population-Based Reinforcement Learning”, et al 2019
- “ImageNet-Sketch: Learning Robust Global Representations by Penalizing Local Predictive Power”, et al 2019
- “Cold Case: The Lost MNIST Digits”, 2019
- “Improved Object Recognition Using Neural Networks Trained to Mimic the Brain’s Statistical Properties”, et al 2019
- “Neural System Identification With Neural Information Flow”, et al 2019
- “Percival: Making In-Browser Perceptual Ad Blocking Practical With Deep Learning”, et al 2019
- “CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features”, et al 2019
- “Adversarial Examples Are Not Bugs, They Are Features”, et al 2019
- “Searching for MobileNetV3”, et al 2019
- “Billion-Scale Semi-Supervised Learning for Image Classification”, et al 2019
- “A Recipe for Training Neural Networks”, 2019
- “NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection”, et al 2019
- “COCO-GAN: Generation by Parts via Conditional Coordinating”, et al 2019
- “Benchmarking Neural Network Robustness to Common Corruptions and Perturbations”, 2019
- “Semantic Image Synthesis With Spatially-Adaptive Normalization”, et al 2019
- “The Bitter Lesson”, 2019
- “Learning To Follow Directions in Street View”, et al 2019
- “SuperTML: Two-Dimensional Word Embedding for the Precognition on Structured Tabular Data”, et al 2019
- “Real-Time Continuous Transcription With Live Transcribe”, 2019
- “Do We Train on Test Data? Purging CIFAR of Near-Duplicates”, 2019
- “Pay Less Attention With Lightweight and Dynamic Convolutions”, et al 2019
- “Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition”, et al 2019
- “Detecting Advertising on Building Façades With Computer Vision”, 2019
- “On Lazy Training in Differentiable Programming”, et al 2018
- “Quantifying Generalization in Reinforcement Learning”, et al 2018
- “ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware”, et al 2018
- “ImageNet-Trained CNNs Are Biased towards Texture; Increasing Shape Bias Improves Accuracy and Robustness”, et al 2018
- “Evolving Space-Time Neural Architectures for Videos”, et al 2018
- “ADNet: A Deep Network for Detecting Adverts”, et al 2018
- “AdVersarial: Perceptual Ad Blocking Meets Adversarial Machine Learning”, et al 2018
- “FloWaveNet: A Generative Flow for Raw Audio”, et al 2018
- “StreetNet: Preference Learning With Convolutional Neural Network on Urban Crime Perception”, et al 2018
- “Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation”, et al 2018
- “Understanding and Correcting Pathologies in the Training of Learned Optimizers”, et al 2018
- “Graph Convolutional Reinforcement Learning”, et al 2018
- “Cellular Automata As Convolutional Neural Networks”, 2018
- “Don’t Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization”, et al 2018
- “Human-Like Playtesting With Deep Learning”, et al 2018
- “CurriculumNet: Weakly Supervised Learning from Large-Scale Web Images”, et al 2018
- “MnasNet: Platform-Aware Neural Architecture Search for Mobile”, et al 2018
- “LEO: Meta-Learning With Latent Embedding Optimization”, et al 2018
- “Glow: Generative Flow With Invertible 1×1 Convolutions”, 2018
- “The Goldilocks Zone: Towards Better Understanding of Neural Network Loss Landscapes”, 2018
- “Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations”, 2018
- “Confounding Variables Can Degrade Generalization Performance of Radiological Deep Learning Models”, et al 2018
- “Faster SGD Training by Minibatch Persistency”, et al 2018
- “Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks”, et al 2018
- “Resource-Efficient Neural Architect”, et al 2018
- “More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch”, et al 2018
- “Deep Learning Generalizes Because the Parameter-Function Map Is Biased towards Simple Functions”, Valle- et al 2018
- “Bidirectional Learning for Robust Neural Networks”, Pontes-2018
- “BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning”, et al 2018
- “Self-Distillation: Born Again Neural Networks”, et al 2018
- “Tile2Vec: Unsupervised Representation Learning for Spatially Distributed Data”, et al 2018
- “Exploring the Limits of Weakly Supervised Pretraining”, et al 2018
- “YOLOv3: An Incremental Improvement”, 2018
- “Reptile: On First-Order Meta-Learning Algorithms”, et al 2018
- “Essentially No Barriers in Neural Network Energy Landscape”, et al 2018
- “Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs”, et al 2018
- “Guess, Check and Fix: a Phenomenology of Improvisation in ‘Neural’ Painting”, 2018
- “Sim-To-Real Optimization of Complex Real World Mobile Network With Imperfect Information via Deep Reinforcement Learning from Self-Play”, et al 2018
- “Evolved Policy Gradients”, et al 2018
- “Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-Of-The-Art Deep Artificial Neural Networks”, et al 2018
- “IMPALA: Scalable Distributed Deep-RL With Importance Weighted Actor-Learner Architectures”, et al 2018
- “Active, Continual Fine Tuning of Convolutional Neural Networks for Reducing Annotation Efforts”, et al 2018
- “ArcFace: Additive Angular Margin Loss for Deep Face Recognition”, et al 2018
- “Man against Machine: Diagnostic Performance of a Deep Learning Convolutional Neural Network for Dermoscopic Melanoma Recognition in Comparison to 58 Dermatologists”, et al 2018
- “DeepGS: Predicting Phenotypes from Genotypes Using Deep Learning”, et al 2017
- “Deep Image Reconstruction from Human Brain Activity”, et al 2017
- “Visualizing the Loss Landscape of Neural Nets”, et al 2017
- “SPP-Net: Deep Absolute Pose Regression With Synthetic Views”, et al 2017
- “China’s AI Advances Help Its Tech Industry, and State Security”, 2017
- “Measuring the Tendency of CNNs to Learn Surface Statistical Regularities”, 2017
- “3D Semantic Segmentation With Submanifold Sparse Convolutional Networks”, et al 2017
- “BlockDrop: Dynamic Inference Paths in Residual Networks”, et al 2017
- “Knowledge Concentration: Learning 100K Object Classifiers in a Single CNN”, et al 2017
- “The Signature of Robot Action Success in EEG Signals of a Human Observer: Decoding and Visualization Using Deep Convolutional Neural Networks”, et al 2017
- “11K Hands: Gender Recognition and Biometric Identification Using a Large Dataset of Hand Images”, 2017
- “Learning to Play Chess With Minimal Lookahead and Deep Value Neural Networks”, 2017 (page 3)
- “Learning to Generalize: Meta-Learning for Domain Generalization”, et al 2017
- “High-Precision Automated Reconstruction of Neurons With Flood-Filling Networks”, et al 2017
- “Efficient K-Shot Learning With Regularized Deep Networks”, et al 2017
- “NIMA: Neural Image Assessment”, 2017
- “Squeeze-And-Excitation Networks”, et al 2017
- “What Does a Convolutional Neural Network Recognize in the Moon?”, 2017
- “SMASH: One-Shot Model Architecture Search through HyperNetworks”, et al 2017
- “BitNet: Bit-Regularized Deep Neural Networks”, et al 2017
- “A Deep Architecture for Unified Esthetic Prediction”, 2017
- “Learning With Rethinking: Recurrently Improving Convolutional Neural Networks through Feedback”, et al 2017
- “WebVision Database: Visual Learning and Understanding from Web Data”, et al 2017
- “Focal Loss for Dense Object Detection”, et al 2017
- “Active Learning for Convolutional Neural Networks: A Core-Set Approach”, 2017
- “Learning to Infer Graphics Programs from Hand-Drawn Images”, et al 2017
- “A Downsampled Variant of ImageNet As an Alternative to the CIFAR Datasets”, et al 2017
- “Learning Transferable Architectures for Scalable Image Recognition”, et al 2017
- “Efficient Architecture Search by Network Transformation”, et al 2017
- “A Simple Neural Attentive Meta-Learner”, et al 2017
- “Towards Deep Learning Models Resistant to Adversarial Attacks”, et al 2017
- “Gradient Diversity: a Key Ingredient for Scalable Distributed Learning”, et al 2017
- “Device Placement Optimization With Reinforcement Learning”, et al 2017
- “Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour”, et al 2017
- “Submanifold Sparse Convolutional Networks”, 2017
- “A Simple Neural Network Module for Relational Reasoning”, et al 2017
- “Deep Learning Is Robust to Massive Label Noise”, et al 2017
- “What Makes a Good Image? Airbnb Demand Analytics Leveraging Interpretable Image Features”, et al 2017
- “Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers”, 2017
- “BAM! The Behance Artistic Media Dataset for Recognition Beyond Photography”, et al 2017
- “Adversarial Neural Machine Translation”, et al 2017
- “Multi-Scale Dense Networks for Resource Efficient Image Classification”, et al 2017
- “Scaling the Scattering Transform: Deep Hybrid Networks”, et al 2017
- “Mask R-CNN”, et al 2017
- “Using Human Brain Activity to Guide Machine Learning”, et al 2017
- “Learned Optimizers That Scale and Generalize”, et al 2017
- “Prediction and Control With Temporal Segment Models”, et al 2017
- “Parallel Multiscale Autoregressive Density Estimation”, et al 2017
- “Convolution Aware Initialization”, 2017
- “Gender-From-Iris or Gender-From-Mascara?”, et al 2017
- “BrainNetCNN: Convolutional Neural Networks for Brain Networks; towards Predicting Neurodevelopment”, et al 2017
- “Universal Representations: The Missing Link between Faces, Text, Planktons, and Cat Breeds”, 2017
- “PixelCNN++: Improving the PixelCNN With Discretized Logistic Mixture Likelihood and Other Modifications”, et al 2017
- “YOLO9000: Better, Faster, Stronger”, 2016
- “Language Modeling With Gated Convolutional Networks”, et al 2016
- “Learning from Simulated and Unsupervised Images through Adversarial Training”, et al 2016
- “LipNet: End-To-End Sentence-Level Lipreading”, et al 2016
- “Feature Pyramid Networks for Object Detection”, et al 2016
- “Self-Critical Sequence Training for Image Captioning”, et al 2016
- “ResNeXt: Aggregated Residual Transformations for Deep Neural Networks”, et al 2016
- “Responses to Critiques on Machine Learning of Criminality Perceptions (Addendum of arXiv:1611.04135)”, 2016
- “Understanding Deep Learning Requires Rethinking Generalization”, et al 2016
- “Designing Neural Network Architectures Using Reinforcement Learning”, et al 2016
- “VPN: Video Pixel Networks”, et al 2016
- “HyperNetworks”, et al 2016
- “Neural Photo Editing With Introspective Adversarial Networks”, et al 2016
- “On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima”, et al 2016
- “WaveNet: A Generative Model for Raw Audio”, et al 2016
- “Direct Feedback Alignment Provides Learning in Deep Neural Networks”, 2016
- “Deep Learning Human Mind for Automated Visual Classification”, et al 2016
- “Temporal Convolutional Networks: A Unified Approach to Action Segmentation”, et al 2016
- “DenseNet: Densely Connected Convolutional Networks”, et al 2016
- “Clockwork Convnets for Video Semantic Segmentation”, et al 2016
- “Deep Learning the City: Quantifying Urban Perception At A Global Scale”, et al 2016
- “Convolutional Neural Fabrics”, 2016
- “Deep Neural Networks Are Robust to Weight Binarization and Other Non-Linear Distortions”, et al 2016
- “DeepLab: Semantic Image Segmentation With Deep Convolutional Nets, Atrous Convolution (ASPP), and Fully Connected CRFs”, et al 2016
- “FractalNet: Ultra-Deep Neural Networks without Residuals”, et al 2016
- “Wide Residual Networks”, 2016
- “Residual Networks Behave Like Ensembles of Relatively Shallow Networks”, et al 2016
- “Neural Autoregressive Distribution Estimation”, et al 2016
- “ViZDoom: A Doom-Based AI Research Platform for Visual Reinforcement Learning”, et al 2016
- “OHEM: Training Region-Based Object Detectors With Online Hard Example Mining”, et al 2016
- “Deep Networks With Stochastic Depth”, et al 2016
- “Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing”, et al 2016
- “Do Deep Convolutional Nets Really Need to Be Deep and Convolutional?”, et al 2016
- “XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks”, et al 2016
- “Learning Hand-Eye Coordination for Robotic Grasping With Deep Learning and Large-Scale Data Collection”, et al 2016
- “Network Morphism”, et al 2016
- “Inception-V4, Inception-ResNet and the Impact of Residual Connections on Learning”, et al 2016
- “PlaNet—Photo Geolocation With Convolutional Neural Networks”, et al 2016
- “Value Iteration Networks”, et al 2016
- “PixelRNN: Pixel Recurrent Neural Networks”, et al 2016
- “Image Synthesis from Yahoo’s open_nsfw”, 2016
- “Deep Residual Learning for Image Recognition”, et al 2015
- “Microsoft Researchers Win ImageNet Computer Vision Challenge”, 2015
- “Adding Gradient Noise Improves Learning for Very Deep Networks”, et al 2015
- “The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition”, et al 2015
- “Learning Visual Features from Large Weakly Supervised Data”, et al 2015
- “
Illustration2Vec: a Semantic Vector Representation of Illustrations”, 2015- “BinaryConnect: Training Deep Neural Networks With Binary Weights during Propagations”, et al 2015
- “Predicting and Understanding Urban Perception With Convolutional Neural Networks”, et al 2015
- “A Neural Attention Model for Abstractive Sentence Summarization”, et al 2015
- “LSUN: Construction of a Large-Scale Image Dataset Using Deep Learning With Humans in the Loop”, et al 2015
- “You Only Look Once: Unified, Real-Time Object Detection”, et al 2015
- “Clothing-1M: Learning from Massive Noisy Labeled Data for Image Classification”, et al 2015
- “Dropout As a Bayesian Approximation: Representing Model Uncertainty in Deep Learning”, 2015
- “STN: Spatial Transformer Networks”, et al 2015
- “Faster R-CNN: Towards Real-Time Object Detection With Region Proposal Networks”, et al 2015
- “Cyclical Learning Rates for Training Neural Networks”, 2015
- “Deep Learning”, et al 2015
- “Fast R-CNN”, 2015
- “End-To-End Training of Deep Visuomotor Policies”, et al 2015
- “FaceNet: A Unified Embedding for Face Recognition and Clustering”, et al 2015
- “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”, 2015
- “DeepID3: Face Recognition With Very Deep Neural Networks”, et al 2015
- “Explaining and Harnessing Adversarial Examples”, et al 2014
- “Understanding Image Representations by Measuring Their Equivariance and Equivalence”, 2014
- “Going Deeper With Convolutions”, et al 2014
- “Very Deep Convolutional Networks for Large-Scale Image Recognition”, 2014
- “ImageNet Large Scale Visual Recognition Challenge”, et al 2014
- “Deep Learning Face Representation by Joint Identification-Verification”, et al 2014
- “One Weird Trick for Parallelizing Convolutional Neural Networks”, 2014
- “Network In Network”, et al 2013
- “R-CNN: Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation”, et al 2013
- “Maxout Networks”, et al 2013
- “ImageNet Classification With Deep Convolutional Neural Networks”, et al 2012
- “Multi-Column Deep Neural Network for Traffic Sign Classification”, Cireşan et al 2012b
- “Multi-Column Deep Neural Networks for Image Classification”, Cireşan et al 2012
- “Building High-Level Features Using Large Scale Unsupervised Learning”, et al 2011
- “DanNet: Flexible, High Performance Convolutional Neural Networks for Image Classification”, et al 2011
- “Hypernetworks [Blog]”, 2024
- “Deconvolution and Checkerboard Artifacts”
- “Hierarchical Object Detection With Deep Reinforcement Learning”
- “Creating a 17 KB Style Transfer Model With Layer Pruning and Quantization”, 2024
- “Now Anyone Can Train Imagenet in 18 Minutes”
- “Cats, Rats, A.I., Oh My!”
- Wikipedia
- Miscellaneous
- Bibliography