“‘Autoencoder NN’ Tag”, 2019-09-26:
Bibliography for tag ai/nn/vae, most recent first: 4 related tags, 108 annotations, & 8 links (parent).
- Links
- “Scaling the Codebook Size of VQGAN to 100,000 With a Utilization Rate of 99%”, et al 2024
- “Visual Autoregressive Modeling (VAR): Scalable Image Generation via Next-Scale Prediction”, et al 2024
- “Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data”, et al 2024
- “Neural Network Parameter Diffusion”, et al 2024
- “Attention versus Contrastive Learning of Tabular Data—A Data-Centric Benchmarking”, et al 2024
- “Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet”
- “GIVT: Generative Infinite-Vocabulary Transformers”, et al 2023
- “Sequential Modeling Enables Scalable Learning for Large Vision Models”, et al 2023
- “Finite Scalar Quantization (FSQ): VQ-VAE Made Simple”, et al 2023
- “DeWave: Discrete EEG Waves Encoding for Brain Dynamics to Text Translation”, et al 2023
- “Finding Neurons in a Haystack: Case Studies With Sparse Probing”, et al 2023
- “TANGO: Text-To-Audio Generation Using Instruction-Tuned LLM and Latent Diffusion Model”, et al 2023
- “ACT: Learning Fine-Grained Bimanual Manipulation With Low-Cost Hardware”, et al 2023
- “Bridging Discrete and Backpropagation: Straight-Through and Beyond”, et al 2023
- “Low-Bitrate Redundancy Coding of Speech Using a Rate-Distortion-Optimized Variational Autoencoder”, et al 2022
- “IRIS: Transformers Are Sample-Efficient World Models”, et al 2022
- “Understanding Diffusion Models: A Unified Perspective”, 2022
- “Vector Quantized Image-To-Image Translation”, et al 2022
- “Draft-And-Revise: Effective Image Generation With Contextual RQ-Transformer”, et al 2022
- “UViM: A Unified Modeling Approach for Vision With Learned Guiding Codes”, et al 2022
- “Closing the Gap: Exact Maximum Likelihood Training of Generative Autoencoders Using Invertible Layers (AEF)”, et al 2022
- “AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars”, et al 2022
- “AdaVAE: Exploring Adaptive GPT-2s in Variational Autoencoders for Language Modeling”, et al 2022
- “NaturalSpeech: End-To-End Text to Speech Synthesis With Human-Level Quality”, et al 2022
- “VQGAN-CLIP: Open Domain Image Generation and Editing With Natural Language Guidance”, et al 2022
- “TATS: Long Video Generation With Time-Agnostic VQGAN and Time-Sensitive Transformer”, et al 2022
- “Diffusion Probabilistic Modeling for Video Generation”, et al 2022
- “Polarity Sampling: Quality and Diversity Control of Pre-Trained Generative Networks via Singular Values”, et al 2022
- “Vector-Quantized Image Modeling With Improved VQGAN”, et al 2022
- “Variational Autoencoders Without the Variation”, et al 2022
- “Truncated Diffusion Probabilistic Models and Diffusion-Based Adversarial Autoencoders”, et al 2022
- “MLR: A Model of Working Memory for Latent Representations”, et al 2022
- “CM3: A Causal Masked Multimodal Model of the Internet”, et al 2022
- “Design Guidelines for Prompt Engineering Text-To-Image Generative Models”, 2022b
- “DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents”, et al 2022
- “ERNIE-ViLG: Unified Generative Pre-Training for Bidirectional Vision-Language Generation”, et al 2021
- “High-Resolution Image Synthesis With Latent Diffusion Models”, et al 2021
- “Discovering State Variables Hidden in Experimental Data”, et al 2021
- “VQ-DDM: Global Context With Discrete Diffusion in Vector Quantized Modeling for Image Generation”, et al 2021
- “Vector Quantized Diffusion Model for Text-To-Image Synthesis”, et al 2021
- “Passive Non-Line-Of-Sight Imaging Using Optimal Transport”, et al 2021
- “L-Verse: Bidirectional Generation Between Image and Text”, et al 2021
- “Unsupervised Deep Learning Identifies Semantic Disentanglement in Single Inferotemporal Face Patch Neurons”, et al 2021
- “Telling Creative Stories Using Generative Visual Aids”, 2021
- “Illiterate DALL·E Learns to Compose”, et al 2021
- “MeLT: Message-Level Transformer With Masked Document Representations As Pre-Training for Stance Detection”, et al 2021
- “Score-Based Generative Modeling in Latent Space”, et al 2021
- “NWT: Towards Natural Audio-To-Video Generation With Representation Learning”, et al 2021
- “Vector Quantized Models for Planning”, et al 2021
- “VideoGPT: Video Generation Using VQ-VAE and Transformers”, et al 2021
- “TSDAE: Using Transformer-Based Sequential Denoising Autoencoder for Unsupervised Sentence Embedding Learning”, et al 2021
- “Symbolic Music Generation With Diffusion Models”, et al 2021
- “Deep Generative Modeling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models”, Bond-Taylor et al 2021
- “Greedy Hierarchical Variational Autoencoders (GHVAEs) for Large-Scale Video Prediction”, et al 2021
- “CW-VAE: Clockwork Variational Autoencoders”, et al 2021
- “Denoising Diffusion Implicit Models”, et al 2021
- “DALL·E 1: Creating Images from Text: We’ve Trained a Neural Network Called DALL·E That Creates Images from Text Captions for a Wide Range of Concepts Expressible in Natural Language”, et al 2021
- “VQ-GAN: Taming Transformers for High-Resolution Image Synthesis”, et al 2020
- “Multimodal Dynamics Modeling for Off-Road Autonomous Vehicles”, et al 2020
- “Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images”, 2020
- “NVAE: A Deep Hierarchical Variational Autoencoder”, 2020
- “Jukebox: A Generative Model for Music”, et al 2020
- “Jukebox: We’re Introducing Jukebox, a Neural Net That Generates Music, including Rudimentary Singing, As Raw Audio in a Variety of Genres and Artist Styles. We’re Releasing the Model Weights and Code, along With a Tool to Explore the Generated Samples.”, et al 2020
- “RL Agents Implicitly Learning Human Preferences”, 2020
- “Encoding Musical Style With Transformer Autoencoders”, et al 2019
- “Generating Furry Face Art from Sketches Using a GAN”, 2019
- “BART: Denoising Sequence-To-Sequence Pre-Training for Natural Language Generation, Translation, and Comprehension”, et al 2019
- “Bayesian Parameter Estimation Using Conditional Variational Autoencoders for Gravitational-Wave Astronomy”, et al 2019
- “In-Field Whole Plant Maize Architecture Characterized by Latent Space Phenotyping”, et al 2019
- “Generating Diverse High-Fidelity Images With VQ-VAE-2”, et al 2019
- “Bit-Swap: Recursive Bits-Back Coding for Lossless Compression With Hierarchical Latent Variables”, et al 2019
- “Hierarchical Autoregressive Image Models With Auxiliary Decoders”, et al 2019
- “Practical Lossless Compression With Latent Variables Using Bits Back Coding”, et al 2019
- “An Empirical Model of Large-Batch Training”, et al 2018
- “How AI Training Scales”, et al 2018
- “Neural Probabilistic Motor Primitives for Humanoid Control”, et al 2018
- “Piano Genie”, et al 2018
- “IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis”, et al 2018
- “InfoNCE: Representation Learning With Contrastive Predictive Coding (CPC)”, et al 2018
- “The Challenge of Realistic Music Generation: Modeling Raw Audio at Scale”, et al 2018
- “Self-Net: Lifelong Learning via Continual Self-Modeling”, et al 2018
- “GANomaly: Semi-Supervised Anomaly Detection via Adversarial Training”, et al 2018
- “XGAN: Unsupervised Image-To-Image Translation for Many-To-Many Mappings”, et al 2017
- “VQ-VAE: Neural Discrete Representation Learning”, et al 2017
- “Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration”, et al 2017
- “β-VAE: Learning Basic Visual Concepts With a Constrained Variational Framework”, et al 2017
- “Neural Audio Synthesis of Musical Notes With WaveNet Autoencoders”, et al 2017
- “Prediction and Control With Temporal Segment Models”, et al 2017
- “Discovering Objects and Their Relations from Entangled Scene Representations”, et al 2017
- “Categorical Reparameterization With Gumbel-Softmax”, et al 2016
- “The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables”, et al 2016
- “Improving Sampling from Generative Autoencoders With Markov Chains”, et al 2016
- “Language As a Latent Variable: Discrete Generative Models for Sentence Compression”, 2016
- “Neural Photo Editing With Introspective Adversarial Networks”, et al 2016
- “Early Visual Concept Learning With Unsupervised Deep Learning”, et al 2016
- “Improving Variational Inference With Inverse Autoregressive Flow”, et al 2016
- “How Far Can We Go without Convolution: Improving Fully-Connected Networks”, et al 2015
- “Semi-Supervised Sequence Learning”, 2015
- “MADE: Masked Autoencoder for Distribution Estimation”, et al 2015
- “Analyzing Noise in Autoencoders and Deep Networks”, et al 2014
- “Stochastic Backpropagation and Approximate Inference in Deep Generative Models”, et al 2014
- “Auto-Encoding Variational Bayes”, 2013
- “Building High-Level Features Using Large Scale Unsupervised Learning”, et al 2011
- “A Connection Between Score Matching and Denoising Autoencoders”, 2011
- “Reducing the Dimensionality of Data With Neural Networks”, 2006
- “Generating Large Images from Latent Vectors”, 2024
- “Transformers As Variational Autoencoders”
- “Randomly Traversing the Manifold of Faces (2): Dataset: Labeled Faces in the Wild (LFW); Model: Variational Autoencoder (VAE) / Deep Latent Gaussian Model (DLGM).”