“‘Active Learning’ Tag”, 2019-12-15:
Bibliography for tag reinforcement-learning/exploration/active-learning, most recent first: 2 related tags, 109 annotations, & 25 links (parent).
- See Also
- Links
- “Probing the Decision Boundaries of In-Context Learning in Large Language Models”, et al 2024
- “Beyond Model Collapse: Scaling Up With Synthesized Data Requires Reinforcement”, et al 2024
- “Artificial Intelligence for Retrosynthetic Planning Needs Both Data and Expert Knowledge”, Strieth-Kalthoff et al 2024
- “Sparse Universal Transformer”, et al 2023
- “Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models”, et al 2023
- “AlpaGasus: Training A Better Alpaca With Fewer Data”, et al 2023
- “Instruction Mining: High-Quality Instruction Data Selection for Large Language Models”, et al 2023
- “No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-Based Language Models”, et al 2023
- “Estimating Label Quality and Errors in Semantic Segmentation Data via Any Model”, 2023
- “Self Expanding Neural Networks”, et al 2023
- “DoReMi: Optimizing Data Mixtures Speeds Up Language Model Pretraining”, et al 2023
- “Chatting With GPT-3 for Zero-Shot Human-Like Mobile Automated GUI Testing”, et al 2023
- “TinyStories: How Small Can Language Models Be and Still Speak Coherent English?”, 2023
- “Q2d: Turning Questions into Dialogs to Teach Models How to Search”, et al 2023
- “Segment Anything”, et al 2023
- “Scaling Expert Language Models With Unsupervised Domain Discovery”, et al 2023
- “Modern Bayesian Experimental Design”, et al 2023
- “Unifying Approaches in Active Learning and Active Sampling via Fisher Information and Information-Theoretic Quantities”, 2023
- “Embedding Synthetic Off-Policy Experience for Autonomous Driving via Zero-Shot Curricula”, et al 2022
- “CDCD: Continuous Diffusion for Categorical Data”, et al 2022
- “Query by Committee Made Real”, Gilad-Bachrach et al 2022
- “Weakly Supervised Structured Output Learning for Semantic Segmentation”, et al 2022
- “The Power of Ensembles for Active Learning in Image Classification”, et al 2022
- “Multi-Class Active Learning for Image Classification”, et al 2022
- “Multi-Class Active Learning by Uncertainty Sampling With Diversity Maximization”, et al 2022
- “The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes”, et al 2022
- “Detecting Label Errors in Token Classification Data”, 2022
- “RHO-LOSS: Prioritized Training on Points That Are Learnable, Worth Learning, and Not Yet Learnt”, et al 2022
- “Bamboo: Building Mega-Scale Vision Dataset Continually With Human-Machine Synergy”, et al 2022
- “Multi-Task Self-Training for Learning General Representations”, et al 2021
- “Predictive Coding: a Theoretical and Experimental Review”, et al 2021
- “Dataset Distillation With Infinitely Wide Convolutional Networks”, et al 2021
- “Stochastic Batch Acquisition: A Simple Baseline for Deep Active Learning”, et al 2021
- “Adapting the Function Approximation Architecture in Online Reinforcement Learning”, 2021
- “B-Pref: Benchmarking Preference-Based Reinforcement Learning”, et al 2021
- “Fully General Online Imitation Learning”, et al 2021
- “When Do Curricula Work?”, et al 2020
- “Dataset Meta-Learning from Kernel Ridge-Regression”, et al 2020
- “Dataset Cartography: Mapping and Diagnosing Datasets With Training Dynamics”, et al 2020
- “BanditPAM: Almost Linear Time k-Medoids Clustering via Multi-Armed Bandits”, et al 2020
- “Exploring Bayesian Optimization: Breaking Bayesian Optimization into Small, Sizeable Chunks”, 2020
- “Small-GAN: Speeding Up GAN Training Using Core-Sets”, et al 2019
- “A Deep Active Learning System for Species Identification and Counting in Camera Trap Images”, et al 2019
- “On Warm-Starting Neural Network Training”, 2019
- “Accelerating Deep Learning by Focusing on the Biggest Losers”, et al 2019
- “Data Valuation Using Reinforcement Learning”, et al 2019
- “BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning”, et al 2019
- “BADGE: Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds”, et al 2019
- “Population Based Augmentation: Efficient Learning of Augmentation Policy Schedules”, et al 2019
- “Learning Loss for Active Learning”, 2019
- “A Recipe for Training Neural Networks”, 2019
- “ProductNet: a Collection of High-Quality Datasets for Product Representation Learning”, et al 2019
- “End-To-End Robotic Reinforcement Learning without Reward Engineering”, et al 2019
- “Data Shapley: Equitable Valuation of Data for Machine Learning”, 2019
- “Learning from Dialogue After Deployment: Feed Yourself, Chatbot!”, et al 2019
- “The Open Images Dataset V4: Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale”, et al 2018
- “Computational Mechanisms of Curiosity and Goal-Directed Exploration”, et al 2018
- “Conditional Neural Processes”, et al 2018
- “Meta-Learning Transferable Active Learning Policies by Deep Reinforcement Learning”, et al 2018
- “More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch”, et al 2018
- “Fingerprint Policy Optimization for Robust Reinforcement Learning”, et al 2018
- “AutoAugment: Learning Augmentation Policies from Data”, et al 2018
- “Optimization, Fast and Slow: Optimally Switching between Local and Bayesian Optimization”, et al 2018
- “Estimate and Replace: A Novel Approach to Integrating Deep Neural Networks With Existing Applications”, et al 2018
- “Active Learning With Partial Feedback”, et al 2018
- “Active, Continual Fine Tuning of Convolutional Neural Networks for Reducing Annotation Efforts”, et al 2018
- “Less Is More: Sampling Chemical Space With Active Learning”, et al 2018
- “The Eighty Five Percent Rule for Optimal Learning”, et al 2018
- “ScreenerNet: Learning Self-Paced Curriculum for Deep Neural Networks”, 2018
- “Learning a Generative Model for Validity in Complex Discrete Structures”, et al 2017
- “Learning by Asking Questions”, et al 2017
- “BlockDrop: Dynamic Inference Paths in Residual Networks”, et al 2017
- “Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent”, et al 2017
- “Classification With Costly Features Using Deep Reinforcement Learning”, et al 2017
- “Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-Sensitive Learning”, et al 2017
- “Why Pay More When You Can Pay Less: A Joint Learning Framework for Active Feature Acquisition and Classification”, et al 2017
- “Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks”, 2017
- “Active Learning for Convolutional Neural Networks: A Core-Set Approach”, 2017
- “Interpretable Active Learning”, et al 2017
- “Revisiting Unreasonable Effectiveness of Data in Deep Learning Era”, et al 2017
- “A Tutorial on Thompson Sampling”, et al 2017
- “Learning to Learn from Noisy Web Videos”, et al 2017
- “Teaching Machines to Describe Images via Natural Language Feedback”, 2017
- “Ask the Right Questions: Active Question Reformulation With Reinforcement Learning”, et al 2017
- “BAM! The Behance Artistic Media Dataset for Recognition Beyond Photography”, et al 2017
- “PBO: Preferential Bayesian Optimization”, et al 2017
- “OHEM: Training Region-Based Object Detectors With Online Hard Example Mining”, et al 2016
- “The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition”, et al 2015
- “LSUN: Construction of a Large-Scale Image Dataset Using Deep Learning With Humans in the Loop”, et al 2015
- “Dropout As a Bayesian Approximation: Representing Model Uncertainty in Deep Learning”, 2015
- “Just Sort It! A Simple and Effective Approach to Active Preference Learning”, 2015
- “Learning With Intelligent Teacher: Similarity Control and Knowledge Transfer”, 2015
- “Minimax Analysis of Active Learning”, 2014
- “Algorithmic and Human Teaching of Sequential Decision Tasks”, 2012
- “Bayesian Active Learning for Classification and Preference Learning”, et al 2011
- “Rates of Convergence in Active Learning”, 2011
- “The True Sample Complexity of Active Learning”, et al 2010
- “Active Testing for Face Detection and Localization”, 2010
- “The Wisdom of the Few: a Collaborative Filtering Approach Based on Expert Opinions from the Web”, et al 2009
- “Learning and Example Selection for Object and Pattern Detection”, 1995
- “Information-Based Objective Functions for Active Data Selection”, MacKay 1992
- “Active Learning Literature Survey”
- “Brief Summary of the Panel Discussion at DL Workshop @ICML 2015”
- “Active Learning”
- “Aurora’s Approach to Development”
- “Active Learning for High Dimensional Inputs Using Bayesian Convolutional Neural Networks”
- “AI-Guided Robots Are Ready to Sort Your Recyclables”
- “When Self-Driving Cars Can’t Help Themselves, Who Takes the Wheel?”
- “How a Feel-Good AI Story Went Wrong in Flint: A Machine-Learning Model Showed Promising Results, but City Officials and Their Engineering Contractor Abandoned It.”
- Sort By Magic
- Wikipedia
- Miscellaneous
- Bibliography