“‘Continual Learning’ Tag”, 2019-12-21 (backlinks):
Bibliography for tag reinforcement-learning/meta-learning/continual-learning, most recent first: 1 related tag, 54 annotations, & 4 links (parent).
- See Also
- Links
- “LoRA vs Full Fine-Tuning: An Illusion of Equivalence”, et al 2024
- “Investigating Learning-Independent Abstract Reasoning in Artificial Neural Networks”, 2024
- “How Do Large Language Models Acquire Factual Knowledge During Pretraining?”, et al 2024
- “Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data”, et al 2024
- “Simple and Scalable Strategies to Continually Pre-Train Large Language Models”, et al 2024
- “Online Adaptation of Language Models With a Memory of Amortized Contexts (MAC)”, et al 2024
- “When Scaling Meets LLM Finetuning: The Effect of Data, Model and Finetuning Method”, et al 2024
- “Investigating Continual Pretraining in Large Language Models: Insights and Implications”, et al 2024
- “RAG vs Fine-Tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture”, et al 2024
- “LLaMA Pro: Progressive LLaMA With Block Expansion”, et al 2024
- “Large Language Models Relearn Removed Concepts”, et al 2024
- “Language Model Alignment With Elastic Reset”, et al 2023
- “In-Context Pretraining (ICP): Language Modeling Beyond Document Boundaries”, et al 2023
- “Loss of Plasticity in Deep Continual Learning (Continual Backpropagation)”, et al 2023
- “Continual Diffusion: Continual Customization of Text-To-Image Diffusion With C-LoRA”, et al 2023
- “Understanding Plasticity in Neural Networks”, et al 2023
- “The Forward-Forward Algorithm: Some Preliminary Investigations”, 2022
- “Broken Neural Scaling Laws”, et al 2022
- “Exclusive Supermask Subnetwork Training for Continual Learning”, 2022
- “Learn the Time to Learn: Replay Scheduling in Continual Learning”, et al 2022
- “On the Effectiveness of Compact Biomedical Transformers (CompactBioBERT)”, et al 2022
- “Don’t Stop Learning: Towards Continual Learning for the CLIP Model”, et al 2022
- “Fleet-DAgger: Interactive Robot Fleet Learning With Scalable Human Supervision”, et al 2022
- “Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline (3RL)”, et al 2022
- “CT0: Fine-Tuned Language Models Are Continual Learners”, et al 2022
- “Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models”, et al 2022
- “Continual Pre-Training Mitigates Forgetting in Language and Vision”, et al 2022
- “Continual Learning With Foundation Models: An Empirical Study of Latent Replay”, et al 2022
- “DualPrompt: Complementary Prompting for Rehearsal-Free Continual Learning”, et al 2022
- “Effect of Scale on Catastrophic Forgetting in Neural Networks”, et al 2022
- “The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention”, et al 2022
- “Learning to Prompt for Continual Learning”, et al 2021
- “An Empirical Investigation of the Role of Pre-Training in Lifelong Learning”, et al 2021
- “The Geometry of Representational Drift in Natural and Artificial Neural Networks”, et al 2021
- “Wide Neural Networks Forget Less Catastrophically”, et al 2021
- “Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora”, et al 2021
- “Continuous Coordination As a Realistic Scenario for Lifelong Learning”, et al 2021
- “Inductive Biases for Deep Learning of Higher-Level Cognition”, 2020
- “Learning from the Past: Meta-Continual Learning With Knowledge Embedding for Jointly Sketch, Cartoon, and Caricature Face Recognition”, et al 2020b
- “Meta-Learning through Hebbian Plasticity in Random Networks”, 2020
- “Learning to Learn With Feedback and Local Plasticity”, Lindsey & Litwin-Kumar 2020
- “Understanding the Role of Training Regimes in Continual Learning”, et al 2020
- “Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks”, et al 2020
- “Never Stop Learning: The Effectiveness of Fine-Tuning in Robotic Reinforcement Learning”, et al 2020
- “On Warm-Starting Neural Network Training”, 2019
- “Gated Linear Networks”, et al 2019
- “Learning and Evaluating General Linguistic Intelligence”, et al 2019
- “Self-Net: Lifelong Learning via Continual Self-Modeling”, et al 2018
- “Unicorn: Continual Learning With a Universal, Off-Policy Agent”, et al 2018
- “Meta Networks”, 2017
- “PathNet: Evolution Channels Gradient Descent in Super Neural Networks”, et al 2017
- “Overcoming Catastrophic Forgetting in Neural Networks”, et al 2016
- “Repeat Before Forgetting: Spaced Repetition for Efficient and Effective Training of Neural Networks”
- “Can LLMs Learn from a Single Example?”
- Sort By Magic
- Miscellaneous
- Bibliography