- See Also
- Links
- “MUX-PLMs: Pre-training Language Models With Data Multiplexing”, Murahari et al 2023
- “DataMUX: Data Multiplexing for Neural Networks”, Et Al 2023
- “Noise Transforms Feed-Forward Networks into Sparse Coding Networks”, 2022
- “Exploring Low Rank Training of Deep Neural Networks”, Et Al 2022
- “Monolith: Real Time Recommendation System With Collisionless Embedding Table”, Et Al 2022
- “More ConvNets in the 2020s: Scaling up Kernels Beyond 51×51 Using Sparsity (SLaK)”, Et Al 2022
- “Building Machine Translation Systems for the Next Thousand Languages”, Et Al 2022
- “Monarch: Expressive Structured Matrices for Efficient and Accurate Training”, Et Al 2022
- “NeuPL: Neural Population Learning”, Liu et al 2022
- “Datamodels: Predicting Predictions from Training Data”, Et Al 2022
- “Spiking Neural Networks and Their Applications: A Review”, Et Al 2022
- “Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters”, Et Al 2021
- “EvilModel: Hiding Malware Inside of Neural Network Models”, Et Al 2021
- “LoRA: Low-Rank Adaptation of Large Language Models”, Hu et al 2021
- “Clusterability in Neural Networks”, Et Al 2021
- “Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks”, Et Al 2021
- “Scaling down Deep Learning”, 2020
- “Extreme Model Compression for On-device Natural Language Understanding”, Et Al 2020
- “Neural Arithmetic Units”, 2020
- “Learning to Seek: Autonomous Source Seeking With Deep Reinforcement Learning Onboard a Nano Drone Microcontroller”, Et Al 2019
- “Weight Agnostic Neural Networks”, 2019
- “StyleNAS: An Empirical Study of Neural Architecture Search to Uncover Surprisingly Fast End-to-End Universal Style Transfer Networks”, An Et Al 2019
- “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”, 2019
- “Superposition of Many Models into One”, Et Al 2019
- “Playing Atari With Six Neurons”, Et Al 2018
- “Measuring the Intrinsic Dimension of Objective Landscapes”, Et Al 2018
- “SqueezeNext: Hardware-Aware Neural Network Design”, Et Al 2018
- “Wide Compression: Tensor Ring Nets”, Et Al 2018
- “Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing”, 2018
- “Fix Your Classifier: the Marginal Value of Training the Last Weight Layer”, Et Al 2018
- “Learning Compact Recurrent Neural Networks With Block-Term Tensor Decomposition”, Et Al 2017
- “XUnit: Learning a Spatial Activation Function for Efficient Image Restoration”, Et Al 2017
- “Natural Language Processing With Small Feed-Forward Networks”, Et Al 2017
- “ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices”, Et Al 2017
- “Shake-Shake Regularization of 3-branch Residual Networks”, 2017
- “Bonsai: Resource-efficient Machine Learning in 2KB RAM for the Internet of Things”, Kumar et al 2017
- “Tensorizing Neural Networks”, Et Al 2015
- “Eight Pairs of Descending Visual Neurons in the Dragonfly Give Wing Motor Centers Accurate Population Vector of Prey Direction”, Gonzalez-Bellido et al 2013
- “Networks of Spiking Neurons: The Third Generation of Neural Network Models”, 1997
- “Delivering Real-time AI in the Palm of Your Hand”
- Wikipedia
- Miscellaneous
- Link Bibliography
Neural nets are extremely ‘overparameterized’ in the sense that they have orders of magnitude more parameters than necessary to solve the problems they are trained on, as demonstrated both by the regular improvements in training smaller/faster but still performant networks and by directly creating smaller neural nets with similar or identical performance on those problems. The major techniques are deleting parameters (pruning), reducing the precision of the numeric encoding (quantization), and training a smaller network from scratch under the guidance of the original large network (distillation).
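(To make the first two techniques concrete, here is a minimal sketch using PyTorch’s built-in pruning & quantization utilities; the toy 2-layer model, the 90% sparsity level, and the int8 target are illustrative assumptions rather than any particular paper’s recipe.)

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in for some already-trained network (hypothetical sizes).
model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))

# Pruning: zero out the 90% of weights with the smallest absolute magnitude.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Quantization: store Linear weights as int8 instead of float32 (~4× smaller),
# dequantizing on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
```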
Mysteriously, these smaller networks typically cannot be trained from scratch; performance gains can be obtained without the original data; models can be trained to imitate themselves in self-distillation; despite all of this suggesting that overfitting ought to be a major concern, they generalize well; and many of these smaller networks are in some sense already present in the original neural network. This is frequently taken to indicate a blessing of scale: large NNs have smoother loss landscapes, which simple optimizers can successfully traverse to good optima no matter how hard the problem, as compared to smaller networks, which may wind up ‘trapped’ at a bad place with no free parameters to let them slip around obstacles and find some way to improve (much less the loss landscapes of equivalently powerful but extremely brittle encodings such as Brainf—k or assembler programs). As well as their great theoretical interest (how can we train these small models directly? what does this tell us about how NNs work?), such smaller NNs are critical to practical real-world deployment to servers & smartphones at scale and to the design of accelerator hardware supporting reduced-precision operations, and are also an interesting case of capability growth for AI risk: as soon as any NN exists which can achieve performance goal X, it is likely that a much more efficient NN (potentially orders of magnitude smaller or faster) can be created to achieve X thereafter. (These are merely one way that your software can be much faster.)
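(Similarly, a minimal sketch of the distillation idea, in the style of the standard Hinton-style soft-target approach; the temperature and mixing weight are arbitrary illustrative values. The small ‘student’ is trained to match the softened output distribution of the large ‘teacher’ rather than only the hard labels; using the same model as its own teacher gives self-distillation.)

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss (T & alpha are illustrative, not tuned)."""
    # Soft targets: KL divergence between the softened teacher & student distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# In the training loop, the teacher runs frozen:
#   with torch.no_grad():
#       teacher_logits = teacher(x)
#   loss = distillation_loss(student(x), teacher_logits, y)
```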
Below are some examples of NNs being compressed in size or FLOPs by anywhere from 50% to ~17,000% (an incomplete bibliography, merely papers I have noted during my reading).
See Also
Links
“MUX-PLMs: Pre-training Language Models With Data Multiplexing”, Murahari et al 2023
“MUX-PLMs: Pre-training Language Models with Data Multiplexing”, 2023-02-24
“DataMUX: Data Multiplexing for Neural Networks”, Et Al 2023
“DataMUX: Data Multiplexing for Neural Networks”, 2023-01-13
“Noise Transforms Feed-Forward Networks into Sparse Coding Networks”, 2022
“Noise Transforms Feed-Forward Networks into Sparse Coding Networks”, 2022-09-29
“Exploring Low Rank Training of Deep Neural Networks”, Et Al 2022
“Exploring Low Rank Training of Deep Neural Networks”, 2022-09-27
“Monolith: Real Time Recommendation System With Collisionless Embedding Table”, Et Al 2022
“Monolith: Real Time Recommendation System With Collisionless Embedding Table”, 2022-09-16
“More ConvNets in the 2020s: Scaling up Kernels Beyond 51×51 Using Sparsity (SLaK)”, Et Al 2022
“More ConvNets in the 2020s: Scaling up Kernels Beyond 51×51 using Sparsity (SLaK)”, 2022-07-07
“Building Machine Translation Systems for the Next Thousand Languages”, Et Al 2022
“Building Machine Translation Systems for the Next Thousand Languages”, 2022-05-09
“Monarch: Expressive Structured Matrices for Efficient and Accurate Training”, Et Al 2022
“Monarch: Expressive Structured Matrices for Efficient and Accurate Training”, 2022-04-01
“NeuPL: Neural Population Learning”, Liu et al 2022
“NeuPL: Neural Population Learning”, 2022-02-15
“Datamodels: Predicting Predictions from Training Data”, Et Al 2022
“Datamodels: Predicting Predictions from Training Data”, 2022-02-01
“Spiking Neural Networks and Their Applications: A Review”, Et Al 2022
“Spiking Neural Networks and Their Applications: A Review”, 2022
“Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters”, Et Al 2021
“Persia: An Open, Hybrid System Scaling Deep Learning-based Recommenders up to 100 Trillion Parameters”, 2021-11-10
“EvilModel: Hiding Malware Inside of Neural Network Models”, Et Al 2021
“EvilModel: Hiding Malware Inside of Neural Network Models”, 2021-07-19
“LoRA: Low-Rank Adaptation of Large Language Models”, Hu et al 2021
“LoRA: Low-Rank Adaptation of Large Language Models”, 2021-06-17
“Clusterability in Neural Networks”, Et Al 2021
“Clusterability in Neural Networks”, 2021-03-04
“Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks”, Et Al 2021
“Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks”, 2021-01-31
“Scaling down Deep Learning”, 2020
“Scaling down Deep Learning”, 2020-12-01
“Extreme Model Compression for On-device Natural Language Understanding”, Et Al 2020
“Extreme Model Compression for On-device Natural Language Understanding”, 2020-11-30
“Neural Arithmetic Units”, 2020
“Neural Arithmetic Units”, 2020-01-14
“Learning to Seek: Autonomous Source Seeking With Deep Reinforcement Learning Onboard a Nano Drone Microcontroller”, Et Al 2019
“Learning to Seek: Autonomous Source Seeking with Deep Reinforcement Learning Onboard a Nano Drone Microcontroller”, 2019-09-25
“Weight Agnostic Neural Networks”, 2019
“Weight Agnostic Neural Networks”, 2019-06-11
“StyleNAS: An Empirical Study of Neural Architecture Search to Uncover Surprisingly Fast End-to-End Universal Style Transfer Networks”, An Et Al 2019
“StyleNAS: An Empirical Study of Neural Architecture Search to Uncover Surprisingly Fast End-to-End Universal Style Transfer Networks”, 2019-06-06
“EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”, 2019
“EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”, 2019-05-28
“Superposition of Many Models into One”, Et Al 2019
“Superposition of many models into one”, 2019-02-14
“Playing Atari With Six Neurons”, Et Al 2018
“Playing Atari with Six Neurons”, 2018-06-04
“Measuring the Intrinsic Dimension of Objective Landscapes”, Et Al 2018
“Measuring the Intrinsic Dimension of Objective Landscapes”, 2018-04-24
“SqueezeNext: Hardware-Aware Neural Network Design”, Et Al 2018
“SqueezeNext: Hardware-Aware Neural Network Design”, 2018-03-23
“Wide Compression: Tensor Ring Nets”, Et Al 2018
“Wide Compression: Tensor Ring Nets”, 2018-02-25
“Intriguing Properties of Randomly Weighted Networks: Generalizing While Learning Next to Nothing”, 2018
“Intriguing Properties of Randomly Weighted Networks: Generalizing while Learning Next to Nothing”, 2018-01-25
“Fix Your Classifier: the Marginal Value of Training the Last Weight Layer”, Et Al 2018
“Fix your classifier: the marginal value of training the last weight layer”, 2018-01-14
“Learning Compact Recurrent Neural Networks With Block-Term Tensor Decomposition”, Et Al 2017
“Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition”, 2017-12-14
“XUnit: Learning a Spatial Activation Function for Efficient Image Restoration”, Et Al 2017
“xUnit: Learning a Spatial Activation Function for Efficient Image Restoration”, 2017-11-17
“Natural Language Processing With Small Feed-Forward Networks”, Et Al 2017
“Natural Language Processing with Small Feed-Forward Networks”, 2017-08-01
“ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices”, Et Al 2017
“ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices”, 2017-07-04
“Shake-Shake Regularization of 3-branch Residual Networks”, 2017
“Shake-Shake regularization of 3-branch residual networks”, 2017-03-15
“Bonsai: Resource-efficient Machine Learning in 2KB RAM for the Internet of Things”, Kumar et al 2017
“Tensorizing Neural Networks”, Et Al 2015
“Tensorizing Neural Networks”, 2015-09-22
“Eight Pairs of Descending Visual Neurons in the Dragonfly Give Wing Motor Centers Accurate Population Vector of Prey Direction”, Gonzalez-Bellido et al 2013
“Eight pairs of descending visual neurons in the dragonfly give wing motor centers accurate population vector of prey direction”, 2013-01-08
“Networks of Spiking Neurons: The Third Generation of Neural Network Models”, 1997
“Networks of spiking neurons: The third generation of neural network models”, 1997-12
“Delivering Real-time AI in the Palm of Your Hand”
Wikipedia
Miscellaneous
- 2018-cheng.pdf (2018)
- http://www.mitpressjournals.org/doi/pdf/10.1162/neco_a_00990
- https://ai.facebook.com/blog/a-highly-efficient-real-time-text-to-speech-system-deployed-on-cpus/
- https://ai.googleblog.com/2018/05/custom-on-device-ml-models.html
- https://ai.googleblog.com/2019/03/an-all-neural-on-device-speech.html
- https://ai.googleblog.com/2021/10/grammar-correction-as-you-type-on-pixel.html
- https://ai.googleblog.com/2022/03/auto-generated-summaries-in-google-docs.html
- https://ai.googleblog.com/2022/08/efficient-sequence-modeling-for-on.html
- https://blog.roblox.com/2020/05/scaled-bert-serve-1-billion-daily-requests-cpus/
- https://blog.tensorflow.org/2020/03/higher-accuracy-on-vision-models-with-efficientnet-lite.html
- https://neuralmagic.com/blog/bert-large-prune-once-for-distilbert-inference-performance/
Link Bibliography
- https://arxiv.org/abs/2302.12441: “MUX-PLMs: Pre-training Language Models With Data Multiplexing”, Vishvak Murahari, Ameet Deshpande, Carlos E. Jimenez, Izhak Shafran, Mingqiu Wang, Yuan Cao, Karthik Narasimhan
- https://arxiv.org/abs/2207.03620: “More ConvNets in the 2020s: Scaling up Kernels Beyond 51×51 Using Sparsity (SLaK)”
- https://arxiv.org/abs/2205.03983#google: “Building Machine Translation Systems for the Next Thousand Languages”
- https://arxiv.org/abs/2204.00595: “Monarch: Expressive Structured Matrices for Efficient and Accurate Training”
- https://arxiv.org/abs/2202.07415#deepmind: “NeuPL: Neural Population Learning”, Siqi Liu, Luke Marris, Daniel Hennes, Josh Merel, Nicolas Heess, Thore Graepel
- https://arxiv.org/abs/2106.09685: “LoRA: Low-Rank Adaptation of Large Language Models”, Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
- https://greydanus.github.io/2020/12/01/scaling-down/: “Scaling down Deep Learning”, Sam Greydanus
- https://arxiv.org/abs/1905.11946#google: “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”, Mingxing Tan, Quoc V. Le