“The Unreasonable Ineffectiveness of the Deeper Layers”, 2024-03-26 ():
We empirically study a simple layer-pruning strategy for popular families of open-weight pretrained LLMs, finding minimal degradation of performance on different question-answering benchmarks until after a large fraction (up to half) of the layers are removed. This is striking because it suggests that many of the deeper layers in these models contribute far less to benchmark performance than their share of the parameters would imply.
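To make the intervention concrete, here is a minimal sketch, assuming a Llama-style model loaded through Hugging Face transformers, of deleting a contiguous block of decoder layers. The checkpoint name and block indices are illustrative placeholders, not the paper's actual choices.

```python
import torch
from torch import nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",   # illustrative Llama-family checkpoint
    torch_dtype=torch.bfloat16,
)

def drop_layer_block(model, start: int, n: int):
    """Delete decoder layers [start, start + n) and reindex the survivors."""
    layers = model.model.layers  # the stack of transformer blocks in Llama-style models
    kept = [layer for i, layer in enumerate(layers) if not (start <= i < start + n)]
    model.model.layers = nn.ModuleList(kept)
    model.config.num_hidden_layers = len(kept)
    # Keep each attention module's self-reported index consistent with its new
    # position (recent transformers versions use it for KV-cache bookkeeping).
    for i, layer in enumerate(model.model.layers):
        if hasattr(layer.self_attn, "layer_idx"):
            layer.self_attn.layer_idx = i
    return model

model = drop_layer_block(model, start=24, n=8)  # e.g. drop 8 of the 32 layers
```

After this surgery the pruned model can be evaluated directly, or healed with a short finetuning run as described next.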
To prune these models, we identify the optimal block of layers to prune by considering similarity across layers; then, to “heal” the damage, we perform a small amount of finetuning. In particular, we use parameter-efficient finetuning (PEFT) methods, specifically quantization and Low-Rank Adapters (QLoRA), such that each of our experiments can be performed on a single A100 GPU. This short healing step recovers much of the accuracy lost to pruning while keeping the overall procedure cheap.
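The block-selection step can be sketched similarly. The snippet below is one plausible reading of “considering similarity across layers”, assuming an angular distance between the hidden state entering a candidate block and the one leaving it, measured on the last token and averaged over a small calibration set; the calibration texts, block size, and checkpoint are placeholders.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
model.eval()

def angular_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Arccos of the cosine similarity, normalized to lie in [0, 1]."""
    cos = torch.nn.functional.cosine_similarity(a.float(), b.float(), dim=-1)
    return torch.arccos(cos.clamp(-1.0, 1.0)) / math.pi

@torch.no_grad()
def score_blocks(texts, n: int) -> torch.Tensor:
    """Mean distance between hidden states at layer l and layer l + n, for every l."""
    num_layers = model.config.num_hidden_layers
    totals = torch.zeros(num_layers - n + 1)
    for text in texts:
        inputs = tok(text, return_tensors="pt", truncation=True, max_length=512)
        out = model(**inputs, output_hidden_states=True)
        # hidden_states[l] is the representation entering layer l; [0, -1, :] keeps
        # only the last token of the (single) sequence in the batch.
        h = [state[0, -1, :] for state in out.hidden_states]
        for l in range(num_layers - n + 1):
            totals[l] += angular_distance(h[l], h[l + n])
    return totals / len(texts)

# Placeholder calibration data; in practice this would be a sample of
# pretraining-style text, not two toy sentences.
calibration = [
    "The capital of France is Paris.",
    "Water boils at 100 degrees Celsius at sea level.",
]
scores = score_blocks(calibration, n=8)
start = int(scores.argmin())
print(f"Cheapest block to remove: layers {start}..{start + 7}")
```

The block with the smallest score changes the residual stream the least and is therefore the cheapest to remove; the healing step would then be a short QLoRA run (4-bit quantized base weights plus trainable low-rank adapters) on the pruned model.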
From a practical perspective, these results suggest that layer-pruning methods can complement other PEFT strategies to further reduce the computational cost of finetuning on the one hand, and to improve the memory footprint and latency of inference on the other. This matters for deploying large models in resource-constrained environments.
From a scientific perspective, the robustness of these LLMs to the deletion of layers implies either that current pretraining methods are not properly leveraging the parameters in the deeper layers of the network, or that the shallow layers play a critical role in storing knowledge. Either way, the result points to concrete follow-up questions about how efficiently current architectures and training recipes use their depth.