"LoLCATs: On Low-Rank Linearizing of Large Language Models", 2024-10-14:
Recent works show we can linearize large language models (LLMs), swapping the quadratic attentions of popular Transformer-based LLMs with sub-quadratic analogs such as linear attention, thereby avoiding expensive pretraining costs. However, linearizing LLMs often degrades model quality, still requires training over billions of tokens, and remains limited to smaller 1.3B to 7B LLMs.
We thus propose Low-rank Linear Conversion via Attention Transfer (LoLCATs), a simple two-step method that improves LLM linearizing quality with orders of magnitude less memory and compute. We base these steps on two findings. First, we can replace an LLM's softmax attentions with closely-approximating linear attentions, simply by training the linear attentions to match their softmax counterparts with an output MSE loss ("attention transfer"). Then, this enables adjusting for approximation errors and recovering LLM quality simply with low-rank adaptation (LoRA).
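The attention-transfer objective can be illustrated with a minimal NumPy sketch: compute the softmax attention output and a linear-attention output for the same queries, keys, and values, then take the MSE between them. This is not the paper's implementation; the toy shapes and the ELU+1-style feature map are illustrative assumptions (LoLCATs trains learnable feature maps to minimize this loss).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 16  # toy sequence length and head dimension

Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

def softmax_attention(Q, K, V):
    # Standard quadratic softmax attention: softmax(QK^T / sqrt(d)) V.
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi):
    # Sub-quadratic linear attention with feature map phi:
    #   out = phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1)
    Qf, Kf = phi(Q), phi(K)
    num = Qf @ (Kf.T @ V)              # (n, d) via a d x d intermediate
    den = (Qf @ Kf.sum(axis=0))[:, None]
    return num / den

# Stand-in feature map (ELU(x) + 1, which is strictly positive);
# in LoLCATs the feature map itself is learned.
phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))

# Attention-transfer loss: MSE between linear and softmax outputs.
mse = np.mean((linear_attention(Q, K, V, phi) - softmax_attention(Q, K, V)) ** 2)
print(f"attention-transfer MSE: {mse:.4f}")
```

In the actual method this loss would be minimized over the feature-map parameters, layer by layer, so the swapped-in linear attentions closely track the frozen softmax attentions before any end-to-end fine-tuning.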
LoLCATs improves linearizing quality, training efficiency, and scalability. We reduce the linearizing quality gap and produce state-of-the-art sub-quadratic LLMs from Llama-3 8B and Mistral-7B v0.1, leading to 20+ points of improvement on 5-shot MMLU. Furthermore, LoLCATs does so with only 0.2% of past methods' model parameters and 0.4% of their training tokens.
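The parameter efficiency in the second step comes from LoRA: the pretrained weight stays frozen and only a low-rank correction is trained. A minimal sketch with assumed toy dimensions (the rank, scaling, and shapes here are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r = 64, 64, 8  # toy dims; rank r is much smaller than d

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x, alpha=16.0):
    # y = W x + (alpha / r) * B (A x); only A and B receive gradients.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Fraction of parameters trained relative to full fine-tuning of W:
full, lora = W.size, A.size + B.size
print(lora / full)  # → 0.25 at these toy sizes; far smaller at LLM scale
```

At real LLM widths (e.g. d in the thousands) the same rank gives a far smaller fraction, which is how the method reaches roughly 0.2% of past methods' trained parameters.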
Finally, we apply LoLCATs to create the first linearized 70B and 405B LLMs (50× larger than prior work). When compared with prior approaches under the same compute budgets, LoLCATs improves linearizing quality, closing the gap between linearized and original Llama-3.1 70B and 405B LLMs by 77.8% and 78.1% on 5-shot MMLU.