“SwitchHead: Accelerating Transformers With Mixture-Of-Experts Attention”, Róbert Csordás, Piotr Piękos, Kazuki Irie, Jürgen Schmidhuber (2023-12-13):

The costly self-attention layers in modern Transformers require memory and compute quadratic in the sequence length. Existing approximation methods usually underperform and fail to achieve speedups in practice.

Here we present SwitchHead—a novel method that reduces both compute and memory requirements and achieves wall-clock speedup, while matching the language modeling performance of baseline Transformers with the same parameter budget.

SwitchHead uses Mixture-of-Experts (MoE) layers for the value and output projections and requires 4–8× fewer attention matrices than standard Transformers.
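To make the idea concrete, below is a minimal, illustrative PyTorch sketch (not the released code) of an attention layer whose value and output projections are per-head mixtures of experts, roughly in the spirit of SwitchHead. The class and parameter names (`MoEAttentionSketch`, `n_experts`, `topk`) and the sigmoid router are assumptions for illustration; for clarity the sketch evaluates every expert densely and masks the non-selected ones, so it shows the routing logic but not the compute savings a sparse implementation would deliver.

```python
import torch
from torch import nn


class MoEAttentionSketch(nn.Module):
    """Attention with MoE value/output projections (illustrative, not the paper's code)."""

    def __init__(self, d_model=512, n_heads=2, d_head=64, n_experts=4, topk=2):
        super().__init__()
        self.h, self.e, self.k, self.d_head = n_heads, n_experts, topk, d_head
        # One query/key projection per head: only n_heads attention matrices are
        # computed, which is where the reduction in attention maps comes from.
        self.q = nn.Linear(d_model, n_heads * d_head, bias=False)
        self.key = nn.Linear(d_model, n_heads * d_head, bias=False)
        # Per-head expert banks for the value and output projections.
        self.v_experts = nn.Parameter(0.02 * torch.randn(n_heads, n_experts, d_model, d_head))
        self.o_experts = nn.Parameter(0.02 * torch.randn(n_heads, n_experts, d_head, d_model))
        # Token-wise routers producing per-head, per-expert gate scores.
        self.v_router = nn.Linear(d_model, n_heads * n_experts, bias=False)
        self.o_router = nn.Linear(d_model, n_heads * n_experts, bias=False)

    def _gates(self, router, x):
        # Non-competitive sigmoid scores; keep only the top-k experts per token and head.
        s = torch.sigmoid(router(x)).view(x.shape[0], x.shape[1], self.h, self.e)
        topv, topi = s.topk(self.k, dim=-1)
        return torch.zeros_like(s).scatter_(-1, topi, topv)          # (B, T, H, E)

    def forward(self, x):
        B, T, _ = x.shape
        q = self.q(x).view(B, T, self.h, self.d_head)
        k = self.key(x).view(B, T, self.h, self.d_head)

        # Value projection as a gated mixture of per-head value experts.
        gv = self._gates(self.v_router, x)                            # (B, T, H, E)
        v_all = torch.einsum("btd,hedc->bthec", x, self.v_experts)    # all experts
        v = (gv.unsqueeze(-1) * v_all).sum(dim=3)                     # (B, T, H, d_head)

        # Standard scaled dot-product attention, one matrix per head.
        att = torch.einsum("bthc,bshc->bhts", q, k) / self.d_head ** 0.5
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        att = att.masked_fill(causal, float("-inf")).softmax(dim=-1)
        heads = torch.einsum("bhts,bshc->bthc", att, v)

        # Output projection as a gated mixture of per-head output experts.
        go = self._gates(self.o_router, x)
        o_all = torch.einsum("bthc,hecm->bthem", heads, self.o_experts)
        return (go.unsqueeze(-1) * o_all).sum(dim=3).sum(dim=2)       # (B, T, d_model)


# Usage: shapes only, random weights.
layer = MoEAttentionSketch()
print(layer(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```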

Our novel attention can also be combined with MoE MLP layers, resulting in an efficient fully-MoE “SwitchAll” Transformer model.
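As a rough sketch of how such a combination could look, the block below pairs the `MoEAttentionSketch` from above with a simple top-k-routed MoE feed-forward layer. The expert count, sigmoid router, dense expert evaluation, and block layout are all illustrative assumptions rather than the paper's exact design.

```python
import torch
from torch import nn


class MoEMLPSketch(nn.Module):
    """Top-k-gated MoE feed-forward layer (dense evaluation, for clarity)."""

    def __init__(self, d_model=512, d_ff=1024, n_experts=4, topk=2):
        super().__init__()
        self.k = topk
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.w1 = nn.Parameter(0.02 * torch.randn(n_experts, d_model, d_ff))
        self.w2 = nn.Parameter(0.02 * torch.randn(n_experts, d_ff, d_model))

    def forward(self, x):
        g = torch.sigmoid(self.router(x))                       # (B, T, E)
        topv, topi = g.topk(self.k, dim=-1)
        g = torch.zeros_like(g).scatter_(-1, topi, topv)         # keep top-k gates
        h = torch.relu(torch.einsum("btd,edf->btef", x, self.w1))
        y = torch.einsum("btef,efm->btem", h, self.w2)
        return (g.unsqueeze(-1) * y).sum(dim=2)                  # (B, T, d_model)


class FullyMoEBlockSketch(nn.Module):
    """Pre-norm block combining MoE attention (sketch above) with an MoE MLP."""

    def __init__(self, d_model=512):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.attn = MoEAttentionSketch(d_model)  # defined in the previous sketch
        self.mlp = MoEMLPSketch(d_model)

    def forward(self, x):
        x = x + self.attn(self.ln1(x))
        return x + self.mlp(self.ln2(x))
```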

Our code is public.