"Star-Transformer", 2019-02-25:
Although the Transformer has achieved great success on many NLP tasks, its heavy structure with fully-connected attention leads to a dependency on large amounts of training data.
In this paper, we present Star-Transformer, a lightweight alternative obtained by careful sparsification. To reduce model complexity, we replace the fully-connected structure with a star-shaped topology, in which every pair of non-adjacent nodes is connected through a shared relay node. This reduces complexity from quadratic to linear in sequence length, while preserving the capacity to capture both local composition and long-range dependency.
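To make the topology concrete, here is a minimal sketch of the sparse attention mask implied by the star-shaped structure, written in PyTorch. This is an illustration, not the authors' implementation: the actual model updates satellite and relay nodes in alternating rounds rather than with a single masked attention, and the circular wrap-around of the ring connections below is a simplifying assumption.

```python
import torch

def star_attention_mask(n: int, window: int = 1) -> torch.Tensor:
    """Boolean attention mask for n satellite (token) nodes plus one
    relay node at index n. True = attention allowed.

    Each satellite attends to itself, its ring neighbours within
    `window`, and the relay; the relay attends to every node. With a
    fixed window, the number of allowed pairs is O(n) instead of the
    O(n^2) of fully-connected attention.
    """
    size = n + 1                                 # n satellites + 1 relay
    mask = torch.zeros(size, size, dtype=torch.bool)
    idx = torch.arange(n)
    for offset in range(-window, window + 1):
        # Ring connections for local composition; the modulo
        # wrap-around is a simplification for the sketch.
        mask[idx, (idx + offset) % n] = True
    mask[:n, n] = True                           # satellite -> relay (long-range path)
    mask[n, :] = True                            # relay -> all nodes
    return mask

# Example: 6 tokens with window 1. Each row permits only O(1) keys,
# so total attention cost grows linearly with sequence length.
print(star_attention_mask(6).int())
```

Under this mask, any two non-adjacent tokens can still exchange information in two hops through the shared relay node, which is what preserves long-range dependency despite the sparsification.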
Experiments on four tasks (22 datasets) show that Star-Transformer achieves improvements over the standard Transformer on modestly sized datasets.