"Contrastive Search Is What You Need For Neural Text Generation", 2022-10-25:
[blog+demo/code] Generating text with autoregressive language models (LMs) is of great importance to many natural language processing (NLP) applications. Previous solutions for this task often produce text that contains degenerative expressions or lacks semantic consistency. Recently, Su et al 2022 introduced a new decoding method, contrastive search, based on the isotropic representation space of the language model, and obtained new state-of-the-art results on various benchmarks. Additionally, Su et al 2022 argued that the representations of autoregressive LMs (eg. GPT-2) are intrinsically anisotropic [ie. their representations reside in a narrow subset of the entire space], a view also shared by previous studies. Therefore, to ensure the language model follows an isotropic distribution, Su et al 2022 proposed a contrastive learning scheme, SimCTG, which calibrates the language model's representations through additional training.
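A minimal NumPy sketch of one decoding step of contrastive search may clarify the method: among the top-k most probable next tokens, it balances model confidence against a "degeneration penalty" (the candidate's maximum cosine similarity to any token already generated). The function name, toy array shapes, and the `alpha`/`k` values here are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def contrastive_search_step(probs, cand_hidden, ctx_hidden, k=4, alpha=0.6):
    """One decoding step of contrastive search (sketch).

    probs:       (V,) next-token probabilities from the LM
    cand_hidden: (V, d) hidden state each candidate token would have
    ctx_hidden:  (T, d) hidden states of the tokens generated so far

    Returns the token id in the top-k set maximizing
        (1 - alpha) * p(v) - alpha * max_cosine(v, context).
    """
    top_k = np.argsort(probs)[::-1][:k]          # candidate set: k most likely tokens
    ctx = ctx_hidden / np.linalg.norm(ctx_hidden, axis=1, keepdims=True)
    best_tok, best_score = None, -np.inf
    for v in top_k:
        h = cand_hidden[v] / np.linalg.norm(cand_hidden[v])
        penalty = np.max(ctx @ h)                # similarity to the most-similar context token
        score = (1 - alpha) * probs[v] - alpha * penalty
        if score > best_score:
            best_tok, best_score = v, score
    return best_tok
```

With `alpha=0` this reduces to greedy decoding; larger `alpha` steers away from tokens whose representations would repeat the context, which is the mechanism that suppresses degenerate repetition.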
In this study, we first answer the question: "Are autoregressive LMs really anisotropic?". To this end, we extensively evaluate the isotropy of LMs across 16 major languages.
Surprisingly, we find that the anisotropy problem only exists in two specific English models, GPT-2-small and GPT-2-medium. All other evaluated LMs are naturally isotropic, in contrast to the conclusions drawn by previous studies.
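A common proxy for this kind of isotropy check is the average pairwise cosine similarity of a model's token representations: near 0 suggests an isotropic space, near 1 an anisotropic "narrow cone". A small self-contained sketch, with synthetic vectors standing in for real LM hidden states:

```python
import numpy as np

def avg_pairwise_cosine(h):
    """Average cosine similarity over all distinct pairs of rows in h.

    h: (n, d) matrix of token representations.
    Values near 0 indicate an isotropic space; values near 1 indicate
    representations collapsed into a narrow region (anisotropy).
    """
    h = h / np.linalg.norm(h, axis=1, keepdims=True)  # unit-normalize rows
    sim = h @ h.T                                     # all pairwise cosines
    n = len(h)
    return sim[~np.eye(n, dtype=bool)].mean()         # drop the diagonal (self-similarity)
```

Random Gaussian vectors in high dimension score near 0 on this statistic, while vectors sharing one dominant direction score near 1, mirroring the isotropic vs. anisotropic distinction made above.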
Based on our findings, we further assess the contrastive search decoding method using off-the-shelf LMs on 4 generation tasks across 16 languages.
Our experimental results demonstrate that contrastive search outperforms previous decoding methods without any additional training. More notably, on 12 out of the 16 evaluated languages, contrastive search performs comparably to humans, as judged by human evaluators.