“Most Language Models Can Be Poets Too: An AI Writing Assistant and Constrained Text Generation Studio”, 2022-10-12:
Despite rapid advancement in the field of Constrained Natural Language Generation, little attention has been paid to language models whose vocabularies have been lexically, semantically, and/or phonetically constrained. We find that most language models generate compelling text even under heavy constraints.
We present a simple and universally applicable technique for modifying the output of a language model by compositionally applying filter functions to the language model’s vocabulary before each unit of text is generated. This approach is plug-and-play and requires no modification to the model. To showcase the value of this technique, we present an easy-to-use AI writing assistant called “Constrained Text Generation Studio” (CTGS). CTGS allows users to generate or choose from text under any combination of a wide variety of constraints, such as banning a particular letter, forcing the generated words to have a certain number of syllables, and/or forcing the words to be partial anagrams of another word.
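The compositional-filtering idea can be sketched in a few lines. The names below (`no_letter`, `max_length`, `filter_vocab`) are hypothetical illustrations, not CTGS’s actual API: each filter is a predicate on a candidate token, and a token survives only if every active filter accepts it. In a real decoder, the surviving tokens keep their logits and all others are masked to negative infinity before sampling.

```python
# Hypothetical sketch of compositional vocabulary filtering.
# Each filter is a predicate over candidate tokens; filters compose
# by conjunction, so adding a filter can only shrink the vocabulary.

def no_letter(letter):
    """Ban tokens containing a given letter (a lipogram constraint)."""
    return lambda token: letter not in token.lower()

def max_length(n):
    """Keep only tokens of at most n characters."""
    return lambda token: len(token) <= n

def filter_vocab(vocab, filters):
    """A token survives only if it passes every filter."""
    return [t for t in vocab if all(f(t) for f in filters)]

vocab = ["the", "cat", "sat", "on", "a", "mat", "elephant"]
allowed = filter_vocab(vocab, [no_letter("e"), max_length(3)])
# "the" and "elephant" contain "e"; "elephant" is also too long.
print(allowed)  # → ['cat', 'sat', 'on', 'a', 'mat']
```

Because the filtering happens on the candidate vocabulary before a token is emitted, the constraint is enforced by construction rather than learned, which is why no fine-tuning is needed.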
We introduce a novel dataset of prose that omits the letter “e”.
We show that our method results in strictly superior performance compared to fine-tuning alone on this dataset.
We also present a HuggingFace “space” web app demonstrating this technique, called Gadsby. The code is publicly available on GitHub.
…We choose GPT-2-medium because of its relatively well-understood fine-tunability. We compare the un-fine-tuned GPT-2 model to the regularly fine-tuned model and the over-fine-tuned model.
…4. Dataset without the “e”:
One of the issues that large language models present for constrained writers is that even when heavily fine-tuned on a particular dataset, they frequently ignore their constraints. For example, poetry models fine-tuned on the works of William Shakespeare frequently stumble and fail to maintain rhyme or meter [6]. We show that language models fine-tuned even on text obeying the simple lexical constraint of omitting the letter “e” still occasionally violate that constraint. In fact, even when these models are over-trained to an absurd degree, complete adherence to the constraint is unlikely.
Such behavior motivates the creation of datasets which embody hard lexical, semantic, or phonetic constraints. With such a dataset, we can measure how often fine-tuned language models violate the constraint, and more importantly, we can show that filtering out constraint-violating tokens before the generation step yields strictly better performance and eliminates these errors entirely.
We present a dataset, called Lipogram-e, which consists of all known complete book-length English works that do not use the letter “e”. This dataset includes all of Gadsby by Ernest Vincent Wright, all of A Void by Georges Perec, and almost all of Eunoia by Christian Bök. We name it “Lipogram-e” because a lipogram is a text whose author omits one or more letters of the alphabet.
While it may be possible to produce a dataset without the letter “e” by simply searching an existing large-scale dataset for sentences that match the constraint, doing so would result in jumbled and incoherent training examples with little relation to each other. By contrast, books and prose written under constraint have clear, coherent narratives. We chose the constraint of banning “e” because it is extremely easy to verify computationally and because there is no potential for error in the filter function.