"Memory-Assisted Prompt Editing to Improve GPT-3 After Deployment", 2022-01-16:
Large LMs such as GPT-3, while powerful, are not immune to mistakes, but are prohibitively costly to retrain. One failure mode is misinterpreting a user's instruction (e.g., GPT-3 interpreting "What word is similar to good?" as asking for a homonym, while the user intended a synonym). Our goal is to allow users to correct such errors directly through interaction, without retraining.
Our approach pairs GPT-3 with a growing memory of cases where the model misunderstood the user's intent and was provided with feedback clarifying the instruction. Given a new query, our memory-enhanced GPT-3 uses feedback from similar, prior queries to enrich the prompt.
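The loop above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the paper's implementation: the class and function names (`FeedbackMemory`, `build_prompt`), the word-overlap retriever, and the bracketed clarification format are all hypothetical stand-ins for whatever retriever and prompt template the authors actually use.

```python
def overlap(a: str, b: str) -> float:
    """Word-overlap (Jaccard) similarity between two queries.

    A stand-in retriever; the paper's similarity measure may differ.
    """
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))


class FeedbackMemory:
    """Growing store of (misunderstood query, clarifying feedback) pairs."""

    def __init__(self):
        self.entries = []  # list of (query, feedback) tuples

    def add(self, query: str, feedback: str) -> None:
        """Record user feedback for a query the model misunderstood."""
        self.entries.append((query, feedback))

    def lookup(self, query: str, threshold: float = 0.5):
        """Return feedback from the most similar past query, if close enough."""
        best = max(self.entries, key=lambda e: overlap(query, e[0]), default=None)
        if best and overlap(query, best[0]) >= threshold:
            return best[1]
        return None


def build_prompt(query: str, memory: FeedbackMemory) -> str:
    """Enrich the prompt with clarifying feedback from similar past queries."""
    feedback = memory.lookup(query)
    if feedback:
        return f"{query} [clarification: {feedback}]"
    return query


# Example: the user once clarified the "similar to" misunderstanding;
# a later, related query then reuses that feedback in its prompt.
mem = FeedbackMemory()
mem.add("What word is similar to good?",
        "similar-to means with a similar meaning (a synonym), not a homonym")
enriched = build_prompt("What word is similar to happy?", mem)
```

The enriched prompt (rather than the raw query) would then be sent to the deployed model, so past mistakes are not repeated on lookalike queries while unrelated queries pass through unchanged.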
Through simple proof-of-concept experiments, we show how a (simulated) user can interactively teach a deployed GPT-3, doubling its accuracy on basic lexical tasks (e.g., generating a synonym) where users query in different, novel (often misunderstood) ways. In such scenarios, memory helps avoid repeating similar past mistakes.
Our simple idea is a first step towards strengthening deployed models, potentially broadening their utility.
All code and data are available on GitHub.