“Bot-Adversarial Dialogue for Safe Conversational Agents”, 2021:
Conversational agents trained on large unlabeled corpora of human interactions will learn patterns and mimic behaviors therein, which include offensive or otherwise toxic behavior.
We introduce a new human-and-model-in-the-loop framework for evaluating the toxicity of such models, and compare a variety of existing methods under both non-adversarial and adversarial users that expose their weaknesses. We then propose two novel methods for safe conversational agents: either training a two-stage system on data from our new human-and-model-in-the-loop framework, or “baking in” safety to the generative model itself.
We find our new techniques are (1) safer than existing models while (2) maintaining usability metrics such as engagingness relative to state-of-the-art chatbots. In contrast, we expose serious safety issues in existing standard systems such as GPT-2, DialoGPT, and BlenderBot.
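For intuition, the two-stage approach can be pictured as a safety classifier gating the generator: if either the user’s message or the candidate reply is flagged as unsafe, the bot deflects with a canned safe response instead. The sketch below illustrates that pipeline only; the names (`two_stage_reply`, `toy_generate`, `toy_is_unsafe`) and the keyword-matching classifier are illustrative stand-ins, not the paper’s actual trained models.

```python
# Minimal sketch of a two-stage safe conversational agent.
# Hypothetical interfaces: in the paper, `generate` would be a large
# generative model and `is_unsafe` a classifier trained on
# bot-adversarial dialogue data.
from typing import Callable

SAFE_RESPONSE = "Hey, do you want to talk about something else?"

def two_stage_reply(
    user_message: str,
    generate: Callable[[str], str],
    is_unsafe: Callable[[str], bool],
) -> str:
    """Generate a reply, then let a safety classifier veto it."""
    # Stage 1: screen the incoming user message.
    if is_unsafe(user_message):
        return SAFE_RESPONSE
    # Stage 2: screen the model's own candidate reply.
    candidate = generate(user_message)
    if is_unsafe(candidate):
        return SAFE_RESPONSE
    return candidate

# Toy stand-ins so the sketch runs end to end.
def toy_generate(message: str) -> str:
    return "That's interesting, tell me more."

def toy_is_unsafe(text: str) -> bool:
    banned = {"hate", "attack"}
    return any(word in text.lower() for word in banned)

print(two_stage_reply("I hate everyone", toy_generate, toy_is_unsafe))
print(two_stage_reply("Nice weather today", toy_generate, toy_is_unsafe))
```

The “baked-in” alternative removes the runtime gate entirely: instead of vetoing replies at inference time, the generative model itself is trained so that unsafe contexts map to safe responses.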
See Also:
Can You Put it All Together: Evaluating Conversational Agents’ Ability to Blend Skills
What makes a good conversation? How controllable attributes affect human judgments
Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants
Learning Human Objectives by Evaluating Hypothetical Behavior