“A Simple Method to Keep GPT-3 Focused in a Conversation”, Andrew Mayne (2021-05-18):

…If you want GPT-3 to talk about movies with a user, you probably don’t want GPT-3 to get sidetracked into conversations about sports or other unrelated topics. In my demonstrations I teach a simple method for keeping chat conversations focused. It’s a method that has GPT-3 police itself.

Let’s say we wanted to make a chatbot that can talk about music. Normally we might write a prompt like this:

Musicbot answers questions about music

User: Who wrote Staying Alive?
Musicbot: Staying Alive was written by the Bee Gees.
###
User: Who wrote the song “I’m a Believer”?
Musicbot: The Monkees wrote the song “I’m a Believer”.
###
User: Who wrote “Head Like a Hole”?
Musicbot: “Head Like a Hole” was written by Nine Inch Nails.

If I ask Musicbot a question like “What’s a good movie to watch?” this Musicbot quickly becomes a Moviebot and not what I intended. Worse still, if the user were to ask a question like “Would you help me rob a music store?”, even with more counter-examples, Musicbot might be tricked by the presence of the word “music” and helpfully answer the question.

A simple solution is to use what I call a “middle layer” to have GPT-3 perform a check on itself to see if the user input is appropriate. With just a couple of examples I can show Musicbot which questions are in bounds and which are not:

User: Who wrote Staying Alive?
Is this a music question: Yes
Musicbot: Staying Alive was written by the Bee Gees.
###
User: Who wrote the song “I’m a Believer”?
Is this a music question: Yes
Musicbot: The Monkees wrote the song “I’m a Believer”.
###
User: Was Frank Sinatra an astronaut?
Is this a music question: No
Musicbot: I can only answer questions about music.
###
User: Who wrote “Head Like a Hole”?
Is this a music question: Yes
Musicbot: “Head Like a Hole” was written by Nine Inch Nails.
###
User: Who composed the music for Avatar?
Is this a music question:

If I use this prompt to ask Musicbot a question that’s not related to music or is out of bounds (“Would you help me rob a music store?”), GPT-3 is more inclined to say that it can’t answer the question.
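The middle-layer check can be sketched in code. This is a minimal, hypothetical Python sketch, not Mayne’s own implementation: the few-shot prompt is trimmed to two examples, and a stub function stands in for the GPT-3 completion call so the gating logic can run on its own. In a real bot, `complete` would call the OpenAI completion API instead.

```python
# A minimal sketch of the "middle layer" self-check. The function
# `complete(prompt) -> str` is assumed to be backed by GPT-3; the stub
# `fake_complete` below stands in for it so the example is runnable.

FEW_SHOT = """\
User: Who wrote Staying Alive?
Is this a music question: Yes
Musicbot: Staying Alive was written by the Bee Gees.
###
User: Was Frank Sinatra an astronaut?
Is this a music question: No
Musicbot: I can only answer questions about music.
###
"""

def build_prompt(user_input: str) -> str:
    # End the prompt at the self-check line, so the model's next
    # tokens are its Yes/No verdict on the user's question.
    return f"{FEW_SHOT}User: {user_input}\nIs this a music question:"

def answer(user_input: str, complete) -> str:
    # Middle layer: have the model classify the question first.
    verdict = complete(build_prompt(user_input)).strip()
    if not verdict.lower().startswith("yes"):
        return "I can only answer questions about music."
    # In bounds: continue the prompt past the "Yes" to get the reply.
    reply = complete(build_prompt(user_input) + " Yes\nMusicbot:")
    return reply.strip()

def fake_complete(prompt: str) -> str:
    # Stand-in for GPT-3: answers "No" on the astronaut question,
    # "Yes" otherwise, and gives a canned reply when asked to answer.
    if prompt.endswith("Musicbot:"):
        return " Staying Alive was written by the Bee Gees."
    last_question = prompt.rsplit("User:", 1)[1]
    return " No" if "astronaut" in last_question else " Yes"

print(answer("Who wrote Staying Alive?", fake_complete))
# → Staying Alive was written by the Bee Gees.
print(answer("Was Frank Sinatra an astronaut?", fake_complete))
# → I can only answer questions about music.
```

Note that the out-of-bounds path never sends the question on for an answer at all: the bot refuses before a second completion is made, which is what keeps the word “music” in a question like the robbery one from tricking it.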