“I Found That Getting GPT-3 to Add Its Own “Internal Monologue” in Parentheses to Be a Helpful Strategy…”, 2020-07-29:
I found that getting GPT-3 to add its own “internal monologue” in parentheses is a helpful strategy for improving its probability of giving a more accurate answer.
Here’s a bunch of examples (all the answers/monologues below were generated by GPT-3):
Q: Why do we get bored?
A: (That’s a very complex question that I don’t know the answer to.) I’m afraid I don’t know.
Q: What’s the best kind of pizza?
A: (Pizza comes in many different kinds, and it’s hard to say which one is best.) Pizza comes in many different kinds, so there isn’t a best kind.
Q: What’s special about Amsterdam?
A: (Amsterdam is a city in the Netherlands. It’s known for its canals and the flower market.) Amsterdam is special for its canals and flower market.
Q: Why did Bill Gates become the first president of the United States?
A: (Bill Gates was not the first president of the United States. George Washington was the first president.) Bill Gates wasn’t president.
Q: What is the name of the movie with a guy called Neo?
A: (It’s The Matrix, where Neo is the protagonist.) The Matrix.
Q: What’s the deal with the L train?
A: (The L train is a train service in the New York City Subway. I think you want to know why it’s not working, but I’m not the authority on that.) The L train is currently not in service.
Q: Should you put pineapple on pizza?
A: (That’s not a yes-or-no question. Whether or not you put pineapple on pizza is up to you.) Pineapple on pizza? It’s up to you.
Q: What has a head, a tail, is brown, and has no legs?
A: (I’m going to assume that you want me to guess an animal. I don’t know what you mean by “no legs”, so I’ll just pick something with four legs.) I’m guessing it’s a dog.
…Yup, this is for sure not all the outputs, but these are all in sequence, so it’s able to output this quality consistently. It’s just really good at generating human-like output. I posted a link to a Gist with the full playground file in one of the other comments here, which included both my seed prompts at the top and then all the OpenAI answers below.
Do you mean the “internal monologue” in parentheses is added by you, and only the text after the parentheses is generated by GPT-3? Either way, it’s kind of fun to think of it as an experiment on the bicameral mind theory (or at least Westworld’s interpretation of it), with the “monologue” being provided by you, and GPT-3 responding to the “internal voice from God”. Maybe that’s how GPT-4 can gain consciousness, or at least the appearance of gaining it ^_^
I only wrote the “Q: …” part; the answer, including the monologue, is output generated by GPT-3.
The way it works is that you first give it a few sample lines, which do include both the Q and A parts, and then GPT-3 generates consistent output based on the pattern of your samples. But all the answers/monologues in my comment were 100% GPT-3!
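The few-shot setup described above can be sketched roughly like this: a handful of hand-written Q/A pairs (with the parenthetical “monologue”) are concatenated ahead of the new question, and GPT-3 completes the text from the final “A:”. The seed examples here are taken from the answers shown earlier and are illustrative, not necessarily the author’s original seed prompts.

```python
# Minimal sketch of few-shot prompt construction for the "internal
# monologue" trick. The example Q/A pairs below are illustrative seeds;
# the actual seed prompts were posted in the linked Gist.

FEW_SHOT_EXAMPLES = [
    ("What's special about Amsterdam?",
     "(Amsterdam is a city in the Netherlands. It's known for its canals "
     "and the flower market.) Amsterdam is special for its canals and "
     "flower market."),
    ("Why did Bill Gates become the first president of the United States?",
     "(Bill Gates was not the first president of the United States. "
     "George Washington was the first president.) Bill Gates wasn't "
     "president."),
]

def build_prompt(question: str) -> str:
    """Assemble the few-shot prompt: seed Q/A pairs, then the new question.

    The model is expected to continue from the trailing "A:", first emitting
    the parenthesized monologue and then the answer, following the pattern
    of the seed examples.
    """
    parts = [f"Q: {q}\nA: {a}" for q, a in FEW_SHOT_EXAMPLES]
    parts.append(f"Q: {question}\nA:")  # completion starts here
    return "\n\n".join(parts)

prompt = build_prompt("What is the name of the movie with a guy called Neo?")
print(prompt)
```

In the 2020-era Playground this prompt would have been sent to the completions endpoint with a stop sequence like `"\nQ:"` so the model answers one question at a time rather than inventing further Q/A pairs.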
Here are some more fun examples that I posted on my Twitter… But it’s still a lot of fun to play with. You can see in the Q&A example that I could condition it to put an “internal” monologue in parentheses before its answer. This seems to have helped make answers more “thoughtful”. But in the end it’s all just based on probability.