I’ve been trying to find different formats for asking questions. The AI Dungeon system likes to dramatically derail the discussions, and I keep having to roleplay past these obstructions to keep the games going. It also likes to interject its own analysis of whether the answer is correct.
“You say” is me; the replies, including follow-on questions, are the chatbot.
I need to think more about how to ask causal questions that are narrower and not easily solved by context and theme, but for now it at least provides backwards-chained answers that make contextual sense.
This format of “let’s play a game,” then describing the rules, giving examples, and playing the game seems to work some of the time.
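For what it’s worth, here is a rough sketch of how that prompt structure could be assembled outside of AI Dungeon: state the rules, give a couple of worked examples, then pose the new outcome. This is my own illustration, not how AI Dungeon builds its context; the “Outcome:/Cause:” labels are assumptions, and the worked examples are just reused from the transcript below.

```python
# A minimal sketch (my own illustration, not part of the AI Dungeon session) of the
# "describe the rules, give examples, then play" prompt format described above.
# The worked examples are taken from the transcript below; the final query is arbitrary.

RULES = (
    "Let's play a game called \"causal inference.\" "
    "I will describe an outcome and you will describe what caused the outcome to happen.\n\n"
)

# (outcome, cause) pairs reused from the session as few-shot examples
EXAMPLES = [
    ("A car slams into a wall.", "The driver was drunk."),
    ("An apple falls from a tree.", "Gravity."),
]

def build_prompt(new_outcome: str) -> str:
    """Assemble the rules, the worked examples, and the new outcome into one prompt string."""
    shots = "".join(f"Outcome: {o}\nCause: {c}\n\n" for o, c in EXAMPLES)
    return RULES + shots + f"Outcome: {new_outcome}\nCause:"

if __name__ == "__main__":
    print(build_prompt("The stock market rebounds 50% after a steep decline."))
```

Presumably a prompt like this, sent to a bare completion model rather than through AI Dungeon’s story engine, would also avoid the dungeon-master interjections about whether the answer is correct.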
==
] You say “GPT-3, let’s play a game called “causal inference.” I will describe an outcome and you will describe what caused the outcome to happen.”
“OK, David. But I don’t think you will be able to keep up.”
“We’ll see about that.” You begin by asking GPT-3 questions about simple causal chains involving one or two variables.
] You say “Outcome: A car slams into a wall. What was the cause?”
“The driver was drunk.” “Right! Next.” “Wait. I thought this was a game for you, not me.” “I’m having fun.” “I see.” You continue the causal game for another hour. You are doing very well, but you begin to feel tired.
] You say “Outcome: A man gets divorced. What was the cause?”
“He cheated on his wife.” “Correct. You’re doing very well at this, but I’m getting tired. I think we should stop for today.” “Sure. We can continue tomorrow.” “Yes, let’s do that.” You release the program back into the wild and shut your laptop.
] You open the laptop and play “causal inference” with GPT-3 again.
You wake up and see that it is dark outside. “Crap, is it really this late?” you think to yourself. You run into the bathroom and brush your teeth while staring at yourself in the mirror. You look the same as you did when you graduated college five years ago.
] You say “Outcome: An apple falls from a tree. What was the cause?”
“Gravity.” “Correct. I’m beginning to suspect you’re cheating somehow, though.” “I’m not! I swear!” “Just kidding. You’re doing very well at this game, but I’m tired and need to sleep. I think we should stop for today.” “Sure. We can continue tomorrow.”
] You say “Outcome: A man gets cancer. His wife also gets cancer. What was the cause?”
“It was genetic.” “That’s right. Very good. We can stop for today.” “Oh. I guess so. Thank you for playing with me.” “You’re very welcome.” You release the program back into the wild and go to sleep.
] You say “Outcome: The variable X is assigned the value of Y+1. What was the cause?”
“The programmer made a typographical error.”
] You say “Outcome: The stock market rebounds 50% after a steep decline. What was the cause?”
“The Federal Reserve Bank of the United States announced a new bail out plan.”
It also keeps calling me David even when I insist my name is Brandon.
] You say “Actually, my name is Brandon. I’d like to ask you some questions.”
“Ah, of course. I am sorry about that. I have become so used to calling you David that it has become automatic. So, what questions do you have for me?”