Fine, but most people can notice a brain scanner attached to their heads, and would then realize that the game starts at “convince the brain scanner that you will pick one box”. Newcomblike problems reduce to this multi-stage game too.
Brain scanners are a technology that's very straightforward to think about. Humans reading other humans is a lot more complicated.
People have a hard time accepting that Eliezer won the AI box challenge. “Mind reading” and predicting other people's choices is a task of similar difficulty to the AI box challenge.
Let's take contact improvisation as an illustrative example. It's a dance form without hard rules. If I'm dancing contact improvisation with a woman, she expects me to be in a state where I follow the situation and express my intuition. If I'm in that state and that means my arm touches her breast, that's no real problem.
If, on the other hand, I make a conscious decision that I want to touch her breast and act accordingly, I'm likely to creep her out.
There are plenty of people in the contact improvisation field whose awareness of other people is good enough to tell the difference.
Another case where decision frameworks matter is diplomacy. A diplomat is told beforehand how he's supposed to negotiate, and there might be instances where that information leaks.
I don't think this contradicts any of my points. Causal decision theory would never tell the State Department to behave as if leaks were impossible. Yet even though the leak probability is low, I think any diplomatic group that openly published all its internal orders would find itself greatly hampered against others that didn't.
Playing a game against an opponent with an imperfect model of yourself, especially one whose model-building process you understand, does not require a new decision theory.
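To make that concrete, here is a minimal sketch of Newcomb's problem against an imperfect predictor, treated as an ordinary expected-value calculation (the payoff numbers and the accuracy parameter `p` are my own illustrative assumptions, not anything from the thread):

```python
# Newcomb's problem against an imperfect predictor, treated as a plain
# expected-value calculation. Assumptions (mine, for illustration):
# box A always holds $1,000; box B holds $1,000,000 iff the predictor
# predicted one-boxing; the predictor is correct with probability p.

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars for a player the predictor models with accuracy p."""
    if one_box:
        # Correct prediction (prob p): box B is full, player takes only B.
        return p * 1_000_000
    # Correct prediction (prob p): box B is empty, player gets only box A.
    # Wrong prediction (prob 1 - p): player gets both boxes.
    return p * 1_000 + (1 - p) * 1_001_000
```

At p = 0.5 (a coin-flip predictor) two-boxing has the higher expected payoff; above roughly p ≈ 0.5005 one-boxing does. Against an imperfect model whose accuracy you can estimate, the puzzle reduces to this kind of arithmetic rather than demanding a new decision theory.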
It's also possible that the channel through which the diplomatic group communicates internally is completely compromised.