[Edit: Don’t bother responding to this yet. I need to think this through.]
How do you play “cooperate iff (the opponent cooperates iff I cooperate)” in a GLT?
I’m not sure this question makes sense. Can you give an example?
Does S compute the programmer’s decision using S’s knowledge or only the programmer’s knowledge?
S should take the programmer R’s prior and memories/sensory data at the time of coding, and compute a posterior probability distribution from them (assuming it would do a better job at this than R). It would then use that posterior to compute R’s expected utility for the purpose of computing the optimal GLT. This falls out of the idea that S is trying to approximate what the GLT would be if R had logical omniscience.
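As a toy sketch of that pipeline (everything here — the hypothesis space, `likelihood`, `utility`, the finite `glt_options` — is a hypothetical illustration, not anything specified in this discussion): S performs a Bayesian update on R’s prior given R’s data, then selects whichever candidate GLT maximizes R’s expected utility under that posterior.

```python
# Hypothetical sketch only: a finite hypothesis space and a finite set of
# candidate GLTs stand in for whatever S would actually reason over.

def posterior(prior, likelihood, data):
    """Bayes update: P(h | data) proportional to P(data | h) * P(h)."""
    unnorm = {h: p * likelihood(data, h) for h, p in prior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def optimal_glt(prior, likelihood, data, glt_options, utility):
    """Pick the candidate GLT with highest expected utility under R's posterior."""
    post = posterior(prior, likelihood, data)
    return max(
        glt_options,
        key=lambda g: sum(p * utility(g, h) for h, p in post.items()),
    )
```

The real proposal would of course not have a small explicit hypothesis space; the sketch only shows the order of operations (R’s prior and data in, posterior out, then expected-utility maximization over GLTs).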
Is the programmer supposed to be modeling the opponent AI in sufficient resolution to guess how much the AI knows about the programmer?
No, S will do it.
Does S compute the opponent as if it were modeling only the programmer, or both the programmer and S?
I guess both, but I don’t understand the significance of this question.