Player 2 is part of Player 1’s environment. Player 1 calculates their actions in the same way as they calculate the response of the rest of the environment to their actions—by using their model of how the rest of the world behaves.
OK, but typically we’re given no information about how Player 2 thinks and simply told what their utility function is. In other words, our ‘model’ of Player 2 just says “here is a rational actor who likes these outcomes this much.”
Now if we can use that ‘model’ of Player 2 to work out what they’re going to do, then of course we’re in great shape, but that just means we’re solving Player 2’s decision problem. So in order to use ‘NDT’ we first need some (possibly different) decision theory to predict Player 2’s action.
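To make the point concrete, here is a minimal sketch of what “predicting Player 2 means solving Player 2’s decision problem” looks like. The 2×2 game and its payoffs are purely illustrative assumptions, and Player 2 is modeled as a naive expected-utility maximizer:

```python
# Hypothetical 2x2 sequential game: payoffs[p1_action][p2_action] = (u1, u2).
# All numbers here are made up for illustration.
payoffs = {
    "A": {"X": (3, 2), "Y": (1, 1)},
    "B": {"X": (0, 0), "Y": (2, 3)},
}

def predict_player2(p1_action):
    """Player 1's 'model' of Player 2 is just a utility function,
    so predicting Player 2 means solving Player 2's decision problem:
    pick the action that maximizes Player 2's own payoff."""
    return max(payoffs[p1_action], key=lambda a: payoffs[p1_action][a][1])

def choose_player1():
    """Player 1 folds that prediction into their model of the
    'environment' and then maximizes their own payoff."""
    return max(payoffs, key=lambda a: payoffs[a][predict_player2(a)][0])

best = choose_player1()
print(best, predict_player2(best))  # prints "A X" for these payoffs
```

Note that `predict_player2` is itself a (trivial) decision theory applied on Player 2’s behalf; Player 1 cannot get a prediction out of the utility function without running some such procedure.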
Agents may have even less information about other aspects of the world—and may be in an even worse position to make predictions about them. Basically, agents have to decide what to do in the face of considerable uncertainty.
Anyway, this doesn’t seem like a problem with conventional decision theory to me.
Of course it’s not—all that’s happened is that I’ve described a certain procedure that bears a very tenuous relation to ‘decision theory’, and noted that this procedure is unable to do certain things.