The question, of course, is what counts as “isomorphic TDT algorithms” and how the agents figure out whether that’s the case. However, this post appears blissfully free of these problems.
Uh, do you mean “this post wrongly sweeps these problems under the rug”, or “this post sweeps these problems under the rug, and that’s OK”?
Anyway, although we don’t have a coding implementation of any of these decision theories, Eliezer’s description of TDT seems to keep the utility function separate from the causal network.
These problems don’t affect this post, as long as we assume the TDT agents to be suitably identical: the games you consider are all symmetric with respect to permutations of the TDT agents, so superrationality (which TDT agents know how to apply) does the trick.
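To make the symmetry argument concrete, here is a toy sketch (illustrative only, not an implementation of TDT; the payoffs are the usual Prisoner’s Dilemma values): identical deterministic agents in a symmetric game must output the same move, so a superrational agent need only maximize along the diagonal of the payoff table, whereas dominance reasoning picks defection.

```python
# Toy sketch of superrationality in a symmetric game (not a TDT
# implementation; payoffs are the usual Prisoner's Dilemma values).

# Payoff to "me" for (my_move, their_move).
PD_PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def superrational_move(payoff, moves=("C", "D")):
    """Identical deterministic agents must output the same move, so it
    suffices to maximize along the diagonal of the payoff table."""
    return max(moves, key=lambda m: payoff[(m, m)])

def dominant_move(payoff, moves=("C", "D")):
    """For contrast: return a move that is at least as good against
    every fixed opponent move, if one exists (here, defection)."""
    for m in moves:
        if all(payoff[(m, o)] >= payoff[(m2, o)] for o in moves for m2 in moves):
            return m
    return None

print(superrational_move(PD_PAYOFF))  # -> C
print(dominant_move(PD_PAYOFF))       # -> D
```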
(Don’t understand what you intended to communicate by this remark.)
Ah, good.
In retrospect, that remark doesn’t apply to multiplayer games; I was thinking of the way that in Newcomb’s Problem, the Predictor only cares what you choose and doesn’t care about your utility function, so that the only place a TDT agent’s utility function enters into its calculation there is at the very last stage, when summing over outcomes. But that’s not the case for the Prisoner’s Dilemma, it seems.
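That structure is easy to see in a toy expected-utility calculation (the 99% accuracy and the dollar amounts are assumed for illustration): the Predictor’s behavior depends only on the agent’s choice, and the agent’s utility function shows up only in the final sum over outcomes.

```python
# Toy Newcomb's Problem sketch (illustrative numbers; not canonical TDT).
# The Predictor conditions only on the agent's choice; the utility
# function enters only at the last stage, when summing over outcomes.

PREDICTOR_ACCURACY = 0.99  # assumed accuracy of the Predictor

def expected_utility(choice, utility):
    """Expected utility of a choice, summing over the two outcomes
    (opaque box full or empty), weighted by the Predictor's accuracy."""
    # Probability the Predictor filled the opaque box, given our choice.
    p_full = PREDICTOR_ACCURACY if choice == "one-box" else 1 - PREDICTOR_ACCURACY
    if choice == "one-box":
        outcomes = [(p_full, 1_000_000), (1 - p_full, 0)]
    else:  # two-box: opaque box contents plus the visible $1,000
        outcomes = [(p_full, 1_001_000), (1 - p_full, 1_000)]
    # The utility function appears only here, at the final summation.
    return sum(p * utility(money) for p, money in outcomes)

linear = lambda money: money  # utility kept separate until the final sum
print(expected_utility("one-box", linear))  # about 990,000
print(expected_utility("two-box", linear))  # about 11,000
```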
Right. For TDT agents to expect each other to act identically from the symmetry argument, we need to be able to permute not just the TDT agents’ places in the game, but also, simultaneously, their places in one another’s utility functions, without changing the game; this accommodates the differences in the agents’ utility functions.
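The invariance condition in this comment can be sketched as a check on a two-player payoff table (a hypothetical helper, not anything from the post): the game is unchanged by swapping the agents’ positions only if the payoff pairs swap along with the moves.

```python
from itertools import product

def is_symmetric(payoffs, moves):
    """Check that a two-player game is invariant under simultaneously
    swapping the players' positions and their places in the payoff
    (utility) table: payoffs[(a, b)] gives (payoff to player 1,
    payoff to player 2) when player 1 plays a and player 2 plays b."""
    return all(
        payoffs[(a, b)] == tuple(reversed(payoffs[(b, a)]))
        for a, b in product(moves, repeat=2)
    )

# Standard Prisoner's Dilemma: symmetric under the simultaneous swap.
PD = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
print(is_symmetric(PD, ("C", "D")))  # -> True
```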