with some weird caveats: for example, if X started out as CDT, the modified agent will only care about other agents’ decisions made after X self-modified
I’m guessing what matters is not so much time as the causal dependence of other agents’ decisions on the physical event of X’s decision to self-modify. So the improved X still won’t care about its influence on future decisions made by other agents when that influence runs through anything other than X’s self-modification; for example, consider the (future) decisions of other agents that are space-like separated from X’s self-modification.
Even more strangely, I’m guessing that agents who took a snapshot of X just before the self-modification, and can use it to predict X’s future actions, will be treated by X differently depending on whether they respond to observations of the physical events caused by the improved X’s behavior, or to the (identical) inferences drawn from the earlier snapshot.
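The snapshot asymmetry can be sketched with a toy Newcomb-style model. This is my own illustrative construction, not anything from the discussion above: the payoff numbers and the `prediction_depends_on_post_t` flag are assumptions chosen only to make the asymmetry concrete.

```python
# Toy sketch (illustrative assumption, not an established formalism):
# a "son-of-CDT" agent created by self-modification at time T, facing a
# Newcomb-like predictor. If the prediction causally depends on events
# after T, the agent treats it as controllable by its current decision;
# if the prediction is derived from a pre-T snapshot of the original X,
# the agent treats it as a fixed background fact and reasons like CDT.

def son_of_cdt_choice(prediction_depends_on_post_t: bool) -> str:
    """Return 'one-box' or 'two-box' in a Newcomb-style game.

    Payoffs: box B holds 1,000,000 iff one-boxing was predicted;
    box A always holds 1,000.
    """
    if prediction_depends_on_post_t:
        # Prediction is causally downstream of the post-modification
        # agent's behavior, so choosing to one-box "controls" it.
        eu_one_box = 1_000_000
        eu_two_box = 1_000  # two-boxing would be predicted, emptying box B
    else:
        # Snapshot-based prediction: treated as an unalterable fact, so
        # the agent averages over the fixed (unknown) prediction and
        # two-boxing dominates by the extra 1,000.
        eu_one_box = 0.5 * 1_000_000
        eu_two_box = 0.5 * 1_000_000 + 1_000
    return "one-box" if eu_one_box > eu_two_box else "two-box"

print(son_of_cdt_choice(True))   # predictor reacts to post-modification events
print(son_of_cdt_choice(False))  # predictor uses the pre-modification snapshot
```

Under these assumptions the agent one-boxes against a predictor that responds to its post-modification behavior, and two-boxes against one using the earlier snapshot, even though the two predictors make identical predictions.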
Do you understand this effect well enough to declare its statement “obviously correct”? I’m not that sure it’s true: it’s built out of an intuitive model of “what CDT cares about”, not out of technical understanding, so I would be interested in an explanation that is easier for me to follow… (See another example in the updated version of the grandparent comment.)
Right, we don’t really understand it yet, but on an informal level it appears valid. I think it’s worth mentioning at a basic level, though it deserves a fuller discussion. Good example.
Correct, of course. I sacrificed a little accuracy for the sake of being easier for a novice to read; is there a sentence that would optimize both?