The Tuesday-creature might believe that its decision is correlated with the Monday-creature. [...] If the correlation is strong enough and preventing its values from changing is expensive, then the Tuesday-creature is best served by being kind to its Wednesday-self, and helping to put it in a good position to realize whatever its goals may be.
The Tuesday-creature might believe that its decision is correlated with the Monday-creature’s predictions about what the Tuesday-creature would do. [...] If the Monday-creature is a good enough predictor of the Tuesday-creature, then the Tuesday-creature is best served by at least “paying back” the Monday-creature for all of the preparation the Monday-creature did
These both seem like very UDT-style arguments that wouldn't apply to a naive EDT agent once they'd learned how helpful the Monday-creature was?
So based on the rest of this post, I would have expected these motivations to only apply if either (i) the Tuesday-creature was uncertain about whether the Monday-creature had been helpful or not, or (ii) the Tuesday creature cared about not-apparently-real-worlds to a sufficient extent (including because they might think they’re in a simulation). Curious if you disagree with that.
Yes, I think this kind of cooperation would only work for UDT agents (or agents who are uncertain about whether they are in someone’s imagination or whatever).
A reader who isn’t sympathetic to UDT can just skip the whole passage “But there are still options: …”; it’s not essential to the point of the post. It only serves to head off the prospect of a UDT advocate arguing that the agent is being unreasonable by working at cross-purposes to itself (and I should have put this whole discussion in an appendix, or at least much better signposted what was going on).