Specifically, I’ve heard the claim that AI Safety should consider acausal trades over a Tegmarkian multiverse
Which trades? I don’t think I’ve heard this. I think multiversal acausal trade is fine and valid, but my impression is it’s not important in AI safety.
I think the idea is that our Friendly ASI should cooperate with other agents in order to spread our values of happiness and non-suffering throughout the multiverse. I’m not citing anyone in particular, because I’m not sure who the leading proponents are or if I’m representing them correctly, so perhaps this is hearsay. What do you consider a more important application of multiversal acausal trade?
I have a strong example for simulationism, but I guess that might not be what you’re looking for. Honestly, I’m not sure I know any really important multiversal trade protocols. I think their usefulness is bounded by the generalizability of computation, or by the fact that humans don’t seem to want any weird computational properties? Which isn’t to say that we won’t end up doing any of them, just that it’ll be a thing for superintelligences to think about.
In general, I’m not sure this requires avoiding making your AI CDT to begin with; I think it’ll usually correct its decision theory later on. The Transparent Newcomb / Parfit’s Hitchhiker moment, where it knows it’s no longer being examined by a potential trading partner’s simulation or reasoning and can start to cheat, never comes. There’s no way for a participant to, like, wait for the clones in the other universe to comply and then defect. You never see them comply; you’re in different universes; there’s no time-relation between your actions! You know they only comply if they figure out that it is your nature to comply in kind.
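To make that concrete, here’s a toy sketch (my own illustration, not any established protocol) of the “clique bot” idea from the program-equilibrium literature: each agent’s move is fixed by a logical fact about the other’s decision procedure, not by observing the other’s move, so there is never a later moment at which defection becomes available.

```python
def policy(other_policy):
    """Cooperate iff the counterparty runs this exact decision procedure.

    We compare compiled bytecode rather than observed behaviour: mutual
    cooperation is reached purely by reasoning about what the other
    agent *is*, with no temporal ordering between the two moves.
    """
    if other_policy.__code__.co_code == policy.__code__.co_code:
        return "cooperate"
    return "defect"


def always_defect(other_policy):
    # A non-member of the clique: its code differs, so it gets defected on.
    return "defect"


# Neither agent ever 'sees' the other comply first; each move follows
# from the logical fact of what the other's code is.
print(policy(policy))         # cooperate
print(policy(always_defect))  # defect
```

This is of course far too brittle to be a real protocol (exact code equality stands in for “will figure out that it is my nature to comply in kind”), but it shows why the wait-then-defect move doesn’t exist: there is nothing to wait for.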
I do have one multiversal trade protocol that’s fun to think about though.
It’s possible you overheard me bring it up when I was making a claim about cross-Everett acausal trading between acausal-trade-inclined superintelligences. It’s a thing I’ve chatted about with people before, and I think may have done recently; I’m not sure who originated the idea. It’s a seriously disappointing consolation prize, but if acausal trade is easier to encode than everything else for some reason (which would already be a weird thing to be true), then maybe in some worlds we get strong alignment wins, and those worlds can bargain with the acausal-trade-only pseudo-win worlds, maybe even for enough to keep humans around in a tiny sliver of the universe’s negentropy for a while. Or something.
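For what it’s worth, here’s a toy expected-value sketch of that bargain. Every number is invented purely for illustration; nothing here is a real estimate of branch measures or prices.

```python
# Toy cross-Everett bargain: full-win branches pay a small fraction of
# their resources toward the pseudo-win AIs' values, in exchange for
# those AIs preserving a sliver of negentropy for humans. All numbers
# are made up for illustration.
p_full_win = 0.05     # assumed measure of branches with strong alignment wins
p_pseudo_win = 0.20   # assumed measure of acausal-trade-only pseudo-win branches
payment = 0.002       # fraction of full-win resources paid out in the trade
human_share = 0.001   # sliver of each pseudo-win branch's negentropy bought for humans

# Expected fraction of all branches' resources serving human values:
without_trade = p_full_win * 1.0
with_trade = p_full_win * (1.0 - payment) + p_pseudo_win * human_share

print(f"without trade: {without_trade:.4f}")
print(f"with trade:    {with_trade:.4f}")
# The trade helps (from the human side) iff the sliver bought across
# pseudo-win branches exceeds what the full-win branches pay out:
assert p_pseudo_win * human_share > p_full_win * payment
```

With these made-up numbers the trade is (barely) worth it, which is roughly the “disappointing consolation prize” shape: a tiny gain, and only if the measure of pseudo-win worlds is large relative to the price.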
idk, I’m not really convinced by this argument I’m making. Sometimes I think about acausal trade between noised versions of myself across timelines as a way to give myself a pep talk.