Good point, acausal trade can at least ameliorate the problem, pushing towards atomic alignment. However, we understand acausal trade too poorly to be highly confident it will work. And “making acausal trade work” might in itself be considered outside the desiderata of atomic alignment (since it involves multiple AIs). Moreover, there are also actors that have a very low probability of becoming TAI users but whose support is beneficial for TAI projects (e.g. small donors). Since they have no counterfactual AI to bargain on their behalf, acausal trade is less likely to work for them.
Yeah, I basically hope that enough people care about enough other people that some of the wealth ends up trickling down to everyone. Win probability is basically interchangeable with other people caring about you and your resources across the multiverse. Good thing the cosmos is so large.
I don’t think making acausal trade work is that hard. All that is required is:

1. That the winner cares about the counterfactual versions of themselves that didn’t win, or, equivalently, is unsure whether they’re being simulated by another winner. (Huh, one could actually influence this through memetic work today, though messing with people’s preferences like that doesn’t sound friendly.)
2. That they think to simulate alternate winners before those winners would have expanded too far to be simulated.
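Here’s a toy sketch of why even a heavily favored winner might want such a deal ex ante, with numbers I made up (two candidate winners, log utility standing in for risk aversion, a 50/50 payout fraction; none of these parameters come from the discussion above). The two conditions in the list are about enforcing the deal after the fact; this only shows the ex-ante incentive to make it:

```python
# Toy acausal-insurance model between two candidate winners, A and B.
# Illustrative assumptions (mine): exactly one of them ends up controlling
# a fixed pot of resources W; utility is log-shaped (risk averse); honoring
# the trade means the winner hands a fraction f of the pot to the loser.

import math

W = 1.0          # total resources captured by whoever wins
q = 0.9          # probability that A (the heavily favored party) wins
f = 0.5          # fraction the winner pays out if it honors the trade
floor = 1e-6     # tiny consolation payoff for a loser with no trade

def u(x):
    """Risk-averse (log) utility; losing everything is very bad."""
    return math.log(x)

# A's ex-ante expected utility under the two policies.
eu_defect = q * u(W) + (1 - q) * u(floor)            # keep everything
eu_honor  = q * u((1 - f) * W) + (1 - q) * u(f * W)  # honor the trade

print(f"EU(defect) = {eu_defect:.3f}")  # ≈ -1.382, dragged down by losing
print(f"EU(honor)  = {eu_honor:.3f}")   # ≈ -0.693, better even at q = 0.9
```

With these numbers even the 90%-likely winner prefers honoring the trade, because the losing branch dominates under risk aversion; that’s one way to cash out “win probability is basically interchangeable with other people caring about you.”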