A core element is that you expect acausal trade to happen among far more intelligent agents, such as AGIs or even ASIs, and that they'll be using approximations.
Problem 1: There isn’t going to be much Darwinian selection pressure against a civilization that can rearrange stars and terraform planets. I’m of the opinion that such pressure has mostly stopped mattering already, and will only matter less over time, as long as we don’t end up in an “everyone has an AI and competes in a race to the bottom” scenario.
I don’t think it is that odd that an ASI could resist selection pressures. It operates on a faster timescale and can apply more intelligent optimization than evolution can toward keeping itself, and whatever civilization it manages, stable.
Problem 2:
I find it somewhat plausible that there are some sufficiently well-pinned-down variables that could get us to a more objective measure. However, I don’t think one is needed, and most presentations of this idea don’t go for an objective distribution.
So, to me, using a UTM that is informed by our own physics and reality is fine. This presumably results in more of a ‘trading nearby’ dynamic: the typical example is trade across branches, but it holds more generally. You have more information about how those nearby universes look anyway.
The downside here is that you’re not trading directly against whatever true distribution there is. But if that is too hard for an ASI in our universe to manage, then presumably many agents elsewhere aren’t managing to acausally trade against the true distribution either.
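The UTM-dependence here can be made concrete with a toy sketch. A Solomonoff-style prior weights each hypothesis (universe) by 2^(-L), where L is its shortest program length on the chosen UTM; swapping the UTM changes the lengths and so shifts the whole distribution. The program lengths below are made-up numbers purely for illustration, not real complexities:

```python
def prior(code_lengths):
    """Normalize 2^(-L) weights over a finite hypothesis set."""
    weights = {h: 2.0 ** -L for h, L in code_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypothetical shortest-program lengths for three universes under two UTMs.
# A UTM "informed by our physics" gives physics-like universes short codes.
utm_generic = {"our_branch": 12, "nearby_branch": 13, "exotic": 11}
utm_physics = {"our_branch": 5,  "nearby_branch": 6,  "exotic": 14}

p_generic = prior(utm_generic)
p_physics = prior(utm_physics)

# Under the physics-informed UTM, universes near ours dominate the measure,
# which is the 'trading nearby' effect described above.
assert p_physics["our_branch"] > p_generic["our_branch"]
```

The point of the sketch is only that the induced measure is relative to the machine: neither column is "the" true distribution, which is why trading against a physics-informed UTM trades nearby rather than against some objective prior.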