My take is that, unfortunately, I don’t expect high-level order or useful abstractions for acausal trade in full generality; in the general case, I suspect you really do need full-blown, perfectly accurate simulation to get the benefits of acausal trade with arbitrary partners.
I’m also skeptical that boundaries will work out in any non-trivial sense, even in practice, once we develop AI that can replace all humans at their jobs. I think boundaries exist IRL because no party or group can penetrate people’s boundaries without them raising hell and damaging or destroying the intruder, and that dynamic applies much less to AI/human interaction. And I’m much more skeptical than Andrew Critch that boundaries exist at an ontological level (which is my takeaway from the Embedded Agency sequence).
Another area where I’m skeptical is the claim that human morality is best explained by acausal trade, rather than by causal trade: the energy costs involved, plus the fact that humans need other humans to survive, mean you actually have to take into account what other beings prefer.
My take is that, unfortunately, I don’t expect high-level order or useful abstractions for acausal trade in full generality
Certainly possible! Some of the stronger arguments for abstraction I have are rooted in this universe’s physics, and while I see some speculative ways to generalize them to Tegmark IV, it’s certainly possible that the things outside are not well-abstractible.
Thankfully, the point is academic, since all the acausal stuff is above our pay grade anyway.
Another area where I’m skeptical is the claim that human morality is best explained by acausal trade
Does the post argue that? My interpretation is that it draws an analogy between acausal trade and moral philosophy, rather than trying to explain the latter via the former. E.g.:
I’m merely saying that among humanity’s collective endeavors thus far, moral philosophy — and to some extent, theology — is what most closely resembles the process of writing down an argument that self-validates on the topic of what {{beings reflecting on what beings are supposed to do}} are supposed to do.