The way I’d phrase it[1] is that the set of all acausal deals made by every civilization with every other civilization potentially has an abstract hierarchical structure, the same way everything else does. Meaning there are commonly recurring low-level patterns and robust emergent high-level dynamics, and you can figure those out (and start following them) without explicitly running full-fidelity simulations of all these other civilizations. Doing so would then, in expectation, yield you a fair percentage of the information you’d get from running said full-fidelity simulations.
This is similar to e. g. how we can use the abstractions of “government”, “culture”, “society” and “economy” to predict the behavior of humans on Earth, without running full-fidelity simulations of each individual person, and still mostly correctly predict the rough shape of their behavior.
I think it’s on-its-face plausible that the acausal “society” is the same. There are some reasons to think there are convergently recurring dynamics (see the boundaries discussion), the space of acausal deals/Tegmark IV probably has a sort of “landscape”/high-level order to it, etc.
(Another frame: instead of running individual full-fidelity simulations of every individual civilization you’re dealing with, you can run a coarse-grained/approximate simulation of the entirety of Tegmark IV, and then use just that to figure out roughly what sorts of deals you should be making.)
Or maybe this is a completely different idea/misinterpretation of the post. I read it years ago and only skimmed it now, so I may be misremembering. Sorry if so.
My take is that, unfortunately, I don’t expect high-level order/useful abstractions for acausal trade in full generality, and I suspect that in the general case you really do need a perfectly accurate, full-blown simulation to get the benefits of acausal trade with arbitrary partners.
I’m also skeptical of boundaries working out in any non-trivial sense, even in practice, once we develop AI that can replace all humans at their jobs. I think boundaries IRL come about because no party or group can penetrate people’s boundaries without them raising hell and damaging/destroying you, and this does not apply very much to AI/human interaction. I’m also much more skeptical than Andrew Critch that boundaries exist at an ontological level (which is my takeaway from the Embedded Agency sequence).
Another area where I’m skeptical is the claim that human morality is best explained by acausal trade, rather than by causal trade: the amount of energy used, plus the fact that humans need other humans to survive, means you actually have to take into account what other beings prefer.
My take is that, unfortunately, I don’t expect high-level order/useful abstractions for acausal trade in full generality
Certainly possible! Some of the stronger arguments for abstraction I have are rooted in this universe’s physics, and while I see some speculative ways to generalize them to Tegmark IV, it’s certainly possible that the things outside are not well-abstractible.
Thankfully, the point is academic, since all the acausal stuff is above our pay grade anyway.
Another area where I’m skeptical is the claim that human morality is best explained by acausal trade
Does the post argue that? My interpretation is that it draws an analogy between acausal trade and moral philosophy, rather than trying to explain the latter via the former. E. g.:
I’m merely saying that among humanity’s collective endeavors thus far, moral philosophy — and to some extent, theology — is what most closely resembles the process of writing down an argument that self-validates on the topic of what {{beings reflecting on what beings are supposed to do}} are supposed to do.