I haven’t heard of ECL before, so I’m sorry if this comes off as naive, but I’m getting stuck on the intro.
For one, I assume that you care about what happens outside our light cone. But more strongly, I’m looking at values with the following property: If you could have a sufficiently large impact outside our light cone, then the value of taking different actions would be dominated by the impact that those actions had outside our light cone.
The laws of physics as we know them state that we cannot have any impact outside our light cone. Does ECL (or this post) require this to be wrong?
From the summary you linked,
Many ethical theories (in particular most versions of consequentialism) do not consider geographical distance of relevance to moral value. After all, suffering and the frustration of one’s preferences is bad for someone regardless of where (or when) it happens. This principle should apply even when we consider worlds so far away from us that we can never receive any information from there.
...
Multiverse-wide cooperation via superrationality (abbreviation: MSR) is the idea that, if I think about different value systems and their respective priorities in the world, I should not work on the highest priority according to my own values, but on whatever my comparative advantage is amongst all the interventions favored by the value systems of agents interested in multiverse-wide cooperation.
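For what it’s worth, the “comparative advantage” claim in that excerpt is essentially a gains-from-trade argument. Here is a toy version with made-up productivity numbers (my illustration, not the summary’s):

```python
# Toy gains-from-trade illustration; productivity numbers are made up.
# Two value systems, A and B, each caring only about its own cause, where each
# agent happens to be more productive at the other agent's cause.

# productivity[agent][cause] = units of that cause produced per unit of effort
productivity = {
    "A": {"cause_A": 1.0, "cause_B": 3.0},
    "B": {"cause_A": 3.0, "cause_B": 1.0},
}

def value_to(agent, produced):
    """Agent A only values cause_A; agent B only values cause_B."""
    return produced[f"cause_{agent}"]

# No cooperation: each agent works directly on its own top priority.
solo = {
    "cause_A": productivity["A"]["cause_A"],
    "cause_B": productivity["B"]["cause_B"],
}

# Cooperation: each agent works on the cause where it is relatively more
# productive, i.e. A works on cause_B and B works on cause_A.
coop = {
    "cause_A": productivity["B"]["cause_A"],
    "cause_B": productivity["A"]["cause_B"],
}

print("No cooperation:", value_to("A", solo), "for A,", value_to("B", solo), "for B")
print("Cooperation:   ", value_to("A", coop), "for A,", value_to("B", coop), "for B")
# Both value systems go from 1.0 to 3.0 units of what they care about.
```

The evidential part of the argument is then supposed to supply the reason to expect the distant agents to hold up their side of the trade.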
Is the claim (loosely) that we should take actions we think are morally inferior by our own lights because … there might be other intelligent beings outside of our light cone with different preferences? I would want them to act a little more like me, so in turn I will act a little more like them, in a strange game of blind prisoner’s dilemma.
This is obviously hogwash to me, so I want to make sure I understand it before proceeding.
You might want to check out the paper and summary explaining ECL that I linked. In particular, this section of the summary gives a very brief introduction to non-causal decision theory, and motivating evidential decision theory is a significant focus of the first couple of sections of the paper.
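As a very rough illustration of what an evidential (rather than causal) calculation looks like, here is a toy one-shot prisoner’s dilemma in which my choice is treated as evidence about, but not a cause of, a causally disconnected agent’s choice. The payoffs and the correlation value are made up; this is a sketch of the general idea, not the paper’s model:

```python
# Toy one-shot prisoner's dilemma against a causally disconnected agent.
# All numbers are made up for illustration; this is not the model in the paper.

# Payoffs to me, indexed by (my action, their action).
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 4,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def edt_value(my_action, correlation):
    """Evidential expected utility: `correlation` is the assumed probability
    that the other agent chooses the same action I do, treating my choice as
    evidence about theirs rather than a cause of it."""
    other = "D" if my_action == "C" else "C"
    return correlation * PAYOFF[(my_action, my_action)] + (1 - correlation) * PAYOFF[(my_action, other)]

def cdt_value(my_action, p_they_cooperate):
    """Causal expected utility: their action is independent of mine, so
    defection dominates regardless of p_they_cooperate."""
    return p_they_cooperate * PAYOFF[(my_action, "C")] + (1 - p_they_cooperate) * PAYOFF[(my_action, "D")]

rho = 0.9  # assumed correlation between our decision procedures (made up)
print("EDT:", edt_value("C", rho), "(cooperate) vs", edt_value("D", rho), "(defect)")
print("CDT:", cdt_value("C", 0.5), "(cooperate) vs", cdt_value("D", 0.5), "(defect)")
# EDT: 2.7 vs 1.3  -> cooperating looks better if the correlation is high.
# CDT: 1.5 vs 2.5  -> defecting is better no matter what probability is used.
```

With a high assumed correlation, the evidential expected value of cooperating exceeds that of defecting, even though defection still causally dominates; how much correlation it is reasonable to assume is exactly the point of contention.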
I read the summary post, and skimmed the paper. The summary given in the grandparent seems to be basically accurate.
There are several other use cases where cooperation across multiple universes could be beneficial:
1. Resurrection of the Dead: When generating a random mind file, the likelihood of it being a copy of a deceased person is extremely low; more often than not, it will either be noise or a copy of some other individual (see the back-of-the-envelope sketch after this list). However, for any given person, there exists a universe where they have just passed away. Therefore, we could collaborate with these alternate universes: we resurrect their deceased, and they resurrect ours.
2. Alleviating Past Suffering and S-Risks: This concept is similar to the one above but applied to any moment of suffering experienced by an observer. It is contingent on certain assumptions about the nature of personal identity and on theories of indexical uncertainty.
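As a back-of-the-envelope sketch of why “extremely low” is an understatement (the mind-file size used below is a made-up placeholder, not a real estimate):

```python
import math

# Back-of-the-envelope: the probability that a uniformly random n-bit file
# exactly matches one particular deceased person's mind file is 2**-n.
n_bits = 10**15  # made-up placeholder for the size of a mind file in bits

log10_p = -n_bits * math.log10(2)  # log10 of 2**-n
print(f"P(random file matches a specific person) is about 10^({log10_p:.3g})")
# Roughly 10^(-3e14): far too small to ever happen by luck in a single
# universe, which is why the proposal leans on a vast ensemble of universes.
```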