Inequality Penalty: Morality in Many Worlds

This is a summary of Sean Carroll’s musings on when it would matter which model of apparent wave function collapse is the more accurate one.

Executive Summary: If you care about equality, or are even just risk-averse, then your decisions depend on whether you subscribe to the Many Worlds model.

Disclaimer: I am personally MWI-agnostic: the universe is generally weirder than we can conceive, and the resolution of this particular question has evaded us for over 65 years. So whatever the next insight is, odds are it will come bundled with a new, unexpected paradigm. However, Sean Carroll is very much pro-MWI, and his reasons for liking it are very sensible, though he readily admits that new evidence could come up that would refute this particular belief.

Here is a moment of revelation for him from his Mindscape podcast episode with the philosopher Lara Buchak:

0:55:39.7 LB: It's like there are a hundred future possible Seans. What would you rather: giving all the future possible Seans a million dollars, or giving 98 of them a million dollars and giving one of them nothing and one of them $20 million? And whereas the expected utility theorists will say there's a unique answer to that question about how Sean should value his future possible Seans: you should give them all equal weight in decision making. I say, no, actually it's up to you. If you want to put more weight on how things go for the worst-off possible Sean, that's a reasonable way to take the means to your ends. That's a reasonable way to sort of, like, cash out the maxim of "I'm trying to get what I want." On the other hand, if you, as I guess you do, put a lot of weight on the best-off future possible Sean, that's also a reasonable thing to do. In either case, you only have one life to live. Only one of these guys is going to be actual Sean. So it's up to you to think about how much weight to put on each of their interests, knowing that only one of them will be actual.

0:57:01.0 SC: You know, it only now dawns on me, this is very embarrassing, but I have to think about this in the context of the many worlds interpretation of quantum mechanics, which I'm kind of a proponent of. So the whole point of many worlds is that what we think of as probabilities really are actualities (well, quantum probabilities, not just any old probabilities). But if we did our choice making via some quantum random number generator, then yeah... I've always taken the line, this might be a life-changing moment for me, because I've always taken the line that there's no difference in how we think ethically or morally in many worlds versus just a truly stochastic single world.

I think a way to sum this up is: for some people, there is a moral (or at least emotional) difference between taking a gamble with a 98% chance of getting $1M, a 1% chance of getting $20M, and a 1% chance of getting nothing, versus actually creating 100 copies of oneself, one of whom gets nothing while knowing that there is another, luckier version of them who got almost everything without having to work for it.
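To make the numbers concrete, here is a minimal sketch (my own illustration, not anything computed in the episode) comparing the two schemes from the example: everyone gets $1M, versus 98 get $1M, one gets $20M, and one gets nothing. Plain expected value favors the unequal scheme; a worst-off (maximin) evaluation favors the guaranteed million; and a Buchak-style risk-weighted expected utility with an assumed, deliberately risk-averse weighting r(p) = p^2 also favors the guaranteed million. Which of these you use is exactly the "how much weight to put on the worst-off possible Sean" choice.

```python
# Toy comparison of the two schemes from the Buchak example.
# All amounts are in millions of dollars. My own illustration, not from the episode.

equal   = [1.0] * 100                  # every future Sean gets $1M
unequal = [0.0] + [1.0] * 98 + [20.0]  # one gets nothing, one gets $20M

def expected_value(outcomes):
    """Averaging over copies == expected value over a single stochastic life."""
    return sum(outcomes) / len(outcomes)

def worst_off(outcomes):
    """Maximin: only the worst-off copy counts."""
    return min(outcomes)

def risk_weighted(outcomes, r=lambda p: p ** 2):
    """Buchak-style risk-weighted expected utility with linear utility.
    r(p) = p**2 is an assumed, pessimistic risk function for illustration."""
    xs = sorted(outcomes)
    n = len(xs)
    total = xs[0]
    for i in range(1, n):
        p_at_least = (n - i) / n  # chance of doing at least this well
        total += r(p_at_least) * (xs[i] - xs[i - 1])
    return total

for name, f in [("expected value", expected_value),
                ("worst-off copy", worst_off),
                ("risk-weighted EU", risk_weighted)]:
    print(f"{name:17s} equal={f(equal):6.3f}  unequal={f(unequal):6.3f}")
```

Running it gives 1.000 vs 1.180 for expected value, but 1.000 vs 0.000 for the worst-off copy and roughly 1.000 vs 0.982 for the risk-weighted version: the same numbers, ranked differently depending on how much the unluckiest branch counts.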

Here are some (rather long) musings from a recent AMA where this question is revisited:

0:30:15.9 SC: Janice Oyanusfunk says: in Episode 220 with Lara Buchak, when considering from a many-worlds perspective whether you would rather give 100 future possible Seans a million dollars or give 98 of them a million dollars and giving one of them nothing… Sorry, giving… Oh yes, give 98 of them a million, giving one of them nothing and one of them 20 million. You seem to suggest that these different versions of Seans need to be treated like a hundred strangers. While I agree that you are not the same person as the Seans in other branches, all these possible Seans will remember having made that decision for themselves. Don't you think their complicity in the decision changes the moral situation compared to a scenario where you get to distribute money among non-complicit strangers?

0:31:03.1 SC: So I'm not exactly sure what to say here. I mean, I think you're on to something, but I'm not quite sure that it matters in this case. I might be misunderstanding or misreading here, so let me just tell you what my thoughts are. So again, just to be clear, 'cause maybe I read it a little bit too quickly or awkwardly: we're trying to decide between two different ways of distributing money, okay? You have 100 people, give a million dollars each. That's one way of doing it. The other way is you have 100 people, give 98 of them a million, one of them zero and one of them 20 million. So there's more being given away in the second scheme, but it's a little bit more unequal, a little bit less fair, right? 'Cause someone's gonna get nothing. And the question is that I'm treating the different versions of myself like strangers, and I think that the complicity in the decision changes the moral situation. So I'll absolutely confess, I forget what I said in real time in the episode, so they're not… I don't think that strangers is the right way to put it, so I'm just gonna try to say two things now; I'm not gonna necessarily try to fix what I said then.

0:32:15.4 SC: They are different people, and they are people who will never talk to each other, but you're certainly right in that they share memories, right? So the decision that was made, that they need to live with the consequences of, is absolutely a decision that they made. That's very true. So, if the question is, does it matter whether one makes a decision for oneself or for others, in principle, yeah, it absolutely could. I don't think it does very much in this case, so if you… Because look, I don't think that the many worlds thing matters that much in this kind of analysis. I think many worlds is just a distraction. Just think of it in terms of probabilities, and I think it's exactly the same analysis, whatever that analysis is. Okay? So if you say 98 people get a million dollars, one gets 20 million, one gets zero, to me, that's exactly equivalent to saying there is a 98% chance that I will get a million dollars, a 1% chance I get nothing, and a 1% chance I get 20 million.

0:33:22.6 SC: Whatever the answer is in one of those cases, it's the same in the other one. And… I forget what I said. I think that I would… I really don't know, I can see arguments for either way. I'm probably gonna go for the 20 million, that is, the 1% chance of the 20 million. I hope I'm consistent in what I said, but yeah, maybe not. Maybe I've updated my beliefs. A guaranteed one million is nice, but a 1% chance of winning 20 million versus a 1% chance of zero, maybe I go for the 20 million. If I were destitute and poor, maybe I would feel very differently about that, okay? So, certainly in those kinds of questions, I think that if one has the chance to give the people who are getting the reward the ability to choose, rather than me doing the choosing, then yes, you should do that. You should listen to what the people want. So, I guess… And this is one of Lara's points, that it is absolutely okay that different kinds of people have different risk tolerances.

0:34:25.0 SC: So, the point about the question: 100% chance of 1 million versus 98% chance of a million, 1% chance of 20 million, 1% chance of zero… By the way, you could also contrast that with, forget about the people who get a million, they're all just the same: 100% chance of getting a million versus 50% chance of getting 20 million and 50% chance of getting zero, right? That's another comparison you could do. But anyway, Lara's point is, it's okay to have different risk tolerances about this. There's not one unique answer to which you should prefer on the basis of rational choice theory. It is okay to say my preference is not to risk it and go for the 100% guarantee of a million. It is also okay to say, let those dice roll and give me the 50-50 chance of 20 million versus zero. So therefore, yes, if I interpret the question as saying, does it matter that you give people their choice about which bargain to accept?

0:35:36.2 SC: Yes, it does matter a lot, because you know what their… They know what their preferences are. In the case of me doing it with my future selves in the multiverse, then I am doing it, and so that's okay. So, I don't think that any of the future selves would have any right to complain, that's the bottom line, right? As long as I'm making the choice now, and the 100 future selves have to live with the consequences, none of them has a right to complain. And it's exactly the same with a hundred real ones in the multiverse versus a 1% chance of a hypothetical one in a single universe with truly stochastic choices.

Let me try to paraphrase it, probably not doing the above discussion the justice it deserves:

  1. Suppose your moral intuition says that it is bad to create inequality by randomly giving some people more than others, even if no one really ends up worse off when considered in isolation.

  2. Suppose you also believe that probability is actuality distributed over multiple real worlds, not just possible worlds.

  3. Then flipping a coin and giving someone something they want, but only if the coin lands heads, is morally reprehensible, because you create inequality between the version of the recipient that got something and the one that did not (the toy calculation below makes this concrete).
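A toy way to see how far-reaching this can be: under the single-world probability reading, flipping a coin to give someone $100 is an unambiguous improvement (positive expected gain, nobody harmed), while under the both-branches-are-real reading with a strong enough inequality penalty, the same act scores worse than doing nothing, even though neither branch's recipient is worse off than they would otherwise have been. Here is a minimal sketch; the penalized welfare function (mean minus a multiple of the mean absolute gap between branches) is my own stand-in for caring about equality, not anything proposed by Carroll or Buchak.

```python
# Coin-flip gift evaluated two ways. A toy model of my own, not from the discussion.

do_nothing = [0.0, 0.0]    # neither branch's recipient gets anything
flip_gift  = [100.0, 0.0]  # heads branch gets $100, tails branch gets $0

def expected_value(outcomes):
    """Single-world reading: branches are just probabilities."""
    return sum(outcomes) / len(outcomes)

def inequality_penalized(outcomes, alpha=1.5):
    """Many-worlds reading with an inequality penalty:
    welfare = mean - alpha * (mean absolute gap between branches).
    alpha is an assumed knob for how strongly you dislike inequality."""
    n = len(outcomes)
    mean = sum(outcomes) / n
    gap = sum(abs(a - b) for a in outcomes for b in outcomes) / n ** 2
    return mean - alpha * gap

for name, option in [("do nothing", do_nothing), ("flip and gift", flip_gift)]:
    print(f"{name:14s} expected={expected_value(option):7.2f}  "
          f"penalized={inequality_penalized(option):7.2f}")
```

With these numbers, expected value says flip (50 vs 0), while the penalized welfare says do nothing (0 vs -25): any alpha above 1 makes the coin flip look worse than never offering the gift at all, which is exactly the paralyzing flavor of the conclusion.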

If you subscribe to something like that, then the consequences are far-reaching and potentially paralyzing. And if a hypothetical God or some future AGI cares about this, you may get Roko’ed for it in the afterlife simulation, despite your best intentions.