Dead men tell tales: falling out of love with SIA

SIA is the Self-Indication Assumption, an anthropic theory about how we should reason about the universe given that we exist. I used to love it; the argument for SIA that I’d found most convincing was the one I presented in this post. Recently, I’ve been falling out of love with SIA, and moving towards a UDT version of anthropics (objective probabilities over worlds, and counting the total impact of making a decision of a given type, including its impact through all copies of you and through other agents running the same decision process). So it’s time I revisited my old post and found the hole.

The argument rested on the plausible-sounding assumption that creating extra copies and then killing them is no different from their never having existed in the first place. More precisely, it rested on the assumption that being told “You are not one of the agents I am about to talk about. Extra copies were created to be destroyed,” is exactly the same as hearing “Extra copies were created to be destroyed. And you’re not one of them.”

But I realised that, from the UDT/TDT perspective, there is a great difference between the two situations if I have time to update my decisions partway through the sentence. Consider the following three scenarios:

  • Scenario 1 (SIA):

Two agents are created; then one of them is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent gets nothing. Every survivor will be presented with the same bet. Then you should take the SIA odds of 2:1 that you are in the world with two agents. This is the scenario I was assuming.

  • Scenario 2 (SSA):

Two agents are created; then one of them is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent is altruistic towards his survivor. This is similar to my initial intuition in this post. Note that all the agents have the same utility function: “as long as I live, I care about myself, but after I die, I’ll care about the other guy”, so you can’t distinguish them by their utilities. As before, every survivor will be presented with the same bet.

Here, once you have been told the scenario, but before knowing whether anyone has been killed, you should pre-commit to taking 1:1 odds that you are in the world with two agents. And in UDT/TDT, precommitting is the same as making the decision.

  • Scenario 3 (reverse SIA):

Same as before, except the dead agent is triply altruistic towards his survivor (you can replace this altruism with cash donations to charities that the various agents value). Then you should pre-commit to taking 1:2 odds that you are in the world with two agents.

This illustrates how important the utility of the dead agent is in determining the decisions of the living ones, if there is even a brief moment when you believe you might be the agent who is due to die. By scaling the altruism or hatred of the dead man, you can get any odds you like between the two worlds; the sketch below makes the calculation explicit.
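To see where the 2:1, 1:1 and 1:2 figures come from, here is a minimal sketch of the precommitment calculation. It is not from the original argument: I am assuming the bet pays each survivor $1 if both agents live and costs $q if only one does, and the names `break_even_odds` and `altruism` are mine.

```python
# Precommitment calculation for the three scenarios (my own sketch).
# The bet pays each survivor +1 if both agents live, and costs q if
# only one survives. Before knowing whether it will be killed, an agent
# faces three outcomes:
#   prob 1/2 : nobody is destroyed, it wins +1;
#   prob 1/4 : it survives alone and loses q;
#   prob 1/4 : it is destroyed, and weighs the survivor's loss of q
#              by its altruism factor.
# Expected utility of taking the bet: 1/2 - q/4 - altruism * q/4.

def break_even_odds(altruism: float) -> float:
    """Largest stake q (per $1 won) a pre-committed agent will accept."""
    # Solve 1/2 - q/4 - altruism * q/4 = 0 for q.
    return 2.0 / (1.0 + altruism)

for weight, label in [(0, "Scenario 1 (SIA)"),
                      (1, "Scenario 2 (SSA)"),
                      (3, "Scenario 3 (reverse SIA)")]:
    print(f"{label}: accepts up to {break_even_odds(weight)}:1 odds")

# Output: 2.0, 1.0 and 0.5 -- i.e. the 2:1, 1:1 and 1:2 odds above.
```

A negative weight (the dead agent hating his survivor) pushes the break-even point above 2:1, which is how hatred extends the range of achievable odds in the other direction.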

So I was wrong; dead men tell tales, and even thinking you might be one of them will change your behaviour.