I don’t see how the SIA (Self-Indication Assumption) completely refutes the DA (Doomsday Argument).
The SIA shows that a universe with more observers in your reference class is more likely. That reference class is the set actually used when “considering myself as a random observer drawn from the space of all possible observers”; it is not really the space of all possible observers.
How small is this set? Well, if we rely on just the argument given here for the SIA, it’s very small indeed. Suppose the experimenter stipulates an additional rule: he flips a second coin; if it comes up heads, he creates 10^10 extra copies of you; if tails, he does nothing. However, these extra copies are not created inside rooms at all. You know you’re not one of them, because you’re in one of the rooms. The outcome of the second coin flip is made known to you. But it clearly doesn’t influence your bet on the color of your own door, even though it increases the number of observers in your universe 10^8-fold, and even though these extra observers are complete copies of your life up to this point, placed in a different situation from you only in the last second.
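To make the second-coin variation concrete, here is a minimal sketch in Python. It assumes the setup of the original post (100 rooms, 99 blue and 1 red, one observer per room); `EXTRA` is scaled down from 10^10 only to keep the numbers readable. Even with full SIA-style weighting of worlds by observer count, the extra copies drop out of the conditional bet:

```python
from fractions import Fraction

# Assumed setup (from the original post, not restated here): 100 rooms,
# 99 blue and 1 red, one observer per room. On heads, the second coin
# adds EXTRA observers outside any room. EXTRA is reduced from 10**10
# purely for readability; the result below is identical for any value.
EXTRA = 10**4

worlds = {  # outcome of the second coin -> observer counts
    "heads": {"blue": 99, "red": 1, "outside": EXTRA},
    "tails": {"blue": 99, "red": 1, "outside": 0},
}

p_blue_and_in_room = Fraction(0)
p_in_room = Fraction(0)
for counts in worlds.values():
    total = sum(counts.values())
    # SIA: weight each world by its observer count, then treat every
    # observer within a world as equally likely to be "you".
    world_weight = Fraction(1, 2) * total
    p_blue_and_in_room += world_weight * Fraction(counts["blue"], total)
    p_in_room += world_weight * Fraction(counts["blue"] + counts["red"], total)

# Conditioning on "I am in a room" removes the extra copies entirely:
print(p_blue_and_in_room / p_in_room)  # 99/100, whatever EXTRA is
```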
Now the DA can be reformulated: instead of the set of all humans who will ever live, consider the set of all humans (or groups of humans) who would never confuse themselves with one another. Within this set the SIA doesn’t apply (we don’t predict that a bigger set is more likely), but the DA does, because humans from different eras are dissimilar and can be indexed the way the DA requires. To illustrate: I expect that if I were taken at any point in my life and instantly placed at some point of Leonardo da Vinci’s life, I would very quickly realize something was wrong.
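For reference, this is the Bayesian shift the DA relies on, sketched with the usual textbook numbers (they are assumptions for illustration, not figures from this thread):

```python
from fractions import Fraction

# Hedged illustration of the DA update: hypothesis SOON says 2*10**11
# humans ever exist, LATE says 2*10**14; your birth rank is about 10**11.
# Given a hypothesis, your rank is uniform on 1..N, so P(rank | N) = 1/N.
hypotheses = {"soon": 2 * 10**11, "late": 2 * 10**14}
prior = Fraction(1, 2)

unnorm = {h: prior * Fraction(1, n) for h, n in hypotheses.items()}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
print(posterior)  # "soon" comes out 1000 times more likely than "late"

# SIA, where it applies, multiplies each hypothesis's prior by N, which
# cancels the 1/N likelihood and undoes this shift; the point above is
# that within the reformulated reference class that multiplier is absent.
```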
Presumed conclusion: if humanity does not go entirely extinct, expect other humans to grow more and more similar to yourself as time passes, until you survive only in a universe inhabited by a Huge Number of Clones.
It also appears that I should assign a very high probability to a non-Friendly super-intelligent AI destroying the rest of humanity to tile the universe with copies of myself in tiny life-support bubbles, or with simulators running my life up to that point in a loop forever.
Maybe I’m just really tired, but I seem to have grown a blind spot hiding a logical step that must be present in the argument given for the SIA. It doesn’t seem to argue for the SIA at all, just for the right way of guessing whether you’re behind a blue door, independently of the number of observers.
Consider this variation: there are 150 rooms, 149 of them blue and 1 red. In the blue rooms, 50 cats and 99 human clones are created (one occupant per room); in the red room, a single human clone is created. The experiment then proceeds in the usual way (flipping the coin and killing the inhabitants of rooms of a certain color).
The humans will still give a .99 probability of being behind a blue door, and 99 out of 100 equally-probable potential humans will be right. Therefore you are more likely to inhabit a universe shared by humans and cats than a universe containing only humans (the Feline Indication Argument).
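A sketch of the feline variation’s arithmetic, using the room counts above and assuming (as in the original experiment) that you know you are human:

```python
from fractions import Fraction

# The feline variation as described above: 149 blue rooms holding 50 cats
# and 99 human clones, one red room holding a single human clone. You know
# you are human, so the reference class for the bet is the 100 humans.
humans_behind_blue = 99
humans_behind_red = 1
p_blue = Fraction(humans_behind_blue, humans_behind_blue + humans_behind_red)
print(p_blue)  # 99/100 = .99, exactly as in the cat-free version

# The cats never enter the calculation, even though they sit behind
# a third of the blue doors.
```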
If you are told that you are in that situation, then you would assign a probability of 50/51 to being behind a blue door and 1/51 to being behind a red door, because you would not assign any probability to being one of the cats. So you will not give a probability of .99 in this case.
Fixed, thanks. (I didn’t notice at first that I quoted the .99 number.)