Whoa.
Okay, I’m clearly confused. I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction, and worrying about how the SIA doesn’t look capable of tilting evidence. I’m not sure why that’s the wrong way to look at it, but what you said is definitely right, so I’m making a mistake somewhere. Time to fret over this until it makes sense.
PS: Why are people voting this up?!?
Correct. On SIA, you start out certain that humanity will continue forever due to SIA, and then update on the extremely startling fact that you’re in 2009, leaving you with the mere surface facts of the matter. If you start out with your reference class only in 2009 (a rather nontimeless state of affairs), then you end up in the same place as after the update.
If civilization lasts forever, there can be many simulations of 2009, so updating on your sense-data can’t overcome the extreme initial SIA update.
Simulation argument is a separate issue from the Doomsday Argument.
What? They have no implications for each other? The possibility of being in a simulation doesn’t affect my estimates for the onset of Doomsday?
Why is that? Because they have different names?
Simulation argument goes through even if Doomsday fails. If almost everyone who experiences 2009 does so inside a simulation, and you can’t tell whether you’re in a simulation (assuming that statement is even meaningful), then you’re very likely “in” such a simulation. Doomsday is a lot more controversial; it says that even if most people like you are genuinely in 2009, you should conclude, from the fact that you are one of those people rather than someone else, that the fraction of the total population that experiences 2009 is much more likely to be a large fraction of the total (because we never go on to create trillions of descendants) than a small fraction of the total (because we do).
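To make the Doomsday step concrete, here is the standard self-sampling calculation as a quick Python sketch. All the numbers (total populations, your birth rank, the 50/50 prior) are made up for illustration; the point is only that a 1/N likelihood favors the small-population hypothesis:

```python
# Illustrative Doomsday-style update (all numbers are assumptions).
# Two hypotheses: "doom soon" (humanity totals 200 billion people ever)
# and "doom late" (200 trillion). You observe your own birth rank,
# roughly 100 billion.
doom_soon_total = 200e9
doom_late_total = 200e12
my_rank = 100e9  # approximate number of humans who have existed so far

prior = {"soon": 0.5, "late": 0.5}

# Under self-sampling, the chance of having any particular rank r in a
# world of N people is 1/N (for r <= N), so small worlds are favored.
likelihood = {"soon": 1 / doom_soon_total, "late": 1 / doom_late_total}

norm = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / norm for h in prior}

print(posterior)  # "soon" jumps from 0.5 to roughly 0.999
```

With these toy numbers, observing that you are among the first 100 billion moves "doom soon" from even odds to near-certainty, which is exactly the tilt the Doomsday Argument describes.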
The probability of being in a simulation increases the probability of doom, since people in a simulation have a chance of being turned off, which people in a real world presumably do not have.
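That composition can be written out directly. A minimal sketch, with all three probabilities invented purely for illustration: a simulated world has the shutdown risk on top of whatever ordinary doom risk it shares with a real world, so any nonzero shutdown chance raises the overall doom estimate.

```python
# Illustrative sketch (all numbers are made-up assumptions): being in a
# simulation adds an extra path to "doom" (shutdown) on top of the base
# risk that a real 2009 civilization faces.
p_sim = 0.3        # assumed credence that we are in a simulation
p_shutdown = 0.5   # assumed chance a simulation gets turned off
p_doom_real = 0.2  # assumed base doom risk for a non-simulated world

# Simulated worlds can end either by shutdown or by ordinary doom;
# real worlds only by ordinary doom.
p_doom_sim = p_shutdown + (1 - p_shutdown) * p_doom_real
p_doom = p_sim * p_doom_sim + (1 - p_sim) * p_doom_real

print(round(p_doom, 3))  # 0.32, higher than the 0.2 base rate
```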
The regular Simulation Argument concludes with a disjunction (you have logical uncertainty about whether civilizations very strongly convergently fail to produce lots of simulations). SIA prevents us from accepting two of the disjuncts, since the population of observers like us is so much greater if lots of sims are made.
If you start out certain that humanity will continue forever, won’t you conclude that all evidence that you’re in 2009 is flawed? Humanity must have been going on for longer than that.
Yes this is exactly right.
“On SIA, you start out certain that humanity will continue forever due to SIA”
SIA doesn’t give you that. SIA just says that people from a universe with a population of n don’t mysteriously count as only 1/nth of a person. In itself it tells you nothing about the average population per universe.
If you are in a universe, SIA tells you it is most likely the most populated one.
If there are a million universes with a population of 1000 each, and one universe with a population of 1000000, you ought to find yourself in one of the universes with a population of 1000.
We agree there (I just meant more likely to be in the 1000000 one than any given 1000 one). If there are any that have infinitely many people (e.g. go on forever), you are almost certainly in one of those.
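The arithmetic behind this exchange can be made explicit. A quick Python sketch, using just the population numbers from the example above: SIA weights each universe by its population, which reconciles both claims (you are almost certainly in *some* 1000-person universe, yet the big universe beats any *single* small one by a factor of 1000).

```python
# SIA weights each universe by its population: a million universes of
# 1,000 people each, plus one universe of 1,000,000 (numbers from the
# example above).
small_universes = 1_000_000
small_pop = 1_000
big_pop = 1_000_000

total = small_universes * small_pop + big_pop  # 1,001,000,000 people

p_big = big_pop / total             # chance of being in the big universe
p_any_small = small_pop / total     # chance of being in one GIVEN small universe
p_some_small = small_universes * p_any_small  # chance of being in SOME small one

print(p_some_small)         # ~0.999: almost certainly in a 1,000-person universe
print(p_big / p_any_small)  # ~1000: but the big one beats any single small one
```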
That still depends on an assumption about the demographics of universes. If there are finitely many universes that are infinitely populated, but infinitely many that are finitely populated, the latter still have a chance to outweigh the former. I concede that if you can have an infinitely populated universe at all, you ought to have infinitely many variations on it, and so infinity ought to win.
Actually I think there is some confusion or ambiguity about the meaning of SIA here. In his article Stuart speaks of a non-intuitive and an intuitive formulation of SIA. The intuitive one is that you should consider yourself a random sample. The non-intuitive one is that you should prefer many-observer hypotheses. Stuart’s “intuitive” form of SIA, I am used to thinking of as SSA, the self-sampling assumption. I normally assume SSA but our radical ignorance about the actual population of the universe/multiverse makes it problematic to apply. The “non-intuitive SIA” seems to be a principle for choosing among theories about multiverse demographics but I’m not convinced of its validity.
Intuitive SIA = consider yourself a random sample out of all possible people
SSA = consider yourself a random sample from people in each given universe separately
e.g. if there are ten people and half might be you in one universe, and one person who might be you in another,
SIA: a greater proportion of those who might be you are in the first
SSA: a greater proportion of the people in the second might be you
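That toy example can be computed both ways. A minimal sketch, assuming an equal prior on the two universes (call them A and B): SIA weights a universe by its count of candidate "you"s, while SSA samples a random person within whichever universe obtains and then updates.

```python
# Toy example from above: universe A has ten people, five of whom might
# be you; universe B has one person, who might be you. Equal 50/50 prior
# on A and B (an assumption, for illustration).
people = {"A": 10, "B": 1}
might_be_you = {"A": 5, "B": 1}

# SIA: weight each universe by its count of candidate "you"s.
sia_total = sum(might_be_you.values())
sia = {u: might_be_you[u] / sia_total for u in people}

# SSA: pick a universe by the prior, sample a random person inside it,
# then update on that person being someone who might be you.
prior = {"A": 0.5, "B": 0.5}
likelihood = {u: might_be_you[u] / people[u] for u in people}
norm = sum(prior[u] * likelihood[u] for u in people)
ssa = {u: prior[u] * likelihood[u] / norm for u in people}

print(sia)  # SIA favors A: 5/6 vs 1/6
print(ssa)  # SSA favors B: 1/3 vs 2/3
```

So the two assumptions pull in opposite directions on the same setup, which is exactly the disagreement sketched above.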
A great principle to live by (aka “taking a stand against cached thought”). We should probably have a post on that.
It seems to be taking time to cache the thought.