After the discussion in my previous post I became quite certain that the world can’t work as indicated by SSA (your model), and SIA is by far more likely. If you’re the only person in the world right now, and Omega is about to flip a fair coin and create 100 people in case of heads, then SSA tells you to be 99% sure of tails, while SIA says 50⁄50. There’s just no way SSA is right on this one.
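To put numbers on this, here is a quick toy calculation (my own sketch, not anything from the literature); the hypothesis labels and population counts just encode the example above:

```python
# You exist as person #1; Omega creates 100 extra people iff a fair coin lands heads.
prior = {"heads": 0.5, "tails": 0.5}
population = {"heads": 101, "tails": 1}    # total number of people under each outcome

# SSA: within each hypothesis, the chance of finding yourself to be person #1
# is 1/population; hypotheses are not weighted by how many people they contain.
ssa_unnorm = {h: prior[h] * (1.0 / population[h]) for h in prior}
ssa = {h: v / sum(ssa_unnorm.values()) for h, v in ssa_unnorm.items()}

# SIA: additionally weight each hypothesis by its population ("more observers
# means it is more likely that you exist at all"), which exactly cancels the
# 1/population factor for being person #1.
sia_unnorm = {h: prior[h] * population[h] * (1.0 / population[h]) for h in prior}
sia = {h: v / sum(sia_unnorm.values()) for h, v in sia_unnorm.items()}

print("SSA:", ssa)   # tails ~= 0.99 (exactly 101/102)
print("SIA:", sia)   # heads = tails = 0.5
```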
Bostrom talks about such paradoxes in chapter 9 of his book, then tries really hard to defend SSA, and fails. (You have to read and settle this for yourself. It’s hard to believe Bostrom can fail. I was surprised.)
Also maybe it’ll help if you translate the problem into UDT-speak, “probability as caring”. Believing in SSA means you care about copies of yourself in little worlds much more than about your copies in big worlds. SIA means you care about them equally.
Now might be a good time to mention “full non-indexical conditioning”, which I think is incontestably an advance on SSA and SIA.
To be sure, FNC still faces the severe problem that observer-moments cannot be individuated, leading (for instance) to variations on Sleeping Beauty where tails causes only a ‘partial split’ (like an Ebborian midway through dividing) and the answer is indeterminate. But this is no less of a problem for SSA and SIA than for FNC. The UDT approach of bypassing the ‘Bayesian update’ stage and going straight to the question ‘what should I do?’ is superior.
Neal’s approach (even according to Neal) doesn’t work in Big Worlds, because then every observation occurs at least once. But full non-indexical conditioning tells us with near certainty that we are in a Big World. So if you buy the approach, it immediately tells you with near certainty that you’re in the conditions under which it doesn’t work.
Sure, that’s a fair criticism.
What I especially like about FNC is that it refuses to play the anthropic game at all. That is, it doesn’t pretend that you can ‘unwind all of a person’s observations’ while retaining their Mind Essence and thereby return to an anthropic prior under which ‘I’ had just as much chance of being you as me. (In other words, it doesn’t commit you to believing that you are an ‘epiphenomenal passenger’.)
FNC is just ‘what you get if you try to answer those questions for which anthropic reasoning is typically used, without doing something that doesn’t make any sense’. (Or at least it would be if there was a canonical way of individuating states-of-information.)
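For concreteness, here is a rough numerical sketch of how FNC behaves on plain Sleeping Beauty (no partial splits). The `n_details` parameter, standing in for the amount of incidental detail in each awakening, is my own toy device, not Neal's notation:

```python
# FNC conditions on the non-indexical fact "some awakening contains exactly the
# detailed experience I am now having".  Heads: one awakening; tails: two
# awakenings, each with an independently drawn detail out of n_details options.
def fnc_posterior_heads(n_details):
    eps = 1.0 / n_details                   # chance a given awakening matches my detail
    p_match_heads = eps
    p_match_tails = 1 - (1 - eps) ** 2      # at least one of two awakenings matches
    unnorm_heads = 0.5 * p_match_heads
    unnorm_tails = 0.5 * p_match_tails
    return unnorm_heads / (unnorm_heads + unnorm_tails)

for n in (2, 10, 1000, 10**6):
    print(n, fnc_posterior_heads(n))        # tends to 1/3 as the detail gets finer
```

This also makes the Big Worlds complaint above concrete: once the worlds are so large that every detail string occurs somewhere under every hypothesis, both match probabilities saturate near 1 and the update disappears.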
If you’re the only person in the world right now, and Omega is about to flip a fair coin and create 100 people in case of heads, then SSA tells you to be 99% sure of tails, while SIA says 50⁄50. There’s just no way SSA is right on this one.
If the program has already generated one problem and added it to P, and then generates 1 or 0 randomly for W and adds 100W problems to P—which is basically the same as my first model, and should be equivalent to SSA—then I should expect a 50% chance of having 1 problem in P and a 50% chance of having 101 problems in P, and also a 50% chance of W=1.
If it does the above, and then generates a random number X between 1 and 101, and only presents me with a problem if there’s a problem numbered X, and I get shown a problem, I should predict a ~99% chance that W=1. I think this is mathematically equivalent to SIA. (It is if my second formulation in the OP is equivalent to SIA, which I think it is, even though that formulation is rather roundabout.)
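Here is a quick Monte Carlo sketch of the two procedures just described (my own throwaway code, purely to check the arithmetic):

```python
import random

N = 100_000

def generate():
    # Problem #1 always exists; W in {0, 1}; 100*W further problems are added.
    w = random.randint(0, 1)
    return w, 1 + 100 * w          # (W, number of problems in P)

# First model: you are simply handed the first problem.  Among all runs,
# W=1 still happens half the time.
runs = [generate() for _ in range(N)]
print(sum(w for w, _ in runs) / N)                       # ~0.5

# Second model: a slot X in 1..101 is drawn, and you are shown a problem only
# if slot X is occupied.  Conditioning on being shown anything at all:
shown = [w for w, n in runs if random.randint(1, 101) <= n]
print(sum(shown) / len(shown))                           # ~0.99
```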
If the program has already generated one problem and added it to P, and then generates 1 or 0 randomly for W and adds 100W problems to P—which is basically the same as my first model, and should be equivalent to SSA—then I should expect a 50% chance of having 1 problem in P and a 50% chance of having 101 problems in P, and also a 50% chance of W=1.
Yeah, that’s what SSA says you should expect before updating :-) In my example you already know that you’re the first person, but don’t know if the other 100 will be created or not. In your terms this is equivalent to updating on the fact that you have received math problem number 1, which gives you high confidence that the fair coinflip in the future will come out a certain way.
And after updating, as well. The first math problem tells you basically nothing, since it happens regardless of the result of the coin flip/generated random number.
Ignore the labels for a minute. Say I have a box, and I tell you that I flipped a coin earlier and put one rock in the box if it was heads and two rocks in the box if it was tails. I then take a rock out of the box. What’s the chance that the box is now empty? How about if I put three rocks in for tails instead of two?
I refuse to ignore the labels! :-) Drawing the first math problem tells me a lot, because it’s much more likely in a world with 1 math problem than in a world with 101 math problems. That’s the whole point. It’s not equivalent to drawing a math problem and refusing to look at the label.
Let’s return to the original formulation in your post. I claim that being shown P(1) makes W=0 much more likely than W=1. Do you agree?
If I know that it’s P(1), and I know that it was randomly selected from all the generated problems (rather than being shown to me because it’s the first one), then yes.
If I’m shown a single randomly selected problem from the list of generated problems without being told which problem number it is, it doesn’t make W=0 more likely than W=1 or W=2.
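Since the disagreement in the last few comments really comes down to the selection procedure, here is one more small sketch (again my own toy code) contrasting the two readings:

```python
import random

N = 100_000
trials = []
for _ in range(N):
    w = random.randint(0, 1)
    trials.append((w, 1 + 100 * w))        # (W, number of generated problems)

# Reading 1: you are shown the first problem *because* it is the first.
# Its label being 1 carries no information, since problem #1 exists either way.
print(sum(w for w, _ in trials) / N)                     # ~0.5

# Reading 2: one problem is drawn uniformly from the generated list, and its
# label turns out to be 1.  That is strong evidence for W=0.
w_given_label_1 = [w for w, n in trials if random.randint(1, n) == 1]
print(sum(w_given_label_1) / len(w_given_label_1))       # ~0.01 (1/102 in the limit)
```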
After the discussion in my previous post I became quite certain that the world can’t work as indicated by SSA (your model), and SIA is by far more likely. If you’re the only person in the world right now, and Omega is about to flip a fair coin and create 100 people in case of heads, then SSA tells you to be 99% sure of tails, while SIA says 50⁄50. There’s just no way SSA is right on this one.
Bostrom talks about such paradoxes in chapter 9 of his book, then tries really hard to defend SSA, and fails. (You have to read and settle this for yourself. It’s hard to believe Bostrom can fail. I was surprised.)
To be fair, Bostrom’s version of SSA (“strong” SSA, or SSSA) does not “[tell] you to be 99% sure of tails” when you are still the only person in the world. In whatever sense his defense might fail, it is not because his SSSA leads to the implication that you describe, because it does not.
ETA: Prior to the copying, there is only one individual in your reference class—namely, the one copy of you. That is, the “reference class” contains only a single individual in all cases, so there is no anthropic selection effect. Therefore, SSSA still says 50⁄50 in this situation.
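In the same toy terms as the calculation near the top of the thread (again my own sketch): if the reference class contains only the single pre-flip copy of you under both outcomes, the anthropic likelihood is 1 either way and the prior passes through untouched.

```python
prior = {"heads": 0.5, "tails": 0.5}
reference_class_size = {"heads": 1, "tails": 1}   # only the one pre-flip copy of you

unnorm = {h: prior[h] * (1.0 / reference_class_size[h]) for h in prior}
posterior = {h: v / sum(unnorm.values()) for h, v in unnorm.items()}
print(posterior)                                   # {'heads': 0.5, 'tails': 0.5}
```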
Bostrom’s proposal fails even harder than “naive” SSA: it refuses to give a definite answer. He says selecting a reference class may be a “subjective” problem, like selecting a Bayesian prior. Moreover, he says that giving the “intuitively right” answer to problems like mine is one of the desiderata for a good reference class, not a consequence of his approach. See this chapter.
Re your ETA: Bostrom explicitly rejects the idea that you should always use subjectively indistinguishable observer-moments as your reference class.
Right. I don’t think that I implied otherwise . . .
Bostrom’s proposal fails even harder than “naive” SSA: it refuses to give a definite answer. He says selecting a reference class may be a “subjective” problem, like selecting a Bayesian prior. Moreover, he says that giving the “intuitively right” answer to problems like mine is one of the desiderata for a good reference class, not a consequence of his approach.
He does not solve the problem of defining the reference class. He doesn’t refuse to give a definite answer. He just doesn’t claim to have given one yet. As you say, he leaves open the possibility that choosing the reference class is like choosing a Bayesian prior, but he only offers this as a possibility. Even while he allows for this possibility, he seems to expect that more can be said “objectively” about what the reference class must be than what he has figured out so far.
So, it’s a work in progress. If it fails, it certainly isn’t because it gives the wrong answer on the coin problem that you posed.
To me it looks abandoned, not in progress. And it doesn’t give any definite answer. And it’s not clear to me whether it can be patched to give the correct answer and still be called “SSA” (i.e. still support some version of the Doomsday argument). For example, your proposed patch (using indistinguishable observers as the reference class) gives the same results as SIA and doesn’t support the DA.
Anyway. We have a better way to think about anthropic problems now: UDT! It gives the right answer in my problem, and makes the DA go away, and solves a whole host of other issues. So I don’t understand why anyone should think about SSA or Bostrom’s approach anymore. If you think they’re still useful, please explain.
Anyway. We have a better way to think about anthropic problems now: UDT! It gives the right answer in my problem, and makes the DA go away, and solves a whole host of other issues. So I don’t understand why anyone should think about SSA or Bostrom’s approach anymore. If you think they’re still useful, please explain.
When it comes to deciding how to act, I agree that the UDT approach to anthropic puzzles is the best I know. Thinking about anthropics in the traditional way, whether via SSA, SIA, or any of the other approaches, only makes sense if you want to isolate a canonical epistemic probability factor in the expected-utility calculation.
In the context of the Doomsday Argument, or Great Filter arguments, etc., UDT is typically equivalent to SIA.
I’m still not clear on why anyone would think that the world works as indicated by SIA,
I also don’t see the appeal of SIA. As far as I know, its only selling point is that it nullifies the Doomsday Argument. But that doesn’t seem to me to be the right basis for choosing a method of anthropic reasoning.
Moreover, Katja Grace points out that even SIA implies “Doomsday” in the sense that SIA, with some reasonable assumptions, makes the Great Filter likely to be ahead of us instead of behind us. For it seems plausible that, among the universes with Great Filters, most individuals live prior to their lineage’s getting hit with the Great Filter. So, if we update on the fact that we live in a universe with a Great Filter (which follows from the Fermi Paradox), then SIA tells us to expect that our Great Filter is in our future, not in our past (as it would be if the Great Filter were something like the difficulty of evolving intelligence).
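A toy version of that update, with numbers that are entirely made up just to show the direction of the effect: compare a hypothesis on which the Filter is mostly behind us (few planets ever reach our stage) with one on which it is ahead of us (many reach our stage, none get past it), and weight by the number of observers in our situation, as SIA prescribes.

```python
n_planets = 10**6
prior = {"filter_behind_us": 0.5, "filter_ahead_of_us": 0.5}

# Hypothetical counts of civilizations that reach our current stage:
observers_like_us = {
    "filter_behind_us": n_planets / 1000,   # early filter: our stage is rarely reached
    "filter_ahead_of_us": n_planets / 2,    # late filter: our stage is reached often
}

unnorm = {h: prior[h] * observers_like_us[h] for h in prior}
posterior = {h: v / sum(unnorm.values()) for h, v in unnorm.items()}
print(posterior)   # nearly all the weight lands on the Filter being ahead of us
```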
Katja agrees that this only holds if you assume we are not simulations. SIA hugely supports the simulation hypothesis, and then the SIA-Doomsday argument fails.
Hmm. It seems to me that Katja’s argument fails if huge interstellar civilizations are likely to stop other civilizations from reaching our current stage (deliberately or unwittingly), which sounds plausible to me.
It seems to me that Katja’s argument fails if huge interstellar civilizations are likely to stop other civilizations from reaching our current stage (deliberately or unwittingly), which sounds plausible to me.
Could you explain? Wouldn’t that just tell you with even greater certainty that there are no huge interstellar civilizations around, which would argue even more strongly that we live in a universe with a Great Filter? And couldn’t it still be the case that most individuals would live prior to their lineage’s encounter with the Great Filter? So, why wouldn’t Katja’s argument still go through?
ETA: Okay, I think that I see your point: If, in each universe where life arises, some civilization gets huge and nips all other life in that universe in the bud, and if the civilization gets so huge that it outnumbers the sum of the populations of all the lineages that it squelches, then it would not be the case that “most individuals live prior to their lineage’s getting hit with the Great Filter”. On the contrary, across all possible worlds, most individuals would live in one of these huge civilizations, which never get hit with a Great Filter. In that case, Katja’s argument would not go through.
Yep, that’s what I meant. I wonder if anyone raised this point before, it sounds kinda obvious.
I think that a lot of people don’t consider “We just happen to be the first technical civilization” to be a satisfactory solution to the Fermi paradox. It is the fact that this region wasn’t already teeming with life that points to the presence of a Great Filter.
Your proposal conjoins this response to the Fermi paradox with the further claim that we will go on to squelch any subsequent technical civilizations. So your proposal can only be less satisfying than the above response to the Fermi paradox. The problem is that, if we are going to be this region’s Great Filter, then we have come too late to explain why this region isn’t already teeming with life.
Ahh. Okay, that makes sense.
I’m still not clear on why anyone would think that the world works as indicated by SIA, but that seems likely to be a rather less confusing problem.