# Avoiding doomsday: a “proof” of the self-indication assumption

The doomsday argument, in its simplest form, claims that since 2/3 of all humans will be in the final 2/3 of all humans, we should conclude it is more likely that we are in the final two-thirds of all humans who’ve ever lived than in the first third. In our current state of quasi-exponential population growth, this would mean that we are likely very close to the end of humanity. The argument gets somewhat more sophisticated than that, but that’s it in a nutshell.

There are many immediate rebuttals that spring to mind—there is something about the doomsday argument that brings out the certainty in most people that it **must** be wrong. But nearly all those supposed rebuttals are erroneous (see Nick Bostrom’s book *Anthropic Bias: Observation Selection Effects in Science and Philosophy*). Essentially the only consistent low-level rebuttal to the doomsday argument is to use the self-indication assumption (SIA).

The non-intuitive form of SIA simply says that since you exist, it is more likely that your universe contains many observers, rather than few; the more intuitive formulation is that you should consider yourself as a random observer drawn from the space of possible observers (weighted according to the probability of that observer existing).

Even in that form, it may seem counter-intuitive; but I came up with a series of small steps leading from a generally accepted result straight to the SIA. This clinched the argument for me. The starting point is:

**A—**A hundred people are created in a hundred rooms. Room 1 has a red door (on the outside), the outsides of all other doors are blue. You wake up in a room, fully aware of these facts; what probability should you put on being inside a room with a blue door?

Here, the probability is certainly 99%. But now consider the situation:

**B**—same as before, but an hour after you wake up, it is announced that a coin will be flipped, and if it comes up heads, the guy behind the red door will be killed, and if it comes up tails, everyone behind a blue door will be killed. A few minutes later, it is announced that whoever was to be killed has been killed. What are your odds of being blue-doored now?

There should be no difference from A: since your odds of dying are exactly fifty-fifty whether you are blue-doored or red-doored, your probability estimate should not change upon hearing that the killing has happened. The further modifications are then:

**C**—same as B, except the coin is flipped before you are created (the killing still happens later).

**D**—same as C, except that you are only made aware of the rules of the set-up after the people to be killed have already been killed.

**E**—same as C, except the people to be killed are killed before awakening.

**F**—same as C, except the people to be killed are simply not created in the first place.
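Scenario B can be checked with a quick Monte Carlo sketch (the set-up is from the post; the variable names, sample size, and the modelling of "you" as a uniformly random one of the 100 created people are my assumptions):

```python
import random

def trial(rng):
    """One run of scenario B. Room 0 has the red door; rooms 1-99 are blue."""
    you = rng.randrange(100)            # which room you woke up in
    heads = rng.random() < 0.5          # heads: the red-doored person is killed
    survived = (you != 0) if heads else (you == 0)
    return survived, (you != 0)         # (did you survive?, blue door?)

rng = random.Random(0)
outcomes = [trial(rng) for _ in range(200_000)]
blue_given_survival = [blue for alive, blue in outcomes if alive]
frac = sum(blue_given_survival) / len(blue_given_survival)
print(round(frac, 3))                   # ≈ 0.99
```

Conditioning on having survived leaves the estimate at roughly 99%, matching the claim that B is no different from A.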

I see no justification for changing your odds as you move from A to F; but 99% odds of being blue-doored at F is precisely the SIA: you are saying that a universe with 99 people in it is 99 times more probable than a universe with a single person in it.

If you can’t see any flaw in the chain either, then you can rest easy, knowing the human race is no more likely to vanish than objective factors indicate (ok, maybe you won’t rest that easy, in fact...)

(Apologies if this post is preaching to the choir of flogged dead horses along well beaten tracks: I was unable to keep up with Less Wrong these past few months, so may be going over points already dealt with!)

EDIT: Corrected the language in the presentation of the SIA, after


I upvoted this and I think you proved SIA in a very clever way, but I still don’t quite understand why SIA counters the Doomsday argument.

Imagine two universes identical to our own up to the present day. One universe is destined to end in 2010 after a hundred billion humans have existed, the other in 3010 after a hundred trillion humans have existed. I agree that knowing nothing, we would expect a random observer to have a thousand times greater chance of living in the long-lasting universe.

But given that we know this particular random observer is alive in 2009, I would think there’s an equal chance of them being in both universes, because both universes contain an equal number of people living in 2009. So my knowledge that I’m living in 2009 screens off any information I should be able to get from the SIA about whether the universe ends in 2010 or 3010. Why can you still use the SIA to prevent Doomsday?

[analogy: you have two sets of numbered balls. One is green and numbered from 1 to 10. The other is red and numbered from 1 to 1000. Both sets are mixed together. What’s the probability a randomly chosen ball is red? 1000/1010. Now I tell you the ball has number “6” on it. What’s the probability it’s red? 1/2. In this case, the Doomsday argument still applies (any red or green ball will correctly give information about the number of red or green balls) but SIA doesn’t (any red or green ball, given that it’s a number shared by both red and green, gives no information on whether red or green is larger.)]

You just did—early doom and late doom ended up equally probable, where an uncountered Doomsday argument would have said early doom is much more probable (because your living in 2009 is much more probable conditional on early doom than on late doom).

Whoa.

Okay, I’m clearly confused. I was thinking the Doomsday Argument tilted the evidence in one direction, and then the SIA needed to tilt the evidence in the other direction, and worrying about how the SIA doesn’t look capable of tilting evidence. I’m not sure why that’s the wrong way to look at it, but what you said is definitely right, so I’m making a mistake somewhere. Time to fret over this until it makes sense.

PS: Why are people voting this up?!?

Correct. On SIA, you start out certain that humanity will continue forever, and then update on the extremely startling fact that you’re in 2009, leaving you with the mere surface facts of the matter. If you start out with your reference class only in 2009—a rather nontimeless state of affairs—then you end up in the same place as after the update.

If civilization lasts forever, there can be many simulations of 2009, so updating on your sense-data can’t overcome the extreme initial SIA update.

Simulation argument is a separate issue from the Doomsday Argument.

What? They have *no* implications for each other? The possibility of being in a simulation doesn’t affect my estimates for the onset of Doomsday? Why is that? Because they have different names?

Simulation argument goes through even if Doomsday fails. If almost everyone who experiences 2009 does so inside a simulation, and you can’t tell if you’re in a simulation or not—assuming that statement is even meaningful—then you’re very likely “in” such a simulation. Doomsday is a lot more controversial; it says that even if most people like you are genuinely in 2009, you should conclude, from the fact that you are one of those people rather than someone else, that the fraction of the total population which experiences 2009 is more likely to be a large fraction (because we never go on to create trillions of descendants) than a small one (if we do).

The probability of being in a simulation increases the probability of doom, since people in a simulation have a chance of being turned off, which people in a real world presumably do not have.

The regular Simulation Argument concludes with a disjunction (you have logical uncertainty about whether civilizations very strongly convergently fail to produce lots of simulations). SIA prevents us from accepting two of the disjuncts, since the population of observers like us is so much greater if lots of sims are made.

If you start out certain that humanity will continue forever, won’t you conclude that all evidence that you’re in 2009 is flawed? Humanity must have been going on for longer than that.

Yes this is exactly right.

“On SIA, you start out certain that humanity will continue forever due to SIA”

SIA doesn’t give you that. SIA just says that people from a universe with a population of n don’t mysteriously count as only 1/nth of a person. In itself it tells you nothing about the average population per universe.

If you are in a universe, SIA tells you it is most likely the most populated one.

If there are a million universes with a population of 1000 each, and one universe with a population of 1000000, you ought to find yourself in one of the universes with a population of 1000.

We agree there (I just meant more likely to be in the 1000000 one than any given 1000 one). If there are any that have infinitely many people (eg go on forever), you are almost certainly in one of those.

That still depends on an assumption about the demographics of universes. If there are finitely many universes that are infinitely populated, but infinitely many that are finitely populated, the latter still have a chance to outweigh the former. I concede that if you can have an infinitely populated universe at all, you ought to have infinitely many variations on it, and so infinity ought to win.

Actually I think there is some confusion or ambiguity about the meaning of SIA here. In his article Stuart speaks of a non-intuitive and an intuitive formulation of SIA. The intuitive one is that you should consider yourself a random sample. The non-intuitive one is that you should prefer many-observer hypotheses. Stuart’s “intuitive” form of SIA, I am used to thinking of as SSA, the self-sampling assumption. I normally assume SSA but our radical ignorance about the actual population of the universe/multiverse makes it problematic to apply. The “non-intuitive SIA” seems to be a principle for choosing among theories about multiverse demographics but I’m not convinced of its validity.

Intuitive SIA = consider yourself a random sample out of all possible people

SSA = consider yourself a random sample from people in each given universe separately

e.g. if there are ten people in one universe, half of whom might be you, and one person who might be you in another:

SIA: a greater proportion of those who might be you are in the first.

SSA: a greater proportion of the people in the second might be you.
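The toy example can be made concrete with a short sketch (the universe labels and the equal priors over universes are my assumptions):

```python
from fractions import Fraction

# Universe U1: 10 people, 5 of whom might be "you".
# Universe U2: 1 person, who might be "you". Equal priors over U1 and U2.
might_be_you = {"U1": 5, "U2": 1}
people = {"U1": 10, "U2": 1}

# SIA: weight each universe by how many of its people might be you.
total = sum(might_be_you.values())
sia = {u: Fraction(n, total) for u, n in might_be_you.items()}
print(sia)   # U1 gets 5/6 — most candidates for "you" live in U1

# SSA: within each universe, the chance a random inhabitant might be you.
ssa = {u: Fraction(might_be_you[u], people[u]) for u in people}
print(ssa)   # U1: 1/2, U2: 1 — U2's lone person is certainly a candidate
```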

A great principle to live by (aka “taking a stand against cached thought”). We should probably have a post on that.

It seems to be taking time to cache the thought.

So it does. I was sufficiently caught up in Yvain’s elegant argument that I didn’t even notice that it supported the opposite conclusion to that of the introduction. Fortunately that was the only part that stuck in my memory, so I still upvoted!

I think I’ve got a proof somewhere that SIA (combined with the Self-Sampling Assumption, i.e. the general assumption behind the doomsday argument) has no consequences for future events at all.

(Apart from future events that are really about the past; i.e. “will tomorrow’s astronomers discover we live in a large universe rather than a small one”.)

It seems under-stressed that the doomsday argument is an argument about maximum-entropy priors, and that any evidence can change its conclusion significantly.

Yes, you should expect with p = 2/3 to be in the last 2/3 of people ever alive. Yes, if you wake up and learn that there have only been tens of billions of people so far, but expect most people to live in universes that have more people, you can update again and feel a bit relieved.

However, once you know how to think straight about the subject, you need to be able to update on the rest of the evidence.

If we’ve never seen an existential threat and would expect to see several before getting wiped out, then we can expect to last longer. However, if we have evidence that there are some big ones coming up, and that we don’t know how to handle them, it’s time to worry *more* than the doomsday argument tells you to.

My paper, Past Longevity as Evidence for the Future, in the January 2009 issue of Philosophy of Science, contains a new refutation of the Doomsday Argument, without resort to SIA.

The paper argues that the Carter-Leslie Doomsday Argument conflates future longevity and total longevity. For example, the Doomsday Argument’s Bayesian formalism is stated in terms of total longevity, but plugs in prior probabilities for future longevity. My argument has some similarities to that in Dieks 2007, but does not rely on the Self-Sampling Assumption.

I’m relatively green on the Doomsday debate, but:

Isn’t this inserting a hidden assumption about what *kind* of observers we’re talking about? What definition of “observer” do you get to use, and why? In order to “observe”, all that’s necessary is that you form mutual information with another part of the universe, and conscious entities are a tiny sliver of this set in the observed universe. So the SIA already puts a low probability on the data.

I made a similar point before, but apparently there’s a flaw in the logic somewhere.

SIA does not require a definition of observer. You need only compare the number of experiences exactly like yours (otherwise you can compare those like yours in some aspects, then update on the other info you have, which would get you to the same place).

SSA requires a definition of observers, because it involves asking how many of those are having an experience like yours.

The debate about what constitutes an “observer class” is one of the most subtle in the whole area (see Nick Bostrom’s book). Technically, SIA and similar will only work as “given this definition of observers, SIA implies...”, but some definitions are more sensible than others.

It’s obvious you can’t separate two observers with the same subjective experiences, but how much of a difference does there need to be before the observers are in different classes?

I tend to work with something like “observers who think they are human”, or something like that, tweaking the issue of longevity (does someone who lives 60 years count as the same, or twice as much an observer, as the person who lives 30 years?) as needed in the question.

Okay, but it’s a pretty significant change when you go to “observers who think they are human”. Why should you expect a universe with many of *that* kind of observer? At the very least, you would be conditioning on *more* than just your own existence, but rather on additional observations about your “suit”.

As I said, it’s a complicated point. For most of the toy models, “observers who think they are human” is enough, and avoids having to go into these issues.

Not unless you can explain why “universes with many observers who think they are human” are more common than “universes with few observers who think they are human”. Even when you condition on your own existence, you have no reason to believe that most Everett branches have humans.

Er no—they are not more common, at all. The SIA says that you are more likely to be existing in a universe with many humans, not that these universes are more common.

Your TL post said:

And you just replaced “observers” with “observers who think they are human”, so it seems like the SIA does in fact say that universes with many observers who think they are human are more likely than those with few.

Sorry, sloppy language—I meant “you, being an observer, are more likely to exist in a universe with many observers”.

So then the full anthropocentric SIA would be, “you, being an observer that believes you are human, are more likely to exist in a universe with many observers who believe they are human”.

Is that correct? If so, does your proof prove this stronger claim?

Wouldn’t the principle be independent of the form of the observer? If we said “universes with many human observers are more likely than universes with few,” the logic would apply just as well as with matter-based observers or observers defined as mutual-information-formers.

But why is the assumption that universes with human observers are more likely (than those with few) plausible or justifiable? That’s a fundamentally different claim!

I agree that it’s a different claim, and not the one I was trying to make. I was just noting that however one defines “observer,” the SIA would suggest that such observers should be many. Thus, I don’t think that the SIA is inserting a hidden assumption about the type of observers we are discussing.

Right, but my point was that your definition of observer has a big impact on your SIA’s plausibility. Yes, universes with observers *in the general sense* are more likely, but why universes with more *human* observers?

Why would being human change the calculus of the SIA? According to its logic, if a universe only has more human observers, there are still more opportunities for me to exist, no?

My point was that the SIA(human) is less *plausible*, meaning you shouldn’t base conclusions on it, not that the resulting calculus (conditional on its truth) would be different.

That’s what I meant, though: you don’t calculate the probability of SIA(human) any differently than you would for any other category of observer.

Surely the extremes “update on all available information” and “never update on anything” are each more plausible than any mixture like “update on the observation that I exist, but not on the observation that I’m human”.

What bugs me about the doomsday argument is this: it’s a stopped clock. In other words, it always gives the same answer regardless of who applies it.

Consider a bacterial colony that starts with a single individual, is going to live for N doublings, and then will die out completely. Each generation, applying the doomsday argument, will conclude that it has a better than 50% chance of being the final generation, because, at any given time, slightly more than half of all colony bacteria that have ever existed currently exist. The doomsday argument tells the bacteria absolutely nothing about the value of N.
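The arithmetic behind this can be sketched in a few lines (the choice of N is arbitrary; the conclusion holds for any N):

```python
# Colony doubles each generation, then dies out after generation N.
N = 20
pop = [2 ** g for g in range(N + 1)]     # generation g contains 2^g bacteria
total = sum(pop)                         # 2^(N+1) - 1 bacteria ever live

# Fraction of all bacteria ever that belong to the final generation:
final_frac = pop[-1] / total
print(round(final_frac, 6))              # just over 0.5, whatever N is
```

So a uniformly random individual really is (just) more likely than not to be in the final generation, even though only one of the N+1 generations is final: the reasoning is calibrated over individuals, not over generations, which is exactly the point made in the replies below.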

But they’ll be well-calibrated in their expectation—most generations will be wrong, but most individuals will be right.

Woah, Eliezer defends the doomsday argument on frequentist grounds.

So we might well be rejecting something based on long-standing experience, but be wrong because most of the tests will happen in the future? Makes me want to take up free energy research.

Only because of the assumption that the colony is wiped out suddenly. If, for example, the decline mirrors the rise, about two-thirds will be wrong.

ETA: I mean that 2/3 will apply the argument and be wrong. The other 1/3 won’t apply the argument because they won’t have exponential growth. (Of course they might think some other wrong thing.)

They’ll be wrong about the generation part only. The “exponential growth” is needed to move from “we are in the last 2/3 of humanity” to “we are in the last few generations”. Deny exponential growth (and SIA), then the first assumption is still correct, but the second is wrong.

But that’s the important part. It’s called the “Doomsday Argument” for a reason: it concludes that doomsday is imminent.

Of course the last 2/3 is still going to be 2/3 of the total. So is the first 2/3.

Imminent doomsday is the only non-trivial conclusion, and it relies on the assumption that exponential growth will continue right up to a doomsday.

The fact that every generation gets the same answer doesn’t (of itself) imply that it tells the bacteria nothing. Suppose you have 65536 people and flip a coin 16 [EDITED: for some reason I wrote 65536 there originally] times to decide which of them will get a prize. They can all, equally, do the arithmetic to work out that they have only a 1/65536 chance of winning. Even the one of them who actually wins. The fact that one of them will in fact win despite thinking herself very unlikely to win is not a problem with this.

Similarly, all our bacteria will think themselves likely to be living near the end of their colony’s lifetime. And most of them will be right. What’s the problem?
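The lottery arithmetic checks out directly (a trivial sketch, variable names mine):

```python
from fractions import Fraction

# 16 fair coin flips select exactly one winner among 65536 people,
# so each person's credence of winning is the same 1/65536.
flips = 16
people = 2 ** flips
assert people == 65536

p_win = Fraction(1, people)
print(p_win)                 # 1/65536 — yet exactly one person wins
```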

I think you mean 16 times.

Er, yes. I did change my mind a couple of times about what (2^n,n) pair to use, but I wasn’t ever planning to have 2^65536 people so I’m not quite sure how my brain broke. Thanks for the correction.

The reason all these problems are so tricky is that they assume there’s a “you” (or a “that guy”) who has a view of both possible outcomes. But since there aren’t the same number of people for both outcomes, it isn’t possible to match up each person on one side with one on the other to make such a “you”.

Compensating for this should be easy enough, and will make the people-counting parts of the problems explicit, rather than mysterious.

I suspect this is also why the doomsday argument fails. Since it’s not possible to define a set of people who “might have had” either outcome, the argument can’t be constructed in the first place.

As usual, apologies if this is already known, obvious or discredited.

In case D, your probability changes from 99% to 50%, because only people who survive ever come to know about the situation; in other words there is a 50% chance that only red-doored people know, and a 50% chance that only blue-doored people know.

After that, the probability remains at 50% all the way through.

The fact that no one has mentioned this in 44 comments is a sign of incredibly strong wishful thinking, simply “wanting” the Doomsday argument to be incorrect.

Then put a situation C’ between C and D, in which people who are to be killed will be informed about the situation just before being killed (the survivors are still only told after the fact).

Then how does telling these people something just before putting them to death change anything for the survivors?

The answer in C’ depends on your reference class. If your reference class is everyone, then it remains 99%. If your reference class is survivors, then it becomes 50%.

Which shows how odd and arbitrary reference classes are.

I don’t think it is arbitrary. I responded to that argument in the comment chain here and still agree with that. (I am the same person as user Unknowns but changed my username some time ago.)

In C’, the probability of being behind a blue door remains at 99% (as you wished it to), both for whoever is killed, and for the survivor(s). But the reason for this is that everyone finds out all the facts, and the survivor(s) know that even if the coin flip had gone the other way, they would have known the facts, only before being killed, while those who are killed know that they would have known the facts afterward, if the coin flip had gone the other way.

Telling the people something just before death changes something for the survivors, because the survivors are told that the other people are told something. This additional knowledge changes the subjective estimate of the survivors (in comparison to what it would be if they were told that the non-survivors are not told anything.)

In case D, on the other hand, all the survivors know that only survivors ever know the situation, and so they assign a 50% probability to being behind a blue door.

I don’t see it. In D, you are informed that 100 people were created, separated into two groups, and each group then had a 50% chance of survival. You survived. So calculate the probability:

P(red|survival) = P(survival and red)/P(survival) = 0.005/0.5 = 1%.

Not 50%.

This calculation is incorrect because “you” are by definition someone who has survived (in case D, where the non-survivors never know about it); had the coin flip gone the other way, “you” would have been chosen from the other survivors. So you can’t update on survival in that way.

You do update on survival, but like this: you know there were two groups of people, each of which had a 50% chance of surviving. You survived. So there is a 50% chance you are in one group, and a 50% chance you are in the other.
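The two reference-class choices in this exchange can be written out side by side; this is only a sketch of each position’s stated assumptions, not a resolution of the disagreement:

```python
from fractions import Fraction

# Position 1: "you" are a random one of the 100 created people,
# and you update on having survived the coin flip.
p_red = Fraction(1, 100)               # prior: behind the red door
p_surv_given_red = Fraction(1, 2)      # red survives only on heads
p_surv = Fraction(1, 2)                # 0.5*(1/100) + 0.5*(99/100) = 1/2
p_red_given_surv = p_red * p_surv_given_red / p_surv
print(p_red_given_surv)                # 1/100 — survival leaves 99% blue

# Position 2: "you" are by definition a random survivor; the coin
# decided which surviving group you were drawn from.
p_red_survivor_class = Fraction(1, 2)
print(p_red_survivor_class)            # 1/2
```

The disagreement is entirely about which sampling assumption is legitimate; the arithmetic on each side is internally consistent.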

“had the coin flip gone the other way, ‘you’ would have been chosen from the other survivors”

Thanks for the explanation. The disagreement apparently stems from different ideas about the set of possibilities over which one spans the uniform distribution.

I prefer such reasoning: There is a set of people existing at least at some moment in the history of the universe, and the creator assigns “your” consciousness to one of these people with uniform distribution. But this would allow me to update on survival exactly the way I did. However, the smooth transition would break between E and F.

What you describe, as I understand, is that the assignment is done with uniform distribution not over people ever existing, but over people existing in the moment when they are told the rules (so people who are never told the rules don’t count). This seems to me pretty arbitrary and hard to generalise (and also dangerously close to survivorship bias).

In case of SIA, the uniform distribution is extended to cover the set of hypothetically existing people, too. Do I understand it correctly?

Right, SIA assumes that you are a random observer from the set of all possible observers, and so it follows that worlds with more real people are more likely to contain you.

This is clearly unreasonable, because “you” could not have found yourself to be one of the non-real people. “You” is just a name for whoever finds himself to be real. This is why you should consider yourself a random selection from the real people.

In the particular case under consideration, you should consider yourself a random selection from the people who are told the rules. This is because only those people can estimate the probability; in as much as you estimate the probability, you could not possibly have found yourself to be one of those who are not told the rules.

So, what if the setting is the same as in B or C, except that “you” know that only “you” are told the rules?

That’s a complicated question, because in this case your estimate will depend on your estimate of the reasons why you were selected as the one to know the rules. If you are 100% certain that you were randomly selected out of all the persons, and it could have been a person killed who was told the rules (before he was killed), then your probability of being behind a blue door will be 99%.

If you are 100% certain that you were deliberately chosen as a survivor, and if someone else had survived and you had not, the other would have been told the rules and not you, then your probability will be 50%.

To the degree that you are uncertain about how the choice was made, your probability will be somewhere between these two values.

You could have been one of those who didn’t learn the rules, you just wouldn’t have found out about it. Why doesn’t the fact that this didn’t happen tell you anything?

What is your feeling in the case where the victims are first told they will be killed, then the situation is explained to them and finally they are killed?

Similarly, the survivors are first told they will survive, and then the situation is explained to them.

This is basically the same as C’. The probability of being behind a blue door remains at 99%, both for those who are killed, and for those who survive.

There cannot be a continuous series between the two extremes, since in order to get from one to the other, you have to make some people go from existing in the first case, to not existing in the last case. This implies that they go from knowing something in the first case, to not knowing anything in the last case. If the other people (who always exist) know this fact, then this can affect their subjective probability. If they don’t know, then we’re talking about an entirely different situation.

PS: Thanks for your assiduous attempts to explain your position, it’s very useful.

A rather curious claim, I have to say.

There is a group of people, and you are clearly not in their group—in fact the first thing you know, and the first thing they know, is that you are not in the same group.

Yet your own subjective probability of being blue-doored depends on what they were told just before being killed. So if an absent-minded executioner wanders in and says “maybe I told them, maybe I didn’t—I forget”, that “I forget” contains the difference between a 99% and a 50% chance of you being blue-doored.

To push it still further, if there were two experiments, side by side—world C″ and world X″, with world X″ inverting the proportion of red and blue doors—then this type of reasoning would put you in a curious situation. If everyone were first told “you are a survivor/victim of world C″/X″ with 99% blue/red doors”, and then the situation were explained to them, the above reasoning would imply that you had a 50% chance of being blue-doored whatever world you were in!

Unless you can explain why “being in world C″/X″ ” is a permissible piece of info to put you in a different class, while “you are a survivor/victim” is not, then I can walk the above paradox back down to A (and its inverse, Z), and get 50% odds in situations where they are clearly not justified.

I don’t understand your duplicate world idea well enough to respond to it yet. Do you mean they are told which world they are in, or just that they are told that there are the two worlds, and whether they survive, but not which world they are in?

The basic class idea I am supporting is that in order to count myself as in the same class with someone else, we both have to have access to basically the same probability-affecting information. So I cannot be in the same class with someone who does not exist but might have existed, because he has no access to any information. Similarly, if I am told the situation but he is not, I am not in the same class as him, because I can estimate the probability and he cannot. But the order in which the information is presented should not affect the probability, as long as all of it is presented to everyone. The difference between being a survivor and being a victim (if all are told) clearly does not change your class, because it is not part of the probability-affecting information. As you argued yourself, the probability remains at 99% when you hear this.

Let’s simplify this. Take C, and create a bunch of other observers in another set of rooms. These observers will be killed; it is explained to them that they will be killed, and then the rules of the whole setup, and then they are killed.

Do you feel these extra observers would change anything from the probability perspective?

No. But this is not because these observers are told they will be killed, but because their death does not depend on a coin flip, but is part of the rules. We could suppose that they are rooms with green doors, and after the situation has been explained to them, they know they are in rooms with green doors. But the other observers, whether they are to be killed or not, know that this depends on the coin flip, and they do not know the color of their door, except that it is not green.

Actually, strike that—we haven’t reached the limit of useful argument!

Consider the following scenario: the number of extra observers (that will get killed anyway) is a trillion. Only the extra observers, and the survivors, will be told the rules of the game.

Under your rules, this would mean that the probability of the coin flip is exactly 50-50.

Then, you are told you are not an extra observer, and won’t be killed. There is a 1/(trillion + 1) chance that you would be told this if the coin had come up heads, and a 99/(trillion + 99) chance if the coin had come up tails. So your posterior odds are now essentially 99% to 1% again. These trillion extra observers have brought you back close to SIA odds again.
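The arithmetic here can be checked directly. A minimal sketch, following this comment’s convention that heads leaves 1 survivor and tails leaves 99 (the trillion is just a stand-in for any very large number of extra observers):

```python
from fractions import Fraction

T = 10**12  # extra observers who are told the rules but killed regardless

# P(a random rules-told person is told "you are not an extra observer
# and you survived"), under each coin result:
p_given_heads = Fraction(1, T + 1)    # 1 survivor among T+1 rules-told people
p_given_tails = Fraction(99, T + 99)  # 99 survivors among T+99 rules-told people

# Posterior probability of tails, starting from 50-50:
posterior_tails = p_given_tails / (p_given_tails + p_given_heads)
print(float(posterior_tails))  # ≈ 0.99
```

As T grows, the ratio of the two likelihoods approaches 99:1, which is the SIA answer.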

When I said that the extra observers don’t change anything, I meant under the assumption that everyone is told the rules at some point, whether he survives or not. If you assume that some people are not told the rules, I agree that extra observers who are told the rules change the probability, basically for the reason that you are giving.

What I have maintained consistently here is that if you are told the rules, you should consider yourself a random selection from those who are told the rules, and not from anyone else, and you should calculate the probability on this basis. This gives consistent results, and does not have the consequence you gave in the earlier comment (which assumed that I meant to say that extra observers could not change anything whether or not people to be killed were told the rules.)

I get that—I’m just pointing out that your position is not “indifferent to irrelevant information”. In other words, if there are a hundred/million/trillion other observers created, who are ultimately not involved in the whole coloured-room dilemma, their existence changes your odds of being red- or green-doored, even after you have been told you are not one of them.

(SIA *is* indifferent to irrelevant extra observers.)

Yes, SIA is indifferent to extra observers, precisely because it assumes I was really lucky to exist and might have found myself not to exist, i.e. it assumes I am a random selection from all possible observers, not just real ones.

Unfortunately for SIA, no one can ever find himself not to exist.

I think we’ve reached the limit of productive argument; the SIA, and the negation of the SIA, are both logically coherent (they are essentially just different priors on your subjective experience of being alive). So I won’t be able to convince you, if I haven’t so far. And I haven’t been convinced.

But do consider the oddity of your position—you claim that if you were told you would survive, told the rules of the set-up, and then the executioner said to you “you know those people who were killed—who *never* shared the current subjective experience that you have now, and who are *dead*—well, before they died, I told them/didn’t tell them...” then your probability estimate of your current state would change depending on what he told these dead people.

But you similarly claim that if the executioner said the same thing about the extra observers, then your probability estimate would not change, whatever he said to them.

What about rejecting the assumption that there will be finitely many humans? In the infinite case, the argument doesn’t hold.

But in the finite case it supposedly does. See least convenient possible world.

Similarly, physics as I know it prohibits an infinite number of humans.

*This* world is inconvenient.

Still, I do think R0k0’s point would be enough to discourage the absolute claim of exclusivity quoted.

This is a bit off-topic, but are you the same person as Roko? If not, you should change your name.

The Wikipedia article on the SIA points out that it is not an assumption, but a theorem or corollary. You have simply shown this fact again. Bostrom probably first named it an assumption, but it is neither an axiom nor an assumption. You can derive it from these assumptions:

I am a random sample

I may never have been born

The pdf for the number of humans is independent of the pdf for my birth order number
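One way to see why these assumptions yield the SIA: under “I am a random sample” and “I may never have been born”, the probability that I exist at all scales with the number of observers actually created, so Bayes then favors more-populous worlds in proportion. A toy sketch (the two hypotheses and the pool size are invented purely for illustration):

```python
from fractions import Fraction

# Two hypotheses about how many observers exist, with an assumed 50-50 prior.
prior = {10: Fraction(1, 2), 1000: Fraction(1, 2)}

# "I am a random sample" + "I may never have been born": out of a fixed pool
# of possible observers (here 1000 slots), the chance that I in particular
# get born is N / 1000 under the hypothesis that N observers exist.
POSSIBLE = 1000
likelihood_i_exist = {N: Fraction(N, POSSIBLE) for N in prior}

# Bayes: posterior over N given that I exist is proportional to N * prior(N).
unnorm = {N: prior[N] * likelihood_i_exist[N] for N in prior}
total = sum(unnorm.values())
posterior = {N: p / total for N, p in unnorm.items()}

print(posterior[1000])  # 100/101: the bigger world is favored ~100:1
```

This posterior-proportional-to-population conclusion is exactly the SIA.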

I don’t see how the SIA refutes the complete DA (Doomsday Argument).

The SIA shows that a universe with more observers *in your reference class* is more likely. This is the set used when “considering myself as a random observer drawn from the space of all possible observers”—it’s not really *all* possible observers.

How small is this set? Well, if we rely on just the argument given here for SIA, it’s very small indeed. Suppose the experimenter stipulates an additional rule: he flips a second coin; if it comes up heads, he creates 10^10 extra copies of you; if tails, he does nothing. However, these extra copies are not created inside rooms at all. You know you’re not one of them, because you’re in one of the rooms. The outcome of the second coin flip is made known to you. But it clearly doesn’t influence your bet on your door’s color, even when it increases the number of observers in your universe 10^8 times, and even though these extra observers are complete copies of your life up to this point, who are only placed in a different situation from you in the last second.

Now, the DA can be reformulated: instead of the set of all humans ever to live, consider the set of all humans (or groups of humans) who would never confuse themselves with one another. In this set the SIA doesn’t apply (we don’t predict that a bigger set is more likely). The DA does apply, because humans from different eras are dissimilar and can be indexed as the DA requires. To illustrate, I expect that if I were taken at any point in my life and instantly placed at some point of Leonardo da Vinci’s life, I would very quickly realize something was wrong.

Presumed conclusion: if humanity does not become extinct totally, expect other humans to be more and more similar to yourself as time passes, until you survive only in a universe inhabited by a Huge Number of Clones.

It also appears that I should assign very high probability to the chance that a non-Friendly super-intelligent AI destroys the rest of humanity to tile the universe with copies of myself in tiny life-support bubbles. Or with simulators running my life up to then in a loop forever.

Maybe I’m just *really* tired, but I seem to have grown a blind spot hiding a logical step that must be present in the argument given for SIA. It doesn’t seem to be arguing *for* the SIA at all, just for the right way of detecting a blue door independent of the number of observers.

Consider this variation: there are 150 rooms, 149 of them blue and 1 red. In the blue rooms, 49 cats and 99 human clones are created; in the red room, a human clone is created. The experiment then proceeds in the usual way (flipping the coin and killing inhabitants of rooms of a certain color).

The humans will still give a .99 probability of being behind a blue door, and 99 out of 100 equally-probable potential humans will be right. Therefore you are more likely to inhabit a universe shared by an equal number of humans and cats, than a universe containing only humans (the Feline Indication Argument).

If you are told that you are in that situation, then you would assign a probability of ^{50}⁄_{51} of being behind a blue door, and a ^{1}⁄_{51} probability of being behind a red door, because you would not assign any probability to the possibility of being one of the cats. So you will not give a probability of .99 in this case.

Fixed, thanks. (I didn’t notice at first that I quoted the .99 number.)

As we are discussing SIA, I’d like to point to the counterfactual zombie thought experiment:

This shows that inference “I think therefore I exist” is, in general, invalid. You can’t *update* on your own existence (although you can use more specific info as parameters in your strategy). Rather, you should look at yourself as an implication: “*If* I exist in this situation, *then* my actions are as I now decide”.

No. It just means you are a simulation. These are very different things. “I think therefore I am” is still deductively valid (and really, do you want to give the predicate calculus that knife in the back?). You might not be what you thought you were, but all “I” refers to is the originator of the utterance.

Remember: there was no simulation, only prediction. Distinction with a difference.

Then if you take the money, Omega was just wrong. Full stop. And in this case, if you take the dollar, the expected gain is a dollar.

Or else you need to clarify.

Assuming that you won’t actually take the money, what would a *plan* to take the money mean? It’s a kind of retroactive impossibility, where among two options one is impossible not because you can’t push that button, but because you won’t be there to push it. Usual impossibility is just additional info for the could-should picture of the game, to be updated on, so that you exclude the option from consideration. This kind of impossibility is conceptually trickier.

I don’t see how my non-existence gets implied. Why isn’t a plan to take the money either a plan that will fail to work (your arm won’t respond to your brain’s commands, you’ll die, you’ll tunnel to the Moon, etc.) or a plan that would imply Omega was wrong and shouldn’t have made the offer?

My existence is already posited once you’ve said that Omega has offered me this deal. What happens after that bears on whether or not Omega is correct and what properties I have (i.e. *what* I am).

There exists (x) & there exists (y) such that Ox & Iy & ($xy <--> N$yx)

Where O = is Omega, I = is me, $ = offer one dollar to, N$ = won’t take dollar from. I don’t see how one can take that, add new information, and conclude ~ there exists (y).

I don’t get it, I have to admit. All the experiment seems to be saying is that “if I take the $1, I exist only as a short-term simulation in Omega’s mind”. It says you don’t exist as a long-term separate individual, but doesn’t say you don’t exist in this very moment...

Simulation is a very specific form of prediction (but the most intuitive, when it comes to prediction of difficult decisions). Prediction doesn’t imply simulation. At this very moment I predict that you will choose to NOT cut your own hand off with an axe when asked to, but I’m not simulating you.

In that case (I’ll return to the whole simulation/prediction issue some other time), I don’t follow the logic at all. If Omega offers you that deal, and you take the money, all that you have shown is that Omega is in error.

But maybe it’s a consequence of advanced decision theory?

That’s the central issue of this paradox: the part of the scenario before you take the money *can* actually exist, but if you choose to take the money, it follows that it doesn’t. The paradox doesn’t take for granted that the described scenario does take place; it describes what happens (could happen) from your perspective, in a way in which you’d plan your own actions, not from the external perspective.

Think of your thought process in the case where in the end you decide not to take the money: how you *consider* taking the money, and what that action would mean (that is, what’s its effect in the generalized sense of TDT, like the effect of you cooperating in PD on the other player, or the effect of one-boxing on the contents of the boxes). I suggest that the planned action of taking the money means that you don’t exist in that scenario.

I see it, somewhat. But this sounds a lot like “I’m Omega, I am trustworthy and accurate, and I will only speak to you if I’ve predicted you will not imagine a pink rhinoceros as soon as you hear this sentence”.

The correct conclusion seems to be that Omega is not what he says he is, rather than “I don’t exist”.

The decision diagonal in TDT is a simple computation (at least, it looks simple assuming large complicated black-boxes, like a causal model of reality) and there’s no particular reason that equation can only execute in sentient contexts. Faced with Omega in this case, I take the $1 - there is no reason for me not to do so—and conclude that Omega incorrectly executed the equation in the context outside my own mind.

Even if we suppose that “cogito ergo sum” presents an extra bit of evidence to me, whereby I truly know that I am the “real” me and not just the simple equation in a nonsentient context, it is still easy enough for Omega to simulate that equation plus the extra (false) bit of info, thereby recorrelating it with me.

If Omega really follows the stated algorithm for Omega, then the decision equation never executes in a sentient context. If it executes in a sentient context, then I know Omega wasn’t following the stated algorithm. Just like if Omega says “I will offer you this $1 only if 1 = 2” and then offers you the $1.

When the problem contains a self-contradiction like this, there is not actually one “obvious” proposition which must be false. One of them must be false, certainly, but it is not possible to derive which one from the problem statement.

Compare this problem to another, possibly more symmetrical, problem with self-contradictory premises:

http://en.wikipedia.org/wiki/Irresistible_force_paradox

This shows that inference “I think therefore I exist” is, in general, invalid. You can’t update on your own existence (although you can use more specific info as parameters in your strategy).

Rather, you should look at yourself as an implication: “If I exist in this situation, then my actions are as I now decide”.

This might be a dumb question, but couldn’t the inference of your existence be valid AND bring with it the implication that your actions are as you decide?

After all, if you begin thinking of yourself as an inference, and you think to yourself, “Well, now, IF I exist, THEN yadda yadda...”—I mean, don’t you exist at that point?

If non-existence is a negative, then you must be existent if you’re thinking anything at all. A decision cannot be made by nothing, right?

If Omega is making you an offer, Omega is validating your existence. Why would Omega, or anyone ask a question and expect a reply from something that doesn’t exist? You can also prove to yourself you exist as you consider the offer because you are engaged in a thinking process.

It feels more natural to say “I think, therefore I exist, and my actions are as I now decide.”

That said, I don’t think anyone can decide themselves out of existence LoL. As far as we know, energy is the only positive in the universe, and it cannot be destroyed, only transformed. So if your consciousness is tied to the structure of the matter you are comprised of, which is a form of energy, which is a positive, then it cannot become a negative, it can only transform into something else.

Maybe the whole “quantum observer” thing can explain why you DO die/disappear: Because if Omega gave you a choice, and you chose to no longer exist, Omega is “forced”, if you will, to observe your decision to cease existence. It’s part of the integrity of reality, I guess—existence usually implies free will AND it implies that you are a constant observer of the universe. If everything in the universe is made of the same thing you are, then everything else should also have the same qualities as you.

Every other positive thing has free will and is a constant observer. With this level playing field, you really have no choice but to accept your observations of the decisions that others make, and likewise others have no choice but to accept whatever decisions you make when they observe you.

So as the reality of your decision is accepted by Omega—Omega perceives you as gone for good. And so does anyone else who was observing. But somehow you’re still around LoL.

Maybe that explains ghosts??? lol ;-D I know that sounds all woo-woo, but the main point is this: it’s very hard to say that you can choose non-existence if you are a positive, because so far as we know, you can’t undo a positive.

It reminds me of something Ayn Rand said that made me raise an eyebrow at the whole Objectivism thing: she said you can’t prove a negative and you can’t disprove a positive. I always thought it was the other way around: you can’t disprove a negative (you can’t destroy something that doesn’t exist), and you can’t prove a positive (it’s fallacious to attempt to prove the existence of an absolute, because the existence of an absolute is not up for debate!).

Ayn Rand’s statements were correct without being “true” somehow. You can’t prove a negative because if you could, it would be a positive. Whereas you can’t disprove a negative because if you COULD disprove a negative you would just end up with a double-negative?? Whaaat???

LOL Whatever, don’t listen to me : D

mmm to clarify that last point a little bit:

If disproving a negative was possible (meaning that disproving a negative could turn it into a positive) that would be the same as creating something out of nothing. It still violates the Law of Conservation of Energy, because the law states that you cannot create energy (can’t turn a negative into a positive)

<3

You keep using that word. I do not think it means what you think it means.

The doomsday argument makes the assumptions that:

We are randomly selected from all the observers who will ever exist.

The observers increase exponentially, such that ^{2}⁄_{3} of all those who have ever lived are alive at any particular generation

They are wiped out by a catastrophic event, rather than slowly dwindling or otherwise

(Now those assumptions are a bit dubious—things change if, for instance, we develop life extension tech or otherwise increase the rate of growth, and a higher than ^{2}⁄_{3} proportion will live in future generations (e.g. if the next generation is immortal, they’re guaranteed to be the last, and we’re much less likely, depending on how long people are likely to survive after that). Alternatively, growth could plateau or fluctuate around the carrying capacity of a planet, if most potential observers never expand beyond this.) However, assuming they hold, I think the argument is valid.

I don’t think your situation alters the argument, it just changes some of the assumptions. At point D, it reverts back to the original doomsday scenario, and the odds switch back.

At D, the point you’re made aware, you know that you’re in the proportion of people who live. Only 50% of the people who ever existed in this scenario learn this, and 99% of them are blue-doored. Only looking at the people at this point is changing the selection criteria—you’re only picking from survivors, never from those who are now dead, despite the fact that they are real people we could have been. If those could be included in the selection (as they are if you give them the information and ask them *before* they would have died), the situation would remain as in A-C.

Actually creating the losing potential people makes this more explicit. If we’re randomly selecting from people who ever exist, we’ll only ever pick those who get created, who will be predominantly blue-doored if we run the experiment multiple times.

Actually, it requires that we be selected from a small subset of these observers, such as “humans” or “conscious entities” or, perhaps most appropriately, “beings capable of reflecting on this problem”.

Well, for the numbers to work out, there would have to be a sharp drop-off before the slow-dwindling, which is roughly as worrisome as a “pure doomsday”.

Then what about introducing a C’ between C and D: You are told the initial rules. Then, later you are told about the killing, and then, even later, that the killing had already happened and that you were spared.

What would you say the odds were there?

Thinking this through a bit more, you’re right—this really makes no difference. (And in fact, re-reading my post, my reasoning is rather confused - I think I ended up agreeing with the conclusion while also (incorrectly) disagreeing with the argument.)

Might it make a difference that in scenario F, there is an *actual* process (namely, the coin toss) which could have given rise to the alternative outcome? Note the lack of any analogous mechanism for “bringing into existence” one out of all the possible worlds. One might maintain that this metaphysical disanalogy also makes an epistemic difference. (Compare cousin_it’s questioning of a uniform prior across possible worlds.)

In other words, it seems that one could consistently maintain that self-indication principles only hold with respect to possibilities that were “historically possible”, in the sense of being counterfactually dependent on some actual “chancy” event. Not all possible worlds are historically possible in this sense, so some further argument is required to yield the SIA in full generality.

(You may well be able to provide such an argument. I mean this comment more as an invitation than a criticism.)

This is a standard objection, and one that used to convince me. But I really can’t see that F is different from E, and so on down the line. Where exactly does this issue come up? Is it in the change from E to F, or earlier?

No, I was suggesting that the difference is between F and SIA.

Ah, I see. This is more a question about the exact meaning of probability; ie the difference between a frequentist approach and a Bayesian “degree of belief”.

To get a “degree of belief” SIA, extend F to G: here you are simply told that one of two possible universes happened (A and B), in which a certain amount of copies of you were created. You should then set your subjective probability to 50%, in the absence of other information. Then you are told the numbers, and need to update your estimate.

If your estimates for G differ from F, then you are in the odd position of having started with a 50-50 probability estimate, and then updating—but if you were ever told that the initial 50-50 comes from a coin toss rather than being an arbitrary guess, then you would have to change your estimates!

I think this argument extends it to G, and hence to universal SIA.

Thanks, that’s helpful. Though intuitively, it doesn’t seem so unreasonable to treat a credal state due to *knowledge of chances* differently from one that instead reflects *total ignorance*. (Even Bayesians want some way to distinguish these, right?)

What do you mean by “knowledge of chances”? There is no inherent chance or probability in a coin flip. The result is deterministically determined by the state of the coin, its environment, and how it is flipped. The probability of .5 for heads represents your own ignorance of all these initial conditions and your inability, even if you had all that information, to perform all the computation to reach the logical conclusion of what the result will be.

I’m just talking about the difference between, e.g., knowing that a coin is fair, versus not having a clue about the properties of the coin and its propensity to produce various outcomes given minor permutations in initial conditions.

By “a coin is fair”, do you mean that if we considered all the possible environments in which the coin could be flipped (or some subset we care about), and all the ways the coin could be flipped, then in half the combinations the result will be heads, and in the other half the result will be tails?

Why should that matter? In the actual coin flip whose result we care about, the whole system is not “fair”, there is one result that it definitely produces, and our probabilities just represent our uncertainty about which one.

What if I tell you the coin is not fair, but I don’t have any clue which side it favors? Your probability for the result of heads is still .5, and we still reach all the same conclusions.

For one thing, it’ll change how we update. Suppose the coin lands heads ten times in a row. If we have independent knowledge that it’s fair, we’ll still assign 0.5 credence to the next toss. Otherwise, if we began in a state of pure ignorance, we might start to suspect that the coin is biased, and so have different expectations.
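The difference can be made concrete. A small sketch, modeling “pure ignorance” with a uniform prior over the coin’s bias (one common choice, not the only one); Laplace’s rule of succession then gives the next-toss probability:

```python
from fractions import Fraction

# Observed: ten heads in a row.
heads, n = 10, 10

# Known-fair coin: credence in heads on the next toss stays at 1/2.
p_next_fair = Fraction(1, 2)

# Total ignorance, modeled as a uniform prior over the bias:
# Laplace's rule of succession gives (heads + 1) / (n + 2).
p_next_ignorant = Fraction(heads + 1, n + 2)

print(p_next_fair, p_next_ignorant)  # 1/2 vs 11/12
```

The two credal states start out assigning the same 0.5 to heads, but diverge sharply after the evidence comes in.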

That is true, but in the scenario, you never learn the result of a coin flip to update on. So why does it matter?

Final edit: I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance, but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)

I don’t understand what precisely is wrong with the following intuitive argument, which contradicts the p=.99 result of SIA:

In scenarios E and F, I first wake up after the other people are killed (or not created) based on the coin flip. No-one ever wakes up and is killed later. So I am in a blue room if and only if the coin came up heads (and no observer was created in the red room). Therefore P(blue)=P(heads)=0.5, and P(red)=P(tails)=0.5.

Edit: I’m having problems wrapping my head around this logic… which prevents me from understanding all the LW discussion in recent months about decision theories, since it often considers such scenarios. Could someone give me a pointer please?

Before the coin is flipped and I am placed in a room, clearly I should predict P(heads)=0.5. Afterwards, to shift to P(heads)=0.99 would require updating on the evidence that *I am alive*. How exactly can I do this if I can’t ever update on the evidence that I am dead? (This is the scenario where no-one is ever killed.)

I feel like I need to go back and spell out formally what constitutes legal Bayesian evidence. Is this written out somewhere in a way that permits SIA (my own existence as evidence)? I’m used to considering only evidence to which there could possibly be alternative evidence that I did *not* in fact observe. Please excuse a rookie, as these must be well understood issues.

There’s nothing wrong with this argument. In E and F (and also in D in fact), the probability is indeed 50%.

How would you go about betting on that?

If I were actually in situation A, B, or C, I would expect a 99% chance of a blue door, and in D, E, or F, a 50%, and I would actually bet with this expectation.

There is really no practical way to implement this, however, because of the assumption that random events turn out in a certain way, e.g. it is assumed that there is only a 50% chance that I will survive, yet I always do, in order for the case to be the one under consideration.

Omega runs 10,000 trials of scenario F, and puts you in touch with 100 random people still in their rooms who believe there is a 50% chance they have red doors, and will happily take 10 to 1 bets that they do.

You take these bets, collect $1 each from 98 of them, and pay out $10 each to 2.
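This pooled-sampling procedure can be simulated directly; a quick Monte Carlo sketch, using the thread’s convention that heads leaves 99 blue-door survivors and tails leaves 1 red-door survivor (the reply below disputes whether pooling across trials is the right selection rule):

```python
import random

random.seed(0)  # fixed seed for reproducibility

def run_trial():
    """One run of scenario F: heads -> 99 blue-door survivors,
    tails -> one red-door survivor."""
    if random.random() < 0.5:
        return ["blue"] * 99
    return ["red"]

# Pool all survivors across 10,000 trials, then sample 100 people from it.
pool = [door for _ in range(10_000) for door in run_trial()]
sample = random.sample(pool, 100)
print(pool.count("red") / len(pool))  # ≈ 0.01: about 1 in 100 is red-doored
```

A uniformly sampled survivor from the pool is red-doored only about 1% of the time, which is why roughly 98 of the 100 bets pay off.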

Were their bets rational?

You assume that the 100 people have been chosen randomly from all the people in the 10,000 trials. This is not valid. The appropriate way for these bets to take place is to choose one random person from one trial, then another random person from another trial, and so on. In this way about 50 of the hundred persons will be behind red doors.

The reason for this is that if I know that this setup has taken place 10,000 times, my estimate of the probability that I am behind a blue door will not be the same as if the setup has happened only once. The probability will slowly drift toward 99% as the number of trials increases. In order to prevent this drift, you have to select the persons as stated above.

If you find yourself in such a room, why does your blue door estimate go up with the number of trials you know about? Your coin was still 50-50.

How much does it go up for each additional trial? I.e., what are your odds if Omega tells you you’re in one of two trials of F?

The reason is that “I” could be anyone out of the full set of two trials. So: there is a 25% chance that both trials ended with red-doored survivors; a 25% chance that both trials ended with blue-doored survivors; and a 50% chance that one ended with a red door, one with a blue.

If both were red, I have a red door (100% chance). If both were blue, I have a blue door (100% chance). But if there was one red and one blue, then there are a total of 100 people, 99 blue and one red, and I could be any of them. So in this case there is a 99% chance I am behind a blue door.

Putting these things together, if I calculate correctly, the total probability here (in the case of two trials) is that I have a 25.5% chance of being behind a red door, and a 74.5% chance of being behind a blue door. In a similar way you can show that as you add more trials, your probability will get ever closer to 99% of being behind a blue door.
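These figures can be reproduced by enumerating the four equally likely pairs of coin results, using the comment’s own assumption that “I” am uniformly distributed over all observers in both trials; a quick sketch:

```python
from fractions import Fraction

half = Fraction(1, 2)

# Each trial of F: heads -> 99 blue-door survivors, tails -> 1 red-door survivor.
def survivors(result):
    return {"blue": 99} if result == "heads" else {"red": 1}

p_red = Fraction(0)
for a in ("heads", "tails"):
    for b in ("heads", "tails"):
        counts = {"red": 0, "blue": 0}
        for result in (a, b):
            for color, n in survivors(result).items():
                counts[color] += n
        total = counts["red"] + counts["blue"]
        # "I could be anyone": uniform over all observers in both trials.
        p_red += half * half * Fraction(counts["red"], total)

print(p_red)  # 51/200, i.e. 25.5% red, 74.5% blue
```

The 25.5% comes almost entirely from the tails-tails case, where the only two observers are both red-doored.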

You could only be in one trial or the other.

What if Omega says you’re in the second trial, not the first?

Or trial 3854 of 10,000?

“I could be any of them” in the sense that all the factors that influence my estimate of the probability, will influence the estimate of the probability made by all the others. Omega may tell me I am in the second trial, but he could equally tell someone else (or me) that he is in the first trial. There are still 100 persons, 99 behind blue doors and 1 behind red, and in every way which is relevant, I could be any of them. Thinking that the number of my trial makes a difference would be like thinking that if Omega tells me I have brown eyes and someone else has blue, that should change my estimate.

Likewise with trial 3854 out of 10,000. Naturally each person is in one of the trials, but the person’s trial number does not make a significant contribution to his estimate. So I stand by the previous comments.

These factors should not influence your estimation of the probability, because you could not be any of the people in the other trials, red or blue, because you are only in your trial. (and all of those people should know they can’t be you)

The only reason you would take the trials together as an aggregate is if you were betting on it from the outside, and the person you’re betting against could be in any of the trials.

Omega could tell you the result of the other trials (1 other or 9,999 others); you’d know exactly how many reds and blues there are, except for your trial. You must assess your trial in the same way you would if it were stand-alone.

What if Omega says you are in the most recent trial of 40, because Omega has been running trials every hundred years for 4000 years? You can’t be any of those people. (to say nothing of other trials that other omegas might have run.)

But you could be any of 99 people if the coin came up heads.

If Omega does not tell me the result of the other trials, I stand by my point. In effect he has given me no information, and I could be anyone.

If Omega does tell me the results of all the other trials, it is not therefore the case that I “must assess my trial in the same way as if it stood alone.” That depends on how Omega selected me as the one to estimate the probability. If in fact Omega selected me as a random person from the 40 trials, then I should estimate the probability by estimating the number of persons behind blue door and red doors, and assuming that I could with equal probability have been any of them. This will imply a very high probability of being behind a blue door, but not quite 99%.

If he selected me in some other way, and I know it, I will give a different estimate.

If I do not know how he selected me, I will give a subjective estimate depending on my estimate of ways that he might have selected me; for example I might assign some probability to his having deliberately selected me as one of the red-doored persons, in order to win if I bet. There is therefore no “right” probability in this situation.

How is it the case that you could be in the year 1509 trial, when it is in fact 2009? (omega says so)

Is it also possible that you are someone from the quite likely 2109 trial? (and so on into the future)

I was thinking he could tell every created person the results of all the other trials. I agree that if you are selected for something (information revelation, betting, whatever), then information about how you were selected could hint at the color of your door.

Information about the results of any other trials tells you nothing about your door.

If he tells every person the results of all the other trials, I am in effect a random person from all the persons in all the trials, because everyone is treated equally. Let’s suppose there were just 2 trials, in order to simplify the math. Starting with the prior probabilities based on the coin toss, there is a 25% chance of a total of just 2 observers behind red doors, in which case I would have a 100% chance of being behind a red door. There is a 50% chance of 1 observer behind a red door and 99 observers behind blue doors, which would give me a 99% chance of being behind a blue door. There is a 25% chance of 198 observers behind blue doors, which would give me a 100% chance of being behind a blue door. So my total prior probabilities are 25.5% of being behind a red door, and 74.5% of being behind a blue door.
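The two-trial arithmetic above can be checked mechanically. A quick sketch, with the scenario probabilities and room counts taken from the comment:

```python
# Two trials of the 1-red-or-99-blue experiment, each decided by a fair coin.
# Each entry: (probability, observers behind red doors, observers behind blue doors)
scenarios = [
    (0.25, 2, 0),     # both trials red: two red-door observers
    (0.50, 1, 99),    # one red trial, one blue trial
    (0.25, 0, 198),   # both trials blue: 198 blue-door observers
]

# If I am a random person among all persons in both trials, my chance of being
# behind a red door within each scenario is red / (red + blue).
p_red = sum(p * red / (red + blue) for p, red, blue in scenarios)
print(round(p_red, 3))      # 0.255
print(round(1 - p_red, 3))  # 0.745
```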

Let’s suppose I am told that the other trial resulted in just one observer behind a red door. First we need the prior probability of being told this. If there were two red doors (25% chance), there would be a 100% chance of this. If there were two blue doors (25% chance), there would be a 0% chance of this. If there was a red door and a blue door (50% chance), there would be a 99% chance of this. So the total prior probability of being told that the other trial resulted in a red door is again 74.5%, and the probability of being told that the other trial resulted in a blue door is 25.5%.

One more probability: given that I am behind a red door, what is the probability that I will be told that the other trial resulted in an observer behind a red door? There was originally a 25% chance of two red trials, and a 50% chance of 1 red and 1 blue trial. This implies that given that I am behind a red door, there is a

^{1}⁄_{3} chance that I will be told that the other trial resulted in red, and a ^{2}⁄_{3} chance that I will be told that the other trial resulted in blue. (Once again things will change if we run more trials, for similar reasons, because in the ^{1}⁄_{3} case, there are 2 observers behind red doors.)

Applying Bayes’ theorem, then, the probability that I am behind a red door given that I am told that the other trial resulted in an observer behind a red door, is (.255 / .745) x (1/3) = approximately 11.4%. So the probability that I am behind a blue door is approximately 88.6%. Since it was originally only 74.5% with two trials, information about the other trial did contribute to knowledge of my door. The same will happen as you add more trials and more information.

Well you very nearly ruined my weekend. :-)

I admit I was blindsided by the possibility that information about the other trials could yield information about your door. I’ll have to review the Monty Hall problem.

Using your methods, I got:

Being blue given told red=(.745 being blue prior/.745 told red prior) x (2/3 told red given blue)=.666...

Which doesn’t match your 11.4%, so something is missing.

In scenario F, if you’re not told, why assume that your trial was the only one in the set? You should have some probability that the Omegas would do this more than once.

Also, I agree that in theory you would have some subjective probability that there were other trials. But this prevents assigning any exact value to the probability because we can’t make any definitively correct answer. So I was assuming that you either know that the event is isolated, or you know that it is not, so that you could assign a definite value.

I’m not sure what it would mean for the event to be isolated. (Not to contradict my previous statement that you have to treat it as a standalone event. My position is that it is .99 for any number of trials, though I still need to digest your corrected math.)

I’m not sure how different an event could be before you don’t need to consider it part of the set you could have found yourself in.

If you’re in a set of two red-blue trials, and Omega says there is another set of orange-green trials run the same way and likewise told about the red-blues, then it seems you would need to treat that as a set of 4.

If you know you’re in a trial with the (99 blue or 1 red) protocol, but there is also a trial with a (2 blue or 1 red) protocol, then those 1 or 2 people will skew your probabilities slightly.

If Omega tells you there is an intelligent species of alien in which male conceptions yield 99 identical twins and female conceptions only 1, with a .50 probability of conceiving female, and in which the young do not know their gender until maturity… then is that also part of the set you could have been in? If not, I’m honestly not sure where to draw the line. If so, then I’d expect we could find so many such situations applying to how individual humans come to exist that there may be billions of trials.

You’re correct, I made a serious error in the above calculations. Here are the corrected results:

Prior probability for situation A, namely both trials result in red doors: .25
Prior probability for situation B, namely one red and one blue: .50
Prior probability for situation C, namely both trials result in blue doors: .25
Prior probability for me getting a blue door: .745
Prior probability for me getting a red door: .255
Prior probability of the other trial getting red: .745
Prior probability of the other trial getting blue: .255

Then the probability of situation A, given I have a red door, is Pr(A given red) = (Pr(A)/Pr(red)) x Pr(red given A). Pr(red given A) = 1, so Pr(A given red) = .25/.255 = .9803921…

So the probability that I will be told red, given I have red, is not ^{1}⁄_{3}, but over 98% (namely the same value above)! And so the probability that I will be told blue, given I have red, is of course .01960784, namely the probability of situation B given that I have a red door.

So using Bayes’ theorem with the corrected values, the probability of me having a red door, given that I am told the other trial resulted in red, is (pr being red / pr other red) x pr(told red given red) = (.255/.745) x .9803921… = .33557…, or approximately ^{1}⁄_{3}.

You can work out the corresponding calculation (probability of being blue given told red) by starting with the probability of situation C given I have a blue door, and then deriving the probability of B given I have a blue door, and you will see that it matches this one (i.e. it will be approximately ^{2}⁄_{3}.)

Thanks! I think this comment is the best so far for demonstrating the confusion (well, I was confused :-) about the different possible meanings of the phrase “you are an observer chosen from such-and-such set”. Perhaps a more precise and unambiguous phrasing could be used.
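As a sanity check, the corrected Bayes calculation above can be verified numerically; a sketch using the priors from that comment:

```python
# Priors from the two-trial setup.
p_red = 0.255         # prior probability I am behind a red door
p_other_red = 0.745   # prior probability the other trial resulted in red

# P(told red | I am red) = P(situation A | red) = .25 / .255
p_told_red_given_red = 0.25 / 0.255   # ~0.98039

# Bayes: P(red | told red) = P(red) * P(told red | red) / P(told red)
p_red_given_told_red = p_red * p_told_red_given_red / p_other_red
print(round(p_red_given_told_red, 5))  # 0.33557, i.e. approximately 1/3
```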

Clearly the bets would not be rational.

This reinforces my feeling that something is deeply wrong with the statement of the problem, or with my understanding of it. It’s true that some random survivor is p=.99 likely to be behind a blue door. It does not seem true for me, given that I survive.

Replace death with the light in the room being shut off.

That’s not applicable to scenarios E and F, which is where I have a problem. The observers there never wake up or are never created (depending on the coin toss); I can’t replace that with a conscious observer and the light going off.

Whereas in scenarios A through D, you don’t need SIA to reach the (correct) p=.99 conclusion, you don’t even need the existence of observers other than yourself. Just reformulate as: I was moved to a room at random; the inhabitants of some rooms, if any, were killed based on a coin flip; etc.

Do it anyway. Take a scenario in which the light is shut off while you are sleeping, or never turned on. What does waking up with the lights on (or off) tell you about the color of the door?

Even in A through D, the dead can’t update.

The state of the lights tells me nothing about the color of the door. Whatever color room I happen to be in, the coin toss will turn my lights on or off with 50% probability.

I don’t see what you intend me to learn from this example...

That dead or alive you are still most likely behind a blue door. You can use the lights being on as evidence just as well as your being alive.

That in B through D you are already updating based on your continued existence.

Beforehand you would expect a 50% chance of dying. Later, if you are alive, then the coin probably came up heads. In E and F, you wake up, you know the coin flip is in your past, and you know that most ‘survivors’ of situations like this come out of blue doors.

If you play Russian roulette and survive, you can have a much greater than ^{5}⁄_{6} confidence that the chamber wasn’t loaded.

You can be very certain that you have great-grandparents, given only your existence and basic knowledge about the world.

In E-F this is not correct. Your words “dead or alive” simply don’t apply: the dead observers never were alive (and conscious) in these scenarios. They were created and then destroyed without waking up. There is no possible sense in which “I” could be one of them; I am by definition alive now, or at least was alive at some point in the past. Even under the assumptions of the SIA, a universe with potential observers that never actually materialize isn’t the same as one with actual observers.

I still think that in E-F, I’m equally likely to be behind a blue or a red door.

Correct. The crucial difference is that in B-D I could have died but didn’t. In other Everett branches where the coin toss went the other way I did die. So I can talk about the probability of the branch where I survive, and update on the fact that I did survive.

But in E-F I could never have died! There is no branch of possibility where any conscious observer has died in E-F. That’s why no observer can update on being alive there; they are all alive with p=1.

Yes, because in our world there are people who fail to have grandchildren, and so there are potential grandchildren who don’t actually come to exist.

But in the world of scenarios E and F there is no one who fails to exist and to leave a “descendant” that is himself five minutes later...

I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument. Please ignore my comments below. (Should I delete in such cases?)

I wouldn’t delete, if nothing else it serves as a good example of working through the dissonance.

edit: It would also be helpful if you explained from your own perspective why you changed your mind.

Second James’s preference, and note that I find it useful as a reader to see an edit note of some sort in comments that are no longer supported.

I now understand that the argument in the article is correct (and p=.99 in all scenarios). The formulation of the scenarios caused me some kind of cognitive dissonance but now I no longer see a problem with the correct reading of the argument.

I’m not sure about the transition from A to B; it implies that, given that you’re alive, the probability of the coin having come up heads was 99%. (I’m not saying it’s wrong, just that it’s not immediately obvious to me.)

The rest of the steps seem fine, though.

Pr(heads|alive) / Pr(tails|alive) = {by Bayes} Pr(alive|heads) / Pr(alive|tails) = {by counting} (99/100) / (1/100) = {by arithmetic} 99, so Pr(heads|alive) = ^{99}⁄_{100}. Seems reasonable enough to me.

At B, if tails comes up (p=0.5) there are no blues; if heads comes up (p=0.5) there are no reds. So, depending only on the coin, with equal probability you will be red or blue.

It’s not unreasonable that the probability should change—since it initially depended on the number of people who were created, it should later depend on the number of people who were destroyed.

It doesn’t matter how many observers are in either set if all observers in a set experience the same consequences.

(I think. This is a tricky one.)

Existence is relative: there is a fact of the matter (or rather: a procedure to find out) about which things exist where relative to me, for example in the same room, or in the same world, but this concept breaks down when you ask about “absolute” existence. Absolute existence is inconsistent, as everything goes. Relative existence of yourself is a trivial question with a trivial answer.

(I just wanted to state it simply, even though this argument is part of a huge standard narrative. Of course, a global probability distribution can try to represent this relativity in its conditional forms, but it’s a rather contrived thing to do.)

Wha?

In the sense that “every mathematical structure exists”, the concept of “existence” is trivial, as from it follows every “structure”, which is after a fashion a definition of inconsistency (and so seems to be fair game for informal use of the term). Of course, “existence” often refers to much more meaningful “existence in the same world”, with reasonably constrained senses of “world”.

How do you know that?

An ensemble-type definition of existence seems more like an attempt to generalize the term than it does an empirical statement of fact. What would it even mean for a mathematical structure to not exist?

Your justification of the SIA requires a uniform prior over possible universes. (If the coin is biased, the odds are no longer 99:1.) I don’t see why the real-world SIA can assume uniformity, or what it even means. Otherwise, good post.

Note the line “weighted according to the probability of that observer existing”.

Imagine flipping a coin twice. If the coin comes up heads first, a universe A with one observer is created. If it comes up TH, a universe B with two observers is created, and if it comes up TT, a universe C with four observers is created.

From outside, the probabilities are A:1/2, B:1/4, C:1/4. Updating with SIA gives A:1/4, B:1/4, C:1/2.

No uniform priors assumed or needed.
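The update in this example can be written out explicitly; a minimal sketch of the SIA reweighting described above:

```python
# Outside-view priors for the three universes and their observer counts.
priors = {"A": 0.5, "B": 0.25, "C": 0.25}
observers = {"A": 1, "B": 2, "C": 4}

# SIA: reweight each universe by its number of observers, then renormalize.
weights = {u: priors[u] * observers[u] for u in priors}
total = sum(weights.values())
posterior = {u: w / total for u, w in weights.items()}
print(posterior)  # {'A': 0.25, 'B': 0.25, 'C': 0.5}
```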

His prior is uniform because uniform is max entropy. If your prior is less than max entropy, you must have had information to update on. What is your information?

No, you don’t get it. The space of possible universes may be continuous instead of discrete. What’s a “uniform” prior over an arbitrary continuous space that has no canonical parameterization? If you say Maxent, why? If you say Jeffreys, why?

It’s possible to have uniform distributions on continuous spaces. It just becomes probability density instead of probability mass.

The reason for max entropy is that you want your distribution to match your knowledge. When you know nothing, that’s maximum entropy, by definition. If you update on information that you don’t have, you probabilistically screw yourself over.

If you have a hard time drawing the space out and assigning the maxent prior, you can still use the indifference principle when asked about the probability of being in a larger universe vs a smaller universe.

Consider “antipredictions”. Say I ask you “is statement X true?” (you can’t update on my psychology, since I flipped a coin to determine whether to change X to !X). The max entropy answer is ^{50}⁄_{50}, and it’s just the indifference principle.

If I now tell you that X = “I will not win the lottery if I buy a ticket”, and you know nothing about what ball will come up, just that the number of winning numbers is small and the number of non-winning numbers is huge, you decide that it is very likely to be true. We’ve only updated on which distribution we’re even talking about. If you’re too confused to make that jump in a certain case, then don’t.

Or you could just say that for any possible non uniformity, it’s possible that there’s an opposite non uniformity that cancels it out. Whats the direction of the error?

Does that explain any better?

No, it doesn’t. In fact I don’t think you even parsed my question. Sorry.

Let’s simplify the problem: what’s your uninformative prior for “proportion of voters who voted for an unknown candidate”? Is it uniform on (0,1), which is given by maxent? What if I’d asked for your prior of the square of this value instead, masking it with some verbiage to sound natural; would you also reply uniform on (0,1)? Those statements are incompatible. In more complex real-world situations, how exactly do you choose the parameterization of the model to feed into maxent? I see no general way. See this Wikipedia page for more discussion of this problem. In the end it recommends the Jeffreys rule for use in practice, but it’s not obviously the final word.

I see what you’re saying, but I don’t think it matters here. That confusion extends to uncertainty about the nth digit of pi as well; it’s nothing new about different universes. If you put a uniform prior on the nth digit of pi, instead of uniform on the square of the nth digit or a Jeffreys prior, why don’t you do the same in the case of different universes? What prior do you use?

The point I tried to make in the last comment is that if you’re asked any question, you start with the indifference principle, which is uniform in nature, and upon receiving new information (perhaps the possibility that the original phrasing wasn’t the ‘natural’ way to phrase it, or however you solve the confusion), you can update. Since the problem never mentioned a method of parameterizing a continuous space of possible universes, it makes me wonder how you can object to assigning uniform priors given this parameterization, or even say that he required it.

Changing the topic of our discussion, it seems like your comment is also orthogonal to the claim being presented. He basically said “given this discrete set of two possible universes (with uniform prior), this ‘proves’ SIA (worded the first way)”. Given SIA, you know to update on your existence if you find yourself in a continuous space of possible universes, even if you don’t know where to update from.

If continuity of consciousness immortality arguments also hold, then it simply doesn’t matter whether doomsdays are close—your future will avoid those scenarios.

It “doesn’t matter” only to the extent that you care only about your own experiences, and not the broader consequences of your actions. And even then, it still matters, because if the doomsday argument holds, you should still expect to see a lot of OTHER people die soon.

Not if the world avoiding doomsday is more likely than me, in particular, surviving doomsday. I’d guess most futures in which I live have a lot of people like me living too.

SIA self-rebuttal.

If many different universes exist, and one of them contains an infinite number of all possible observers, SIA implies that I must be in it. But if an infinite number of all possible observers exists, the condition that I might not have been born does not apply in this universe, and I can’t apply SIA to the Earth’s fate. The doomsday argument is back on.

Just taking a wild shot at this one, but I suspect that the mistake is between C and D. In C, you start with an even distribution over all the people in the experiment, and then condition on surviving. In D, your uncertainty gets allocated among the people who have survived the experiment. Once you know the rules, in C, the filter is in your future, and in D, the filter is in your past.

I just wanted to follow up on this remark I made. There is a subtle anthropic selection effect that I didn’t include in my original analysis. As we will see, the result I derived applies if the time after is long enough, as in the SIA limit.

Let the amount of time before the killing be T1, and after (until all observers die), T2. So if there were no killing, P(after) = T2/(T2+T1). It is the ratio of the total measure of observer-moments after the killing divided by the total (after + before).

If the 1 red observer is killed (heads), then P(after|heads) = 99 T2 / (99 T2 + 100 T1)

If the 99 blue observers are killed (tails), then P(after|tails) = 1 T2 / (1 T2 + 100 T1)

P(after) = P(after|heads) P(heads) + P(after|tails) P(tails)

For example, if T1 = T2, we get P(after|heads) = 0.497, P(after|tails) = 0.0099, and P(after) = 0.497 (0.5) + 0.0099 (0.5) = 0.254

So here P(tails|after) = P(after|tails) P(tails) / P(after) = 0.0099 (.5) / (0.254) = 0.0195, or about 2%. So here we can be 98% confident to be blue observers if we are after the killing. Note, it is not 99%.

Now, in the relevant-to-SIA limit T2 >> T1, we get P(after|heads) ~ 1, P(after|tails) ~1, and P(after) ~1.

In this limit P(tails|after) = P(after|tails) P(tails) / P(after) ~ P(tails) = 0.5

So the SIA is false.
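The observer-moment arithmetic in this comment is easy to reproduce; a sketch of the commenter’s weighting scheme, with T1 and T2 as free parameters:

```python
def p_tails_after(t1, t2):
    """Posterior probability of tails, given you find yourself after the killing,
    weighting by total observer-moments before and after (per the comment)."""
    p_after_heads = 99 * t2 / (99 * t2 + 100 * t1)  # 1 red observer killed
    p_after_tails = 1 * t2 / (1 * t2 + 100 * t1)    # 99 blue observers killed
    p_after = 0.5 * p_after_heads + 0.5 * p_after_tails
    return 0.5 * p_after_tails / p_after

print(round(p_tails_after(1, 1), 4))      # 0.0195, about 2%
print(round(p_tails_after(1, 10**6), 4))  # 0.5, the T2 >> T1 limit
```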

The crucial step in your argumentation is from A to B. Here you are changing your a priori probabilities. Counterintuitively, the probability of dying is not ^{1}⁄_{2}.

This paradox is known as the Monty Hall problem: http://en.wikipedia.org/wiki/Monty_Hall_problem

The doomsday example, as phrased, simply doesn’t work.

Only about 5-10% of the ever-lived population is alive now. Thus, if doomsday happened, only about that percentage would see it within our generation. Not 66%. 5-10%.

Maybe 20%, if it happened in 50 years or so. The argument fails on its own merits: it assumes that because ^{2}⁄_{3} of the ever-human population will see doomsday, we should expect with ^{2}⁄_{3} probability to see doomsday; except that means we should also expect (with p=.67) that only 10% of the ever-human population will see doomsday. This doesn’t work. Indeed, if we think it’s very likely that ^{2}⁄_{3} of the ever-lived will be alive on doomsday, we should be almost certain that we are not among that ^{2}⁄_{3}.

More generally, the ^{2}⁄_{3} conclusion requires generational population tripling over many generations. This has not happened, and does not appear likely to happen. If ^{2}⁄_{3} of the ever-lived were alive today, and there were reason to believe that the population would continue to triple generationally, then this argument would begin to make sense. As it is, it simply doesn’t work, even if it sounds really cool.

Incidentally, the Wikipedia summary of the doomsday argument does not sound anything like this. It says (basically) that we’re probably around the halfway point of the ever-lived population. Thus, there probably won’t be too many more people, though such is certainly possible. It does not follow from this that ^{2}⁄_{3} of the ever-lived will be alive for doomsday; it only says that doomsday ought to happen relatively soon, though still probably several generations off.

I don’t object to the rest of the reasoning and the following argument, but the paraphrasing of the doomsday argument is a complete straw man and should be dismissed with a mere googling of “world population growth.” I’m not sure that the logic employed does anything against the actual DA.

“Other things equal” is a huge obstacle for me. Without formalizing “other things equal”, this is a piece of advice, not a theorem to be proved. I accept moving from A->F, but I don’t see how you’ve proved SIA in general.

How do I go about obtaining a probability distribution over all possible universes conditioned on nothing?

How do I get a distribution over universes conditioned on “my” existence? And what do I mean by “me” in universes other than this one?

Nobody really knows, but some people have proposed Kolmogorov complexity as the basis of such a prior. In short, the longer the computer program required to simulate something, the less probable it is. (The choice of which programming language to use is still a problem, though.)

That’s not the only problem. We don’t even know whether our universe is computable; e.g. physical constants can have uncomputable decimal expansions, like Chaitin’s Omega encoded into G. Are you really damn confident in assigning this possibility a prior of zero?

It amazes me that people will start with some particular prior over universes, then mention offhand that they also give significant probability to simulation from prior universes nearly unrelated to our own (except insofar as you generically expect simulators to prefer conditions close to their own). Then, should I believe that most universes that exist are simulations in infinite containing universes (that have room for all simulations of finite universes)? Yudkowsky’s recent “meta crossover” fan fiction touched on this.

Simulation is sexy in the same way that creation by gods used to be. Are there any other bridges that explain our universe in terms of some hidden variable?

How about this: leading up to the big crunch, some powerful engineer (or collective) tweaks the final conditions so that another (particular) universe is born after (I vaguely recall Asimov writing this). Does the idea of universes that restart periodically with information leakage between iterations change in any way our prior for universes-in-which-”we”-exist?

In my opinion, I only exist in this particular universe. Other universes in which similar beings exist are different. So p(universe|me) needs to be fleshed out better toward p(universe|something-like-me-in-that-xyz).

I guess we all realize that any p(universe|...) we give is incredibly flaky, which is my complaint. At least, if you haven’t considered all kinds of schemes for universes inside or caused by other universes, then you have to admit that your estimates could change wildly any time you encounter a new such idea.

I don’t need to. I just need to show that if we do get such a distribution (over possible universes, or over some such subset), then SIA updates these probabilities. If we can talk, in any way, about the relative likelihood of universe Y versus universe J, then SIA has a role to play.

SIA makes perfect sense to me, but I don’t see how it negates the doomsday argument at all. Can you explain further?

If the human race ends soon, there will be fewer people. Therefore, assign a lower prior to that. This cancels exactly the contribution from the doomsday argument.

And you get a prior arrived at through rationalization. Prior probability is not for grabs.

Oh, I see. How are we sure it cancels exactly, though?

See Bostrom’s paper.

Ah, that makes sense. In retrospect, this is quite simple:

If you have a box of ten eggs, numbered 1 through 10, and a box of a thousand eggs, numbered 1 through 1000, and the eggs are all dumped out on the floor and you pick up one labeled EGG 3, it’s just as likely to have come from the big box as the small one, since they both have only one egg labeled EGG 3.
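The cancellation in the egg analogy can be made explicit; a sketch, with the box sizes taken from the comment:

```python
# Two boxes of numbered eggs. An SIA-style prior weights each box by its egg
# count; the doomsday-style likelihood of the picked egg being the one labeled
# EGG 3, given a box of N eggs, is 1/N. The two factors cancel exactly.
boxes = {"small": 10, "big": 1000}
weights = {name: n * (1.0 / n) for name, n in boxes.items()}  # prior x likelihood
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}
print({name: round(p, 6) for name, p in posterior.items()})  # {'small': 0.5, 'big': 0.5}
```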

I don’t buy Bostrom’s argument against the presumptuous philosopher, though. Does anyone have a better one?

Sure.

Wrong. Your epistemic situation is no longer the same after the announcement.

In a single-run (one-small-world) scenario, the coin has a 50% chance to come up tails or heads. (In an MWI or large universe with similar situations, it would come up both, which changes the results. The MWI predictions match yours but don’t back the SIA.) Here I assume the single-run case.

The prior for the coin result is 0.5 for heads, 0.5 for tails.

Before the killing, P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99. So far we agree.

P(red|before) = 0.5 (0.01) + 0.5 (0.01) = 0.01

Afterwards, P’(red|heads) = 0, P’(red|tails) = 1, P’(blue|heads) = 1, P’(blue|tails) = 0.

P(red|after) = 0.5 (0) + 0.5 (1) = 0.5

So after the killing, you should expect either color door to be 50% likely.

This, of course, is exactly what the SIA denies. The SIA is obviously false.

So why does the result seem counterintuitive? Because in practice, and certainly when we evolved and were trained, single-shot situations didn’t occur.

So let’s look at the MWI case. Heads and tails both occur, but each with 50% of the original measure.

Before the killing, we again have P(heads) =P(tails) = 0.5

and P(red|heads) = P(red|tails) = 0.01 and P(blue|heads) = P(blue|tails) = 0.99.

Afterwards, P’(red|heads) = 0, P’(red|tails) = 1, P’(blue|heads) = 1, P’(blue|tails) = 0.

Huh? Didn’t I say it was different? It sure is, because afterwards, we no longer have P(heads) = P(tails) = 0.5. On the contrary, most of the conscious measure (# of people) now resides behind the blue doors. We now have for the effective probabilities P(heads) = 0.99, P(tails) = 0.01.

P(red|after) = 0.99 (0) + 0.01 (1) = 0.01

No; you need to apply Bayes’ theorem here. Intuitively, before the killing you are 99% sure you’re behind a blue door, and if you survive you should take it as evidence that “yay!” the coin in fact did not land tails (killing blue). Mathematically, you just have to remember to use your old posteriors as your new priors:

P(red|survival) = P(red)·P(survival|red)/P(survival) = 0.01·(0.5)/(0.5) = 0.01

So SIA + Bayesian updating happens to agree with the “quantum measure” heuristic in this case.

However, I am with Nick Bostrom in rejecting SIA in favor of his “Observation Equation” derived from “SSSA”, precisely because that is what maximizes the total wealth of your reference class (at least when you are not choosing whether to exist or to create duplicates).

Why do I get the feeling you’re shouting, Academician? Let’s not get into that kind of contest. Now here’s why you’re wrong:

P(red|before) = 0.01 is not equal to P(red). P(red) would be the probability of being in a red room given no information about whether the killing has occurred; i.e. no information about what time it is.

The killing is not just an information update; it’s a change in the # and proportions of observers.

Since (as I proved) P(red|after) = 0.5, while P(red|before) = 0.01, that means that P(red) will depend on how much time there is before as compared to after.

That also means that P(after) depends on the amount of time before as compared to after. That should be fairly clear. Without any killings or change in the # of observers, if there is twice as much time after an event X as before, then P(after X) = ^{2}⁄_{3}. That’s the fraction of observer-moments that are after X.

I omitted the “|before” for brevity, as is customary in Bayes’ theorem.

Cupholder’s excellent diagram should help make the situation clear. Here is a written explanation to accompany:

R = “you are in a red room”

K = “at some time, everyone in a red/blue room is killed according as a coin lands heads/tails”

H = “the killing has happened”

A = “you are alive”

P(R) means your subjective probability that you are in a red room, before knowing K or H. Once you know all three, by Bayes’ theorem:

P(R|KHA) = P(R)·P(KHA|R)/P(KHA) = 0.01·(0.5)/(0.5) = 0.01

I’d denote that by P(R|KA) -- with no information about H -- and you can check that it indeed equals 0.01. Again, Cupholder’s diagram is an easy way to see this intuitively. If you want a verbal/mathematical explanation, first note from the diagram that the probability of being alive in a red room before killings happen is also 0.01:

P(R|K~HA) = #(possible living observers in red rooms before killings)/#(possible living observers before killings) = 0.01

So we have P(R|KHA) = P(R|K~HA) = 0.01, and therefore by the usual independence trick,

P(R|KA) = P(RH|KA) + P(R~H|KA) = P(H|KA)·P(R|KHA) + P(~H|KA)·P(R|K~HA) = [P(H|KA)+P(~H|KA)]·0.01 = 0.01

So even when you know about a killing, but not whether it has happened, you still believe you are in a red room with probability 0.01.
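The “usual independence trick” above is just the law of total probability; a quick numeric check, where P(H|KA) can be anything in [0, 1]:

```python
p_R_given_KHA = 0.01     # alive in a red room, killing has happened
p_R_given_KnotHA = 0.01  # alive in a red room, killing has not yet happened

# P(R|KA) = P(H|KA)·P(R|KHA) + P(~H|KA)·P(R|K~HA); since both conditionals
# equal 0.01, the mixture is 0.01 whatever P(H|KA) is.
for p_H in (0.0, 0.3, 0.7, 1.0):
    p_R_given_KA = p_H * p_R_given_KHA + (1 - p_H) * p_R_given_KnotHA
    assert abs(p_R_given_KA - 0.01) < 1e-12
print("P(R|KA) = 0.01 for any P(H|KA)")
```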

That is not correct. The prior that is customary in using Bayes’ theorem is the one which applies in the absence of additional information, not before an event that changes the numbers of observers.

For example, suppose we know that x = 1, 2, or 3. Our prior assigns ^{1}⁄_{3} probability to each, so P(1) = ^{1}⁄_{3}. Then we find out “x is odd”, so we update, getting P(1|odd) = ^{1}⁄_{2}. That is the standard use of Bayes’ theorem, in which only our information changes.

OTOH, suppose that before time T there are 99 red-door observers and 1 blue-door one, and after time T, there is 1 red-door observer and 99 blue-door ones. Suppose also that there is the same amount of lifetime before and after T. If we don’t know what time it is, clearly P(red) = ^{1}⁄_{2}. That’s what P(red) means. If we know that it’s before T, and update on that info, we get P(red|before) = 0.99.

Note the distinction: “before an event” is not the same thing as “in the absence of information”. In practice, often it is equivalent, because we only learn info about the outcome after the event and because the number of observers stays constant. That makes it easy for people to get confused in cases where that no longer applies.

Now, suppose we ask a different question. Like in the case we were considering, the coin will be flipped and red or blue door observers will be killed; and it’s a one-shot deal. But now, there will be a time delay after the coin has been flipped but before any observers are killed. Suppose we know that we are such observers after the flip but before the killing.

During this time, what is P(red|after flip & before killing)? In this case, all 100 observers are still alive, so there are 99 blue door ones and 1 red door one, so it is 0.01. That case presents no problems for your intuition, because it doesn’t involve changes in the #’s of observers. It’s what you get with just an info update.

Then the killing occurs. Either 1 red observer is killed, or 99 blue observers are killed. Either outcome is equally likely.

In the actual resulting world, there is only one kind of observer left, so we can’t do an observer count to find the probabilities like we could in the many-worlds case (and as cupholder’s diagram would suggest). Whichever kind of observer is left, you can only be that kind, so you learn nothing about what the coin result was.

Actually, if we consider that you could have been an observer-moment either before or after the killing, finding yourself to be after it does increase your subjective probability that fewer observers were killed. However, this effect goes away if the amount of time before the killing was very short compared to the time afterwards, since you’d probably find yourself afterwards in either case; and the case we’re really interested in, the SIA, is the limit when the time before goes to 0.

See here

Given that others seem to be using it to get the right answer, consider that you may rightfully believe SIA is wrong because you have a different interpretation of it, which happens to be wrong. I am using an interpretation that works—that is, maximizes the total utility of equivalent possible observers—given *objectively-equally-likely hypothetical worlds* (otherwise it is indeed problematic).

That’s correct, and not an issue. In case it appears an issue, the beliefs in the update yielding P(R)=0.01 can be restated non-indexically (with no reference to “you” or “now” or “before”):

R = “person X is/was/will be in a red room”

K = “at some time, everyone in a red/blue room is killed according as a coin lands heads/tails”

S = “person X survives/survived/will survive said killing”

Anthropic reasoning just says “reason as if you are X”, and you get the right answer:

1) P(R|KS) = P(R|K)·P(S|RK)/P(S|K) = 0.01·(0.5)/(0.5) = 0.01

If you still think this is wrong, and you want to be prudent about the truth, try finding which term in equation (1) is incorrect and which possible-observer count makes it so. In your analysis, be sure you only use SIA *once* to declare equal likelihood of possible-observers (it’s easiest at the beginning), and be *explicit* when you use it. Then use evidence to constrain which of those equally-likely folk you might actually be, and you’ll find that 1% of them are in red rooms, so SIA gives the right answer in this problem.

Cupholder’s diagram, ignoring its frequentist interpretation if you like, is a good aid to count these equally-likely folk.
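That procedure can be sketched directly: use SIA once to make all 200 possible observers equally likely, then use the evidence (the killing happened and you survived) to constrain which of them you might be. A minimal Python sketch with hypothetical labels:

```python
from fractions import Fraction

# SIA, used once: each of the 200 possible observers (100 rooms in each of
# the two equally likely coin-worlds) is a priori equally likely to be "you".
observers = [(coin, room) for coin in ("heads", "tails")
             for room in ["red"] + ["blue"] * 99]

def survives(coin, room):
    # heads kills the red-room occupant; tails kills the blue-room occupants
    return not ((coin == "heads" and room == "red") or
                (coin == "tails" and room == "blue"))

# Evidence: the killing has happened and you survived.
survivors = [(c, r) for c, r in observers if survives(c, r)]
p_red = Fraction(sum(r == "red" for _, r in survivors), len(survivors))
print(p_red)  # 1/100
```

Of the 100 equally-likely possible survivors, exactly 1 is in a red room, matching equation (1).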

SIA doesn’t ask you to count observers in the “actual world”. It applies to *objectively-equally-likely hypothetical worlds*: http://en.wikipedia.org/wiki/Self-Indication_Assumption

“SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.”

Quantitatively, to work properly it says to consider any two observer moments in objectively-equally-likely hypothetical worlds as equally likely. Cupholder’s diagram represents objectively-equally-likely hypothetical worlds in which to count observers, so it’s perfect.

Some warnings:

make sure SIA isn’t the *only* information you use… you have to constrain the set of observers you’re in (your “reference class”), using any evidence like “the killing has happened”.

don’t count observers before and after the killing as equally likely—they’re not in objectively-equally-likely hypothetical worlds. Each world-moment before the killing is twice as objectively-likely as the world-moments after it.

Huh? I haven’t been *using* the SIA, I have been attacking it by deriving the right answer from general considerations (that is, P(tails) = ^{1}⁄_{2} for the 1-shot case in the long-time-after limit) and noting that the SIA is inconsistent with it. The result of the SIA is well known—in this case, 0.01; I don’t think anyone disputes that.

Dead men make no observations. The equation you gave is fine for before the killing (for guessing what color you will be if *you* survive), not for after (when the set of observers is no longer the same).

So, if you are after the killing, you can only be one of the living observers. This is an anthropic selection effect. If you want to simulate it using an outside ‘observer’ (who we will have to assume is not in the reference class; perhaps an unconscious computer), the equivalent would be interviewing the survivors.

The computer will interview all of the survivors. So in the 1-shot case, there is a 50% chance it asks the red door survivor, and a 50% chance it talks to the 99 blue door ones. They all get an interview because all survivors make observations and we want to make it an equivalent situation. So if you get interviewed, there is a 50% chance that you are the red door one, and a 50% chance you are one of the blue door ones.
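A simulation over repeated independent runs (which, note, is really the many-shot picture) can at least separate the two counts at issue here: the fraction of runs in which the interviewed group is the lone red survivor, versus the fraction of all individual interviews that are with a red-door survivor. A sketch, using the same heads-kills-red convention as above:

```python
import random

random.seed(0)
trials = 100_000
red_worlds = red_interviews = total_interviews = 0
for _ in range(trials):
    heads = random.random() < 0.5
    # heads: red-room occupant killed; tails: the 99 blue-room occupants killed
    survivors = ["blue"] * 99 if heads else ["red"]
    red_worlds += survivors == ["red"]
    red_interviews += survivors.count("red")
    total_interviews += len(survivors)

# ~0.5: runs where the interviewed group is the red survivor
print(round(red_worlds / trials, 2))
# ~0.01: share of all interviews that are with a red survivor
print(round(red_interviews / total_interviews, 2))
```

The 50% figure is a fact about worlds; the 1% figure is a fact about interviews. Which one governs a given survivor’s credence is exactly the point in dispute.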

Note that if the computer were to interview just *one* survivor at random in either case, then being interviewed would be strong evidence of being the red one, because if the 99 blue ones are the survivors you’d just have a 1 in 99 chance of being picked. P(red) > P(blue). This modified case shows the power of selection.

Of course, we can consider intermediate cases in which N of the blue survivors would be interviewed; then P(blue) approaches 50% as N approaches 99.

The analogous modified MWI case would be for it to interview both the red survivor and one of the blue ones; of course, each survivor has half the original measure. In this case, being interviewed would provide no evidence of being the red one, because now you’d have a 1% chance of being the red one and the same chance of being the blue interviewee. The MWI version (or equivalently, many runs of the experiment, which may be anywhere in the multiverse) negates the selection effect.

If you are having trouble following my explanations, maybe you’d prefer to see what Nick Bostrom has to say. This paper talks about the equivalent Sleeping Beauty problem. The main interesting part is near the end where he talks about his own take on it. He correctly deduces that the probability for the 1-shot case is ^{1}⁄_{2}, and for the many-shot case it approaches ^{1}⁄_{3} (for the SB problem). I disagree with his ‘hybrid model’ but it is pretty easy to ignore that part for now.

Also of interest is this paper which correctly discusses the difference between single-world and MWI interpretations of QM in terms of anthropic selection effects.

Let me instead ask a simple question: would you actually *bet* like you’re in a red room?

Suppose you were told the killing had happened (as in the right column of Cupholder’s diagram), and were required to guess the color of your room, with the following payoffs:

Guess red correctly → you earn $1.50

Guess blue correctly → you earn $1.00

Guess incorrectly → you are terribly beaten.

Would you guess red? Knowing that under independent repeated or parallel instances of this scenario (although merely hypothetical if you are concerned with the “number of shots”),

“guess red” mentality typically leads to large numbers of people (99%) being terribly beaten,

“guess blue” mentality leads to large numbers of people (99%) earning $1 and not being beaten,

and that this is not an interactive scenario like the Prisoner’s dilemma (which is interactive in a way that creates a sharp distinction between group rationality and individual rationality), would you still guess “red”? Not me. I would take my survival as evidence that blue rooms were not killed, and guess blue.

If you would guess “blue” for “other reasons”, then we would exhibit the same behavior, and I have nothing more to discuss. At least in this case, our semantically different ways of managing possibilities are resulting in the same decision, which is what I consider important. You may disagree about this importance, but I apologize that I’m not up for another comment thread of this length.

If you would really guess “red”, then I have little more to say than to ask you to reconsider your actions, and to again excuse me from this lengthy discussion.

The way you set up the decision is not a fair test of belief, because the stakes are more like $1.50 to $99.

To fix that, we need to make 2 changes:

1) Let us give any reward/punishment to a third party we care about, e.g. SB.

2) The total reward/punishment she gets won’t depend on the number of people who make the decision. Instead, we will poll *all of the survivors* from all trials and pool the results (or we can pick 1 survivor at random, but let’s do it the first way). The majority decides what guess to use, on the principle of one man, one vote. That is surely what we want from our theory—for the majority of observers to guess optimally.

Under these rules, if I know it’s the 1-shot case, I should guess red, since the chance is 50% and the payoff to SB is larger. Surely you see that SB would prefer us to guess red in this case.
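The 1-shot expected payoff to SB can be computed directly (a sketch, using the hypothetical $1.50/$1.00 stakes from earlier in the thread, and assuming every survivor votes the same strategy, so the majority guess just is that strategy):

```python
from fractions import Fraction

# One-shot: heads -> the 99 blue-roomers survive; tails -> the 1 red-roomer.
# Stakes to SB: $1.50 for a correct "red" guess, $1.00 for a correct "blue".
def expected_payoff(strategy):
    payoff = {"red": Fraction(3, 2), "blue": Fraction(1)}
    win_prob = Fraction(1, 2)  # "red" is right in the tails world, "blue" in heads
    return win_prob * payoff[strategy]

print(expected_payoff("red"), expected_payoff("blue"))  # 3/4 1/2
```

Either unanimous strategy is right in exactly one of the two equally likely worlds, so the higher-paying “red” guess wins in expectation, $0.75 to $0.50.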

OTOH if I know it’s the multi-shot case, the majority will probably be blue, so I should guess blue.

In practice, of course, it will be the multi-shot case. The universe (and even the population of Earth) is large; besides, I believe in the MWI of QM.

The practical significance of the distinction has nothing to do with casino-style gambling. It is more that 1) it shows that the MWI can give different predictions from a single-world theory, and 2) it disproves the SIA.

Is that a “yes” or a “no” for the scenario as I posed it?

I agree. It is only possible to fairly “test” beliefs when a related objective probability is agreed upon, which for us is clearly a problem. So my question remains unanswered, to see if we disagree *behaviorally*:

That’s not my intention. To clarify, assume that:

the other prisoners’ decisions are totally independent of yours (perhaps they are irrational), so that you can in no sense cause 99 real other people to guess blue and achieve a $99 payoff with only one beating, and

the payoffs/beatings are really to the prisoners, not someone else.

Then, as I said, *in that scenario* I would guess that I’m in a blue room. Would you really guess “red”, or do we agree?

(My “reasons” for blue would be to note that I started out overwhelmingly (99%) likely to be in a blue room, and that my surviving the subsequent coin toss is evidence that it did not land tails and kill blue-roomed prisoners, or equivalently, that counterfactual-typically, people guessing red would result in a great deal of torture. But please forget *why*; I just want to know what you would *do*.)

That’s wrong; behavioral tests (properly set up) can reveal what people really believe, bypassing talk of probabilities.

Under the strict conditions above and the other conditions I have outlined (long-time-after, no other observers in the multiverse besides the prisoners), then sure, I’d be a fool not to guess red.

But I wouldn’t recommend it to others, because if there are more people, that would only happen in the blue case. This is a case in which the number of observers depends on the unknown, so maximizing expected average utility (which is appropriate for decision theory for a given observer) is not the same as maximizing expected total utility (appropriate for a class of observers).

More tellingly, once I find out the result (and obviously the result becomes known when I get paid or punished), if it is red, I would not be surprised. (Could be either, 50% chance.)

Now that I’ve answered your question, it’s time for you to answer mine: what would you vote, given that the majority of votes determines what SB gets? If you really believe you are probably in a blue room, it seems to me that you should vote blue; and it seems obvious that would be irrational.

Then if you find out it was red, would you be surprised?

So in my scenario, groups of people like you end up with 99 survivors being tortured or 1 not, with equal odds (despite that their actions are independent and non-competitive), and groups of people like me end up with 99 survivors not tortured or 1 survivor tortured, with equal odds.

Let’s say I’m not asserting that means I’m “right”. But consider that your behavior may be more due to a ritual of cognition rather than systematized winning.

You might respond that “rationalists win” is itself a ritual of cognition to be abandoned. More specifically, maybe you disagree that “whatever rationality is, it should fare well-in-total, on average, in non-competitive thought experiments”. I’m not sure what to do about that response.

In your scenario, I’d vote red, because when the (independent!) players do that, her expected payoff is higher. More precisely, if I model the others randomly, me voting red increases the probability that SB lands in a world with a majority “red” vote, increasing her expectation.

This may seem strange because I am playing by an Updateless strategy. Yes, in my scenario I act 99% sure that I’m in a blue room, and in yours I guess red, even though they have same assumptions regarding my location. Weird eh?

What’s happening here is that I’m *planning ahead* to do what wins, and planning isn’t always intuitively consistent with updating. Check out The Absent Minded Driver for another example where planning typically outperforms naive updating. Here’s another scenario, which involves interactive planning.

To be honest with you, I’m not sure how the “surprise” emotion is supposed to work in scenarios like this. It might even be useless. That’s why I base my actions on instrumental reasoning rather than rituals of cognition like “don’t act surprised”.

By the way, you are certainly not the first to feel the weirdness of time inconsistency in optimal decisions. That’s why there are so many posts working on decision theory here.

Under a frequentist interpretation it is not possible for the equation to work pre-killing and yet not work post-killing: if one’s estimate of P(R|KS) = 0.01 is correct, that implies one has correctly estimated the relative frequency of having been red-doored given that one survives the killing. That estimate of the relative frequency cannot then change after the killing, because that is precisely the situation for which the relative frequency was declared correct!

I don’t agree, because in my judgment the greater number of people initially behind blue doors skews the probability in favor of ‘you’ being behind a blue door.

Reading Bostrom’s explanation of the SB problem, and interpreting ‘what should her credence be that the coin will fall heads?’ as a question asking the relative frequency of the coin coming up heads, it seems to me that the answer is ^{1}⁄_{2} however many times Sleeping Beauty’s later woken up: the fair coin will always be tossed after she awakes on Monday, and a fair coin’s probability of coming up heads is ^{1}⁄_{2}.

In the 1-shot case, the whole concept of a frequentist interpretation makes no sense. Frequentist thinking invokes the many-shot case.

I am surprised you think so, because you seem stuck in many-shot thinking, which gives ^{1}⁄_{3}.

Maybe you are asking the wrong question. The question is: given that she wakes up on Monday or Tuesday and doesn’t know which, what is her credence that the coin actually fell heads? Obviously in the many-shot case, she will be woken up twice as often during experiments where it fell tails, so in ^{2}⁄_{3} of her wakeups the coin will be tails.

In the 1-shot case that is not true: either she wakes up once (heads) or twice (tails), with 50% chance of either.

Consider the 2-shot case. Then we have 4 possibilities:

coins , days , fraction of actual wakeups where it’s heads

HH , M M , 1

HT , M M T , ^{1}⁄_{3}

TH , M T M , ^{1}⁄_{3}

TT , M T M T , 0

Now P(heads) = (1 + ^{1}⁄_{3} + ^{1}⁄_{3} + 0) / 4 = ^{5}⁄_{12} = 0.417

Obviously as the number of trials increases, P(heads) will approach ^{1}⁄_{3}.

This is assuming that she is the only observer and that the experiments are her whole life, BTW.
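This calculation generalizes to any number of shots by enumerating the equally likely coin sequences. A short sketch:

```python
from fractions import Fraction
from itertools import product

def p_heads(n_shots):
    # Average, over equally likely coin sequences, of the fraction of
    # wakeups in that sequence occurring during a heads experiment
    # (heads -> 1 wakeup, tails -> 2 wakeups).
    seqs = list(product("HT", repeat=n_shots))
    total = Fraction(0)
    for seq in seqs:
        wakeups = sum(1 if c == "H" else 2 for c in seq)
        heads_wakeups = sum(1 for c in seq if c == "H")
        total += Fraction(heads_wakeups, wakeups)
    return total / len(seqs)

print(p_heads(2))   # 5/12
print(p_heads(10))  # approaches 1/3 as n_shots grows
```

For 2 shots this reproduces the ^{5}⁄_{12} above, and the value drifts toward ^{1}⁄_{3} as the number of shots increases.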

This should be a weighted average, reflecting how many coin flips are observed in the four cases:

There are always 2 coin flips, and the results are not known to SB. I can’t guess what you mean, but I think you need to reread Bostrom’s paper.

It seems I was solving an equivalent problem. In the formulation you are using, the weighted average should reflect the number of wakeups.

What this result means is that SB should expect, with probability ^{1}⁄_{3}, that if she were shown the result of the coin toss, she would observe that the result was heads.

No, it shouldn’t—that’s the point. Why would you think it should?

Note that I am already taking observer-counting into account—among observers that actually exist in each coin-outcome-scenario. Hence the fact that P(heads) approaches ^{1}⁄_{3} in the many-shot case.

Maybe I misunderstand what the frequentist interpretation involves, but I don’t think the 2nd sentence implies the 1st. If I remember rightly, a frequentist interpretation of probability as long-run frequency in the case of Bernoulli trials (e.g. coin flips) can be justified with the strong law of large numbers. So one can do that mathematically without actually flipping a coin arbitrarily many times, from a definition of a single Bernoulli trial.

My initial interpretation of the question seems to differ from the intended one, if that’s what you mean.

This subtly differs from Bostrom’s description, which says ‘When she awakes on Monday’, rather than ‘Monday or Tuesday.’ I think your description probably better expresses what Bostrom is getting at, based on a quick skim of the rest of Bostrom’s paper, and also because your more complex description makes both of the answers Bostrom mentions (^{1}⁄_{2} and ^{1}⁄_{3}) defensible: depending on how I interpret you, I can extract either answer from the one-shot case, because the interpretation affects how I set up the relative frequency.

If I count how many times on average the coin comes up heads *per time it is flipped*, I must get the answer ^{1}⁄_{2}, because the coin is fair.

If I count how many times on average the coin comes up heads *per time SB awakes*, the answer is ^{1}⁄_{3}. Each time I redo the ‘experiment,’ SB has a 50% chance of waking up twice with the coin tails, and a 50% chance of waking up once with the coin heads. So on average she wakes up 0.5×2 + 0.5×1 = 1.5 times, and 0.5×1 = 0.5 of those 1.5 times correspond to heads: hence 0.5/1.5 = ^{1}⁄_{3}.

I’m guessing that the Bayesian analog of these two possible thought processes would be something like

SB asking herself, ‘if I were the coin, what would I think my chance of coming up heads was whenever I’m awake?’

SB asking herself, ‘from my point of view, what is the coin about to be/was the coin yesterday whenever I wake up?’

but I may be wrong. At any rate, I haven’t thought of a rationale for your 2-shot calculation. Repeating the experiment twice shouldn’t change the relative frequencies—they’re relative! So the 2-shot case should still have ^{1}⁄_{2} or ^{1}⁄_{3} as the only justifiable credences.

(Edited to fix markup/multiplication signs.)
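Both of those relative frequencies (heads per flip, heads per wakeup) are easy to confirm by simulation. A sketch:

```python
import random

random.seed(1)
flips = heads = wakeups = heads_wakeups = 0
for _ in range(100_000):
    is_heads = random.random() < 0.5
    flips += 1
    heads += is_heads
    w = 1 if is_heads else 2  # heads: wake Monday only; tails: Monday and Tuesday
    wakeups += w
    heads_wakeups += 1 if is_heads else 0

print(round(heads / flips, 2))            # ~0.5  (heads per flip)
print(round(heads_wakeups / wakeups, 2))  # ~0.33 (heads per wakeup)
```

Same coin, same runs; the two answers differ only in what is being counted in the denominator.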

He makes clear though that she doesn’t know which day it is, so his description is equivalent. He should have written it more clearly, since it can be misleading on the first pass through his paper, but if you read it carefully you should be OK.

‘On average’ gives you the many-shot case, by definition.

In the 1-shot case, there is a 50% chance she wakes up once (heads), and a 50% chance she wakes up twice (tails). They don’t *both* happen.

In the 2-shot case, the four possibilities are as I listed. Now there is both uncertainty in what really happens objectively (the four possible coin results), and then, given the real situation, relevant uncertainty about which of the real person-wakeups is the one *she’s* experiencing (upon which *her* coin result can depend).

I think I essentially agree with this comment, which feels strange because I suspect we would continue to disagree on a number of the points we discussed upthread!

Saw this come up in Recent Comments, taking the opportunity to simultaneously test the image markup and confirm Academian’s Bayesian answer using boring old frequentist probability. Hope this isn’t too wide… (Edit: yup, too wide. Here’s a smaller-albeit-busier-looking version.)

Cupholder:

That is an excellent illustration … of the many-worlds (or many-trials) case. Frequentist counting works fine for repeated situations.

The one-shot case requires Bayesian thinking, not frequentist. The answer I gave is the correct one, because observers do not gain any information about whether the coin was heads or tails. The number of observers that see each result is not the same, but the only observers that actually see any result afterwards are the ones in either heads-world or tails-world; you can’t count them all as if they all exist.

It would probably be easier for you to understand an equivalent situation: instead of a coin flip, we will use the 1 millionth digit of pi in binary notation. There is only one actual answer, but assume we don’t have the math skills and resources to calculate it, so we use Bayesian subjective probability.

Cupholder managed to find an analogous problem in which the Bayesian subjective probabilities mapped to the same values as frequentist probabilities, so that the frequentist approach really gives the same answer. Yes, it would be nice to just accept subjective probabilities so you don’t have to do that, but the answer Cupholder gave is correct.

The analysis you label “Bayesian”, on the other hand, is incorrect. After you notice that you have survived the killing you should update your probability that the coin showed tails to

P(tails|S) = P(tails)·P(S|tails)/P(S) = 0.5·(0.01)/(0.5) = 0.01

so you can then calculate

P(B|S) = P(B|tails,S)·P(tails|S) + P(B|heads,S)·P(heads|S) = 0·(0.01) + 1·(0.99) = 0.99

Or, as Academian suggested, you could have just updated to directly find

P(B|S) = P(B)·P(S|B)/P(S) = 0.99·(0.5)/(0.5) = 0.99

I disagree, but I am inclined to disagree by default: one of the themes that motivates me to post here is the idea that frequentist calculations are typically able to give precisely the same answer as Bayesian calculations.

I also see no trouble with wearing my frequentist hat when thinking about single coin flips: I can still reason that *if* I flipped a fair coin arbitrarily many times, the relative frequency of a head converges almost surely to one half, and that relative frequency represents my chance of getting a head on a single flip.

I believe that the observers who survive would. To clarify my thinking on this, I considered doing this experiment with a trillion doors, where one of the doors is again red, and all of the others blue. Let’s say I survive this huge version of the experiment.

As a survivor, I know I was almost certainly behind a blue door to start with. Hence a tail would have implied my death with near certainty. Yet I’m *not* dead, so it is extremely unlikely that I got tails. That means I almost certainly got heads. I have gained information about the coin flip.

I think talking about ‘observers’ might be muddling the issue here. We could talk instead about creatures that don’t understand the experiment, and the result would be the same. Say we have two Petri dishes, one dish containing a single bacterium, and the other containing a trillion. We randomly select one of the bacteria (representing me in the original door experiment) to stain with a dye. We flip a coin: if it’s heads, we kill the lone bacterium, otherwise we put the trillion-bacteria dish into an autoclave and kill all of those bacteria. Given that the stained bacterium survives the process, it is far more likely that it was in the trillion-bacteria dish, so it is far more likely that the coin came up heads.

I don’t think of the pi digit process as equivalent. Say I interpret ‘pi’s millionth bit is 0’ as heads, and ‘pi’s millionth bit is 1’ as tails. If I repeat the door experiment many times using pi’s millionth bit, whoever is behind the red door must die, and whoever’s behind the blue doors must survive. And that is going to be the case whether I ‘have the math skills and resources to calculate’ the bit or not. But it’s not going to be the case if I flip fair coins, at least as flipping a fair coin is generally understood in this kind of context.

That would be like repeating the coin version of the experiment many times, using the exact same coin (in the exact same condition), flipping it in the exact same way, in the exact same environment. Even though you don’t know all these factors of the initial conditions, or have the computational power to draw conclusions from it, the coin still lands the same way each time.

Since you are willing to suppose that these initial conditions are different in each trial, why not analogously suppose that in each trial of the digit-of-pi version of the experiment you compute a different digit of pi, or, more generally, that in each trial you compute a different logical fact that you were initially completely ignorant about?

Yes, I think that would work—if I remember right, zeroes and ones are equally likely in pi’s binary expansion, so it would successfully mimic flipping a coin with random initial conditions. (ETA: this is interesting. Apparently pi’s not yet been shown to have this property. Still, it’s plausible.)
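For what it’s worth, the empirical balance is easy to check for the early bits. A sketch using Machin’s formula pi = 16·arctan(1/5) − 4·arctan(1/239) in fixed-point integer arithmetic, printing the first fractional bits and the 0/1 balance over the first few thousand:

```python
def arctan_inv(x, prec):
    # Fixed-point arctan(1/x), scaled by 2**prec (alternating Taylor series).
    total, term, k = 0, (1 << prec) // x, 0
    while term:
        total += term // (2 * k + 1) if k % 2 == 0 else -(term // (2 * k + 1))
        term //= x * x
        k += 1
    return total

def pi_fractional_bits(nbits, guard=32):
    # Machin's formula, with guard bits to absorb truncation error.
    prec = nbits + guard
    pi_fp = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return bin(pi_fp >> guard)[4:]  # strip '0b' and the integer part (3 = '11')

bits = pi_fractional_bits(4000)
print(bits[:16])                              # 0010010000111111
print(round(bits.count("0") / len(bits), 2))  # close to 0.5
```

The first fractional bits 00100100 00111111… match pi = 3.243F6A88… in hex, and the zero/one counts over the first 4000 bits come out near even, though (as noted) equidistribution is only conjectured.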

This would also work, so long as your bag of facts is equally distributed between true facts and false facts.

That’s probably why you don’t understand the result; it is an anthropic selection effect. See my reply to Academician above.

That is not an analogous experiment. Typical survivors are not pre-selected individuals; they are post-selected, from the pool of survivors only. The analogous experiment would be to choose one of the surviving bacteria *after* the killing and then stain it. To stain it before the killing risks it not being a survivor, and that can’t happen in the case of anthropic selection among survivors.

That’s because you erroneously believe that your frequency interpretation works. The math problem has only *one* answer, which makes it a perfect analogy for the 1-shot case.

Okay.

I believe that situations *A* and *B* which you quote from Stuart_Armstrong’s post involve pre-selection, not post-selection, so maybe that is why we disagree. I believe that because the descriptions of the two situations refer to ‘you’ - that is, me—which makes me construct a mental model of me being put into one of the 100 rooms at random. In that model my pre-selected consciousness is at issue, not that of a post-selected survivor.

By ‘math problem’ do you mean the question of whether pi’s millionth bit is 0? If so, I disagree. The 1-shot case (which I think you are using to refer to situation *B* in Stuart_Armstrong’s top-level post...?) describes a situation defined to have multiple possible outcomes, but there’s only one outcome to the question ‘what is pi’s millionth bit?’

Presumably you heard the announcement.

This is post-selection, because pre-selection would have been “*Either you are dead, or* you hear that whoever was to be killed has been killed. What are your odds of being blue-doored now?”

There’s only *one* outcome in the 1-shot case. The fact that there are multiple “possible” outcomes is irrelevant—all that means is that, like in the math case, you don’t have knowledge of which outcome it is.

The ‘selection’ I have in mind is the selection, at the beginning of the scenario, of the person designated by ‘you’ and ‘your’ in the scenario’s description. The announcement, as I understand it, doesn’t alter the selection in the sense that I think of it, nor does it generate a new selection: it just indicates that ‘you’ happened to survive.

I continue to have difficulty accepting that the millionth bit of pi is just as good a random bit source as a coin flip. I am picturing a mathematically inexperienced programmer writing a (pseudo)random bit-generating routine that calculated the millionth digit of pi and returned it. Could they justify their code by pointing out that *they* don’t know what the millionth digit of pi is, and so they can treat it as a random bit?

Not seriously: http://www.xkcd.com/221/

Seriously: You have no reason to believe that the millionth bit of pi goes one way or the other, so you should assign equal probability to each.

However, just like the xkcd example would work better if the computer actually rolled the die for you every time rather than just returning ‘4’, the ‘millionth bit of pi’ algorithm doesn’t work well because it only generates a random bit once (amongst other practical problems).

In most pseudorandom generators, you can specify a ‘seed’ which will get you a fixed set of outputs; thus, you could every time restart the generator with the seed that will output ‘4’ and get ‘4’ out of it deterministically. This does not undermine its ability to be a random number generator. One common way to seed a random number generator is to simply feed it the current time, since that’s as good as random.

Looking back, I’m not certain if I’ve answered the question.

I think so: I’m inferring from your comment that the principle of indifference is a rationale for treating a deterministic-but-unknown quantity as a random variable. Which I can’t argue with, but it still clashes with my intuition that any casino using the millionth bit of pi as its PRNG should expect to lose a lot of money.

I agree with your point on arbitrary seeding, for whatever it’s worth. Selecting an arbitrary bit of pi at random to use as a random bit amounts to a coin flip.

I’d be extremely impressed if a mathematically inexperienced programmer could pull off a program that calculated the millionth digit of pi!

I say yes (assuming they only plan on treating it as a random bit once!)

If ‘you’ were selected at the beginning, then you might not have survived.

Yeah, but the description of the situation asserts that ‘you’ happened to survive.

Adding that condition is post-selection.

Note that “If you (being asked before the killing) will survive, what color is your door likely to be?” is *very* different from “Given that you did already survive, …?”. A member of the population to which the first of these applies might not survive. This changes the result. It’s the difference between pre-selection and post-selection.

I’ll try to clarify what I’m thinking of as the relevant kind of selection in this exercise. It is true that the condition effectively picks out—that is, selects—the probability branches in which ‘you’ don’t die, but I don’t see that kind of selection as relevant here, because (by my calculations, if not your own) it has no impact on the probability of being behind a blue door.

What sets your probability of being behind a blue door is the problem specifying that ‘you’ are the experimental subject concerned: that gives me the mental image of a film camera, representing my mind’s eye, following ‘you’ from start to finish - ‘you’ are the specific person who has been selected. I don’t visualize a camera following a survivor randomly selected post-killing. That is what leads me to think of the relevant selection as happening pre-killing (hence ‘pre-selection’).

If that were the case, the camera might show the person being killed; indeed, that is 50% likely.

Pre-selection is not the same as our case of post-selection. My calculation shows the difference it makes.

Now, if the fraction of observers of each type that are killed is the same, the difference between the two selections cancels out. That is what tends to happen in the many-shot case, and we can then replace probabilities with relative frequencies. One-shot probability is not relative frequency.
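The two selection rules, and the way the difference between them washes out when survivors are pooled across many trials, can be illustrated by simulation (which is, of course, itself a many-shot setting). A sketch comparing three counts:

```python
import random

random.seed(2)
trials = 100_000
pre_blue = pre_total = 0        # follow one fixed person; condition on survival
per_trial_blue = 0              # post-select a random survivor within each trial
pooled_blue = pooled_total = 0  # post-select from survivors pooled over all trials
for _ in range(trials):
    rooms = ["red"] + ["blue"] * 99
    you = random.choice(rooms)
    heads = random.random() < 0.5
    killed = "red" if heads else "blue"  # heads kills red; tails kills the blues
    survivors = [r for r in rooms if r != killed]
    if you != killed:
        pre_total += 1
        pre_blue += you == "blue"
    per_trial_blue += random.choice(survivors) == "blue"
    pooled_blue += survivors.count("blue")
    pooled_total += len(survivors)

print(round(pre_blue / pre_total, 2))        # ~0.99 (pre-selection)
print(round(per_trial_blue / trials, 2))     # ~0.5  (per-trial post-selection)
print(round(pooled_blue / pooled_total, 2))  # ~0.99 (pooled post-selection)
```

Pre-selection conditioned on survival gives ~0.99; post-selecting one survivor within a single trial gives ~0.5; pooling survivors across all trials brings post-selection back to ~0.99, which is the sense in which the difference cancels in the many-shot case.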

Yep. But Stuart_Armstrong’s description is asking us to condition on the camera showing ‘you’ surviving.

It looks to me like we agree that pre-selecting someone who *happens* to survive gives a different result (99%) to post-selecting someone from the pool of survivors (50%) - we just disagree on which case SA had in mind. Really, I guess it doesn’t matter much if we agree on what the probabilities are for the pre-selection v. the post-selection case.

I am unsure how to interpret this...
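The 99% (pre-selection, conditioned on survival) and 50% (post-selection from the survivor pool) figures from this exchange can be checked with a quick Monte Carlo sketch of the hundred-room setup. This is a rough simulation, with room 0 standing in for the red door; the seed and sample size are arbitrary:

```python
import random

random.seed(0)
N = 100_000

# Pre-selection: a camera follows one fixed person from the start;
# we then condition on that person surviving the coin flip.
pre_blue = pre_survived = 0
for _ in range(N):
    you = random.randrange(100)      # room 0 has the red door
    heads = random.random() < 0.5    # heads: red-door occupant dies
    survived = (you != 0) if heads else (you == 0)
    if survived:
        pre_survived += 1
        pre_blue += (you != 0)

# Post-selection: after the killing, pick a random survivor.
post_blue = 0
for _ in range(N):
    heads = random.random() < 0.5
    survivors = list(range(1, 100)) if heads else [0]
    post_blue += (random.choice(survivors) != 0)

print(pre_blue / pre_survived)  # close to 0.99
print(post_blue / N)            # close to 0.50
```

The two loops differ only in when the person of interest is chosen, which is exactly the pre- versus post-selection distinction being argued over.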

...but I’m fairly sure I disagree with this. If we do Bernoulli trials with success probability *p* (like coin flips, which are equivalent to Bernoulli trials with *p* = 0.5), I believe the strong law of large numbers implies that the relative frequency converges almost surely to *p* as the number of Bernoulli trials becomes arbitrarily large. As *p* represents the ‘one-shot probability,’ this justifies interpreting the relative frequency in the infinite limit as the ‘one-shot probability.’

That condition imposes post-selection.
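The convergence claim about Bernoulli trials made above can be illustrated with a toy sketch (fair-coin trials with *p* = 0.5; the sample sizes are arbitrary):

```python
import random

random.seed(1)
p = 0.5  # one-shot success probability of each Bernoulli trial

for n in (100, 10_000, 1_000_000):
    successes = sum(random.random() < p for _ in range(n))
    print(n, successes / n)  # relative frequency drifts toward p as n grows
```

Of course, this only shows the many-shot behaviour; it does not by itself settle the one-shot dispute in the surrounding comments.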

Wrong—it matters a lot because you are using the wrong probabilities for the survivor (in practice this affects things like belief in the Doomsday argument).

You have things backwards. The “relative frequency in the infinite limit” can be defined that way (sort of, as the infinite limit is not actually doable) and is then equal to the pre-defined probability p for each shot if they are independent trials.

You can’t go the other way; we don’t have any infinite sequences to examine, so we can’t get p from them; we have to start out with it. It’s true that if we have a large but finite sequence, we can guess that p is “probably” close to our ratio of finite outcomes, but that’s just Bayesian updating given our prior distribution on likely values of p. Also, in the 1-shot case at hand, it is crucial that there is only the 1 shot.

But not post-selection of the kind that influences the probability (at least, according to my own calculations).

Which of my estimates is incorrect—the 99% estimate for what I call ‘pre-selecting someone who *happens* to survive,’ the 50% estimate for what I call ‘post-selecting someone from the pool of survivors,’ or both?

Correct.

*p*, strictly, isn’t *defined* by the relative frequency—the strong law of large numbers simply justifies interpreting it as a relative frequency. That’s a philosophical solution, though. It doesn’t help for practical cases like the one you mention next...

...for practical scenarios like this we can instead use the central limit theorem to say that *p*’s likely to be close to the relative frequency. I’d expect it to give the same results as Bayesian updating—it’s just that the rationale differs.

It certainly is in the sense that if ‘you’ die after 1 shot, ‘you’ might not live to take another!

FWIW, it’s not that hard to calculate binary digits of pi:

http://oldweb.cecm.sfu.ca/projects/pihex/index.html

I think I’ll go calculate the millionth, and get back to you.

EDIT: also turns out to be 0.
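For anyone curious, the PiHex project linked above used a digit-extraction formula of the Bailey–Borwein–Plouffe (BBP) family. A small Python sketch of the standard BBP approach for hexadecimal digits (each hex digit is four binary digits) looks roughly like this; it uses double-precision floats, so it is only reliable for modest digit positions:

```python
def pi_hex_digit(n):
    """Return the n-th hexadecimal digit of pi's fractional part (n >= 1),
    via the Bailey-Borwein-Plouffe digit-extraction formula."""
    def partial(j):
        # fractional part of sum over k of 16^(n-1-k) / (8k + j)
        s = 0.0
        for k in range(n):
            # modular exponentiation keeps the head terms small
            s = (s + pow(16, n - 1 - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n
        while True:  # rapidly vanishing tail terms
            term = 16.0 ** (n - 1 - k) / (8 * k + j)
            if term < 1e-17:
                break
            s = (s + term) % 1.0
            k += 1
        return s

    x = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    return int(x * 16)

# pi = 3.243F6A88... in hexadecimal
print([pi_hex_digit(i) for i in range(1, 9)])  # → [2, 4, 3, 15, 6, 10, 8, 8]
```

The trick that makes digit extraction feasible is the three-argument `pow`, which computes 16^(n−1−k) mod (8k+j) without ever forming the huge power.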

I don’t feel like reading through 166 comments, so sorry if this has already been posted.

I did get far enough to find that brianm posted this: “The doomsday assumption makes the assumptions that:

We are randomly selected from all the observers who will ever exist...”

Since we’re randomly selecting, let’s not look at individual people. Let’s look at it like taking marbles from a bag. One marble is red. 99 are blue. A guy flips a coin. If it comes up heads, he takes out the red marble. If it comes up tails, he takes out the blue marbles. You then take one of the remaining marbles out at random. Do I even need to say what the probability of getting a blue marble is?

You have to look at individuals in order to get odds for individuals. Your obvious probability of getting a blue marble is for the group of marbles.

But I think we can still look at individual randomly selected marbles.

Before the coin flip let’s write numbers on all the marbles, 1 to 100, without regard to color. And let’s say we roll a fair 100 sided die, and get the number 37.

After the flip and extraction of colored marbles, I look in the bag and find that marble 37 is in it. Given that marble 37 survived, what is the probability that it is blue?
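Under this marble-numbering framing, the answer follows directly from Bayes’ theorem. A tiny exact check (a sketch using rational arithmetic; “blue” means marble 37 happened to get a blue-marble number):

```python
from fractions import Fraction

half = Fraction(1, 2)
p_blue = Fraction(99, 100)  # marble 37 is one of the 99 blue marbles
p_red = Fraction(1, 100)    # marble 37 is the red marble

# Marble 37 survives if it is blue and the coin removes the red marble,
# or if it is red and the coin removes all the blue marbles.
p_survive_and_blue = p_blue * half
p_survive = p_blue * half + p_red * half

print(p_survive_and_blue / p_survive)  # 99/100
```

Conditioning on the survival of a marble fixed *before* the flip gives 99%, unlike drawing a random marble from the bag afterwards, which gives 50%.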

Edit again: OK, I get it. That was kind of dumb.

I read “^{2}⁄_{3} of humans will be in the final ^{2}⁄_{3} of humans” combined with the term “doomsday” as meaning that there would be ^{2}⁄_{3} of humanity around to actually witness/experience whatever ended humanity. Thus, we should expect to see whatever event does this. This obviously makes no sense. The actual meaning is simply that if you made a line of all the people who will ever live, we’re probably in the latter ^{2}⁄_{3} of it. Thus, there will likely only be so many more people. Thus, some “doomsday” type event will occur before too many more people have existed; it need not affect any particular number of those people, and it need not occur at any particular time.

It’s not necessary that ^{2}⁄_{3} of the people who ever live be alive simultaneously. It’s only necessary that the last humans not a) all die simultaneously and b) constitute *more* than ^{2}⁄_{3} of all humans ever. You can still have a last ^{2}⁄_{3} without it being one giant Armageddon that kills them in one go.

I agree in principle, but I’m curious as to how much one is stretching the term “doomsday.” If we never develop true immortality, 100% of all humans will die at some point, and we can be sure we’re part of that 100%. I don’t think “death” counts as a doomsday event, even if it kills everyone. Similarly, some special virus that kills people 5 minutes before they would otherwise die could kill 100% of the future population, but I wouldn’t really think of it as a doomsday virus. Doomsday need not kill everyone in one go, but I don’t think it can take centuries (unless it’s being limited by the speed of light) and still be properly called a doomsday event.

That said, I’m still curious as to what evidence supports any claim of such an event actually happening *without* narrowing down anything about how or when it will happen.

Unless I missed something, “doomsday” just means the extinction of the human species.

Doesn’t it refer to the *day* of the extinction? “Doomsmillennium” doesn’t sound nearly as good, I think.

Sure. But the human species can go extinct on one day without a *vast number* of humans dying on that day. Maybe it’s just one little old lady who took a damn long time to kick the bucket, and then finally she keels over and that’s “doomsday”.

That’s what Psychohistorian was saying shouldn’t be called doomsday, and I tend to agree.

Yes, and the doomsday argument is not in regards to whether or not doomsday will occur, but when.

The primary reason SIA is wrong is because it counts *you* as special only after seeing that *you* exist (i.e., after peeking at the data). My detailed explanation is here.

BTW, whoever is knocking down my karma, knock it off. I don’t downvote anything I disagree with, just ones I judge to be of low quality. By chasing me off you are degrading the Less Wrong site as well as hiding below threshold the comments of those arguing with me, who you presumably agree with. If you have something to say then say it; don’t downvote.