# Self-Indication Assumption—Still Doomed

I recently posted a discussion article on the Doomsday Argument (DA) and Strong Self-Sampling Assumption. See http://lesswrong.com/lw/9im/doomsday_argument_with_strong_selfsampling/

This new post is related to another part of the literature concerning the Doomsday Argument—the Self-Indication Assumption, or SIA. For those not familiar, the SIA says (roughly) that I would be more likely to exist if the world contains a large number of observers. So, when taking into account the evidence that I exist, this should shift my probability assessments towards models of the world with more observers.

Further, on first glance, it looks like the SIA shift can be arranged to exactly counteract the effect of the DA shift. Consider, for instance, these two hypotheses:

H1. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion observers.

H2. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion trillion observers.

Suppose I had assigned a prior probability ratio p_r = P(H1)/P(H2) before considering either SIA or the DA. Then when I apply the SIA, this ratio will shrink by a factor of a trillion, i.e. I’ve become much more confident in hypothesis H2. But then when I observe I’m roughly the 100 billionth human being, and apply the DA, the ratio expands back by exactly the same factor of a trillion, since this observation is much more likely under H1 than under H2. So my probability ratio returns to p_r. I should not make any predictions about “Doom Soon” unless I already believed them at the outset, for other reasons.
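The cancellation is just a pair of Bayes-factor multiplications, which can be checked directly (a minimal sketch using the observer totals from H1 and H2):

```python
# The SIA shift and the DA shift, with the totals from H1 and H2.
N1 = 200e9           # observers under H1 (200 billion)
N2 = 200e9 * 1e12    # observers under H2 (200 billion trillion)

prior_ratio = 1.0    # p_r = P(H1)/P(H2); any starting value works

# SIA: weight each hypothesis by its number of observers,
# shrinking the ratio by a factor of a trillion.
after_sia = prior_ratio * (N1 / N2)

# DA: P(my birth rank is ~100 billion | N total observers) = 1/N,
# so the likelihood ratio N2/N1 expands the ratio back again.
after_da = after_sia * (N2 / N1)

assert abs(after_da - prior_ratio) < 1e-9   # back where we started
```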

Now I won’t discuss here whether the SIA is justified or not; my main concern is whether it actually helps to counteract the Doomsday Argument. And it seems quite clear to me that it doesn’t. If we choose to apply the SIA at all, then it will instead overwhelmingly favour a hypothesis like H3 below over either H1 or H2:

H3. Across all of space time, there are infinitely many civilizations of observers, and infinitely many observers in total.

In short, by applying the SIA we wipe out from consideration all the finite-world models, and then only have to look at the infinite ones (e.g. models with an infinite universe, or with infinitely many universes). But now, consider that H3 has two sub-models:

H3.1. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking a suitable limit construction to define the mean) is 200 billion observers.

H3.2. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking the same limit construction) is 200 billion trillion observers.

Notice that while SIA is indifferent between these sub-cases (since both contain the same number of observers), it seems clear that DA still greatly favours H3.1 over H3.2. Whatever our prior ratio r’ = P(H3.1)/P(H3.2), DA raises that ratio by a trillion, and so the combination of SIA and DA also raises that ratio by a trillion. SIA doesn’t stop the shift.

Worse still, the conclusion of the DA has now become far *stronger*, since it seems that the only way for H3.1 to hold is if there is some form of “Universal Doom” scenario. Loosely, pretty much every one of those infinitely-many civilizations will have to terminate itself before managing to expand away from its home planet.

Looked at more carefully, there is some probability p_e of a civilization expanding which is consistent with H3.1, but it has to be unimaginably tiny. If the population ratio of an expanded civilization to a non-expanded one is R_e, then H3.1 requires that p_e < 1/R_e. But values of R_e > a trillion look right; indeed values of R_e > 10^24 (a trillion trillion) look plausible, which then forces p_e < 10^-12 and plausibly < 10^-24. The believer in the SIA has to be a really strong Doomer to get this to work!
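One way to see where the p_e < 1/R_e bound comes from is to write out the mean observers per civilization explicitly (a sketch; B and R_e are the post’s figures, the two trial values of p_e are mine):

```python
# Mean observers per civilization under expansion probability p_e:
# B if a civilization stays home, R_e * B if it expands.
B = 200e9      # 200 billion, the non-expanded population
R_e = 1e12     # population ratio of an expanded civilization

def mean_observers(p_e):
    return (1 - p_e) * B + p_e * R_e * B

# H3.1 pins the mean near B, which fails unless p_e * R_e << 1:
ratio_big = mean_observers(1e-11) / B    # p_e * R_e = 10: mean ~11x B
ratio_tiny = mean_observers(1e-13) / B   # p_e * R_e = 0.1: mean ~1.1x B
```

Even a p_e ten times larger than 1/R_e inflates the mean by an order of magnitude, which is why the bound is so unforgiving.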

By contrast the standard DA doesn’t have to be quite so doomerish. It can work with a rather higher probability p_e of expansion and avoiding doom, as long as the world is finite and the total number of actual civilizations is less than 1 / p_e. As an example, consider:

H4. There are 1000 civilizations of observers in the world, and each has a probability of 1 in 10000 of expanding beyond its home planet. Conditional on a civilization not expanding, its expected number of observers is 200 billion.

This hypothesis seems to be pretty consistent with our current observations (observing that we are the 100 billionth human being). It predicts that—with 90% probability—all observers will find themselves on the home planet of their civilization. Since this H4 prediction applies to all observers, we don’t actually have to worry about whether we are a “random” observer or not; the prediction still holds. The hypothesis also predicts that, while the prospect of expansion will appear just about attainable for a civilization, it won’t in fact happen.
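The 90% figure can be checked directly from H4’s numbers (a minimal sketch):

```python
# H4: 1000 civilizations, each expanding with probability 1/10000.
p_expand = 1e-4
n_civs = 1000

# Probability that no civilization at all expands, so that every
# observer is on its civilization's home planet:
p_no_expansion = (1 - p_expand) ** n_civs   # about exp(-0.1), i.e. ~0.905
```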

P.S. With a bit of re-scaling of the numbers, this post also works with observations or observer-moments, not just observers. See my previous post for more on this.


I don’t see why it would make more sense to speak of the probability that you are the nth observer given some theory than the probability that you are a rock or that you are a nonsentient AI. If you object that anthropic reasoning is about consciousness, you’d have to bite the bullet and say that doomsday argument-like reasoning is invalid for nonsentient computer programs.

Also, if you’re interested in anthropics, I recommend this paper by the Future of Humanity Institute’s Stuart Armstrong.

Thanks for the response.

Clearly, you can define whatever theory you like. But to be testable, the theory has to make some predictions about our observations. For this, it is not sufficient for a theory to describe in “third party” terms what events objectively happen in the world. There also has to be a “first party” (or “indexical”) component to answer the question: OK then, but what should we expect to observe?

As an analogy, you could treat the third party component as an objectively accurate map, and the first party component as a “You are here” marker placed on the map. The map is going to be pretty useless without the marker.

Similarly, if a theory of the world doesn’t have a “first party” component, then in general it is not possible to extract predictions about what we should be observing. This is especially true if the “third party” component describes a very large or infinite universe (or multiverse). Further, if the “first party” component predicts that we should be rocks or nonsentient AIs, well then it seems that we can falsify that component straight away. This would be similar to a “you are here” marker out in the ocean somewhere, when we know we are on dry land.

Finally, just to advise, my post was not a general defence of the Doomsday Argument, but rather a discussion on whether the SIA “move” works to defuse its conclusions. It’s quite noticeable in the DA literature that lots of people are really sure that there is something wrong with the DA, and they give all sorts of different diagnoses as to where the flaw lies, but then most of these objections either turn out not to work, or to be objections against Bayesian inference full stop. There has been—as far as I can tell—a general view that the SIA *does* work as a way of defusing the DA, and so it is a real objection to the DA, if you accept the SIA. The discussion then diverts into whether it is legitimate to accept the SIA.

But I disagree with that view. My analysis is effectively this: “Fine, I will grant you the SIA for the sake of argument. So you now have an infinite world. But then—oh dear—on close inspection, it looks like you haven’t defused the DA after all. In fact you just made it even stronger.”

This comment made me think about anthropics, but I never got back to you about my conclusions. What I decided is that the first party component looks really important, but it is irrelevant to decision making (except insofar as our utility functions value the existence of conscious beings). For example, if one world has 100 of me and the other has only one, I might want to precommit to some strategy based on just a Solomonoff prior or similar, with no anthropic considerations. If I do that, I wouldn’t want to change my mind based on the additional knowledge that I exist. That was rather jargonny, so see some introductions to UDT or an illustrative thought experiment.

I therefore disagree with your conclusion about the DA. If Omega tells me that he is going to create either world 3.1 or world 3.2, I would be surprised in some sense to find myself as one of the first beings in world 3.2, but finding out that I am one of the first 200 billion observers does not give me any reason to violate a rationally chosen precommitment. This seems odd, but the decision-theoretic considerations necessitated by the various thought experiments associated with UDT just seem to be a different sort of thing than first person experience. (I’m referring to the thought experiments specifically mentioned in the above links, in case that isn’t clear. Also, this isn’t in those links, but it might be illustrative to consider a nonsentient optimization process. Such a thing clearly does not have first-person experience and isn’t in our reference class, whatever that means, but it can still apply decision theory to achieve its goals.)

I’m not sure I get this. I think I’ve grasped the high-level point about UDT (that the epistemic probabilities strictly never update). So that if a UDT agent has a Solomonoff prior, they always use that prior to make decisions, regardless of evidence observed.

However, UDT agents have still got to bet in some cases, and they still observe evidence which can influence their bets. Suppose that each UDT agent is offered a betting slip which pays out 1 utile if the world is consistent with H3.1, and nothing otherwise. Suppose an agent has observed or remembers evidence E. How much in utiles should the agent be prepared to pay for that slip? If she pays x (< 1) utiles, then doesn’t that define a form of subjective probability P[H3.1|E] = x? And doesn’t that x vary with the evidence?

Let’s try to step it through. Suppose in the Solomonoff prior that H3.1 has a probability p31 and H3.2 has a probability p32. Suppose also that the probability of a world containing self-aware agents who have discovered UDT is pu and the probability of an infinite world with such agents is pui.

Suppose now that an agent is aware of its own existence, and has reasoned its way to UDT, but doesn’t yet know anything much else about the world; it certainly doesn’t yet know how many observers there have been in its own civilization. Let’s call this evidence E0.

Should the agent currently pay p31 for the betting slip, or pay something different as her value of P[H3.1 | E0]? If something different, then what? (A first guess is that it is p31/pu; an alternative guess is p31/pui if the agent is effectively applying SIA). Also, at this point, how much should the agent pay for a bet that pays off if the world is infinite: would that be pui/pu or close to 1?

Now suppose the agent learns that she is the 100 billionth observer in her civilization, creating evidence E1. How much should the agent now pay for the betting slip as the value of P[H3.1| E1]? How much should the agent pay for the infinite bet?

Finally, do the answers depend at all on the form of the utility function, and on whether correct bets by other UDT agents add to that utility function? From my understanding of Stuart Armstrong’s paper, the form of the utility function does matter in general, but does UDT make any difference here? (If the utility depends on other agents’ correct bets, then we need to be especially careful in the case of worlds with infinitely many agents, since we are summing over infinitely many wins or losses).

That analysis uses standard probability theory and decision theory, but that doesn’t work in this sort of situation.

Compare this to Psy-Kosh’s non-anthropic problem. Before you are told whether you are a decider, you can see, in the normal way, that it is better to follow the strategy of choosing “nay” rather than “yea” no matter what. If you condition on finding out that you are a decider the same way that you would condition on any piece of evidence, it appears that it would be better to choose “yea”, but we can see that someone following the strategy of updating like that will get less utility, in expectation, than someone who follows the strategy of choosing “nay” in response to any evidence. Therefore, decision theory as usual leads to suboptimal outcomes in situations like this.
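For concreteness, here is the expected-value arithmetic, using illustrative payoffs that I am assuming for this sketch (yea pays 1000 if tails and 100 if heads, any nay pays 700, and tails makes nine of the ten people deciders); see Psy-Kosh’s original post for the exact figures:

```python
# Expected-value check for Psy-Kosh's non-anthropic problem,
# with assumed (illustrative) payoffs and decider counts.

# Ex ante (before anyone learns their role):
ev_yea = 0.5 * 1000 + 0.5 * 100   # 550
ev_nay = 700                      # 700: "nay" is the better strategy

# Naive update on "I am a decider": nine decider slots under tails
# versus one under heads gives P(tails | I'm a decider) = 9/10,
# which makes "yea" look better even though it is worse ex ante.
ev_yea_updated = 0.9 * 1000 + 0.1 * 100   # 910 > 700
```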

In your situation, we can follow the same principles to see that standard decision theory can fail there too, depending on your utility function. From a UDT perspective, you are choosing between the strategies “If you have a low birth rank, be willing to pay almost a full util, if necessary, for the betting slip.” and “Only buy the slip if it is reasonably priced, i.e. costs < p31 utils, no matter what you observe.”, and you should weigh the resulting utility differences using the prior (p31, p32) for hypotheses H3.1 and H3.2 that you assign before you observe your birth rank. A utilitarian would know, before observing their actual birth rank, that they would obtain the highest utility by following the second strategy, so updating as you suggest would give a different answer than the one that looks best in advance, in the same way that updating in the usual way in Psy-Kosh’s problem produces a suboptimal outcome. Stuart goes into a bit more detail on this, discussing more types of utility functions and doing more of the math explicitly.

You seem to be arguing here that a UDT agent should NEVER update their betting probabilities (i.e. never change the amount paid for a betting slip) regardless of evidence. This seems plain wrong to me in general e.g. imagine I’m offered a lottery ticket for $1 a few minutes after the draw and I already saw the numbers drawn; if they match the numbers on the ticket, this is a great deal and I should take it. If UDT really says that I shouldn’t, then I’m sticking with CDT!

So I don’t think that is what you are arguing (i.e. don’t update betting probabilities in general); but you are arguing not to update the betting probability in this case. Correct?

Here is another example to consider, based on other types of evidence. Hypothesis B1 is that the universe is infinite, homogeneous, isotropic and has a background radiation of 1K. Hypothesis B3 is the same, except with background radiation of 3K. These have Solomonoff prior probabilities of pb1 and pb3 respectively. A UDT agent measures the background radiation and it comes out at 3K. Does a UDT agent still pay pb1 for the betting slip which pays out if B1 is correct? (Remember that there are infinitely many UDT agents in both B1 and B3 worlds, and infinitely many agents in each case will have just made a 3K measurement, because of measurement errors in the 1K world. The agent still doesn’t know for sure which sort of world she’s in.)
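To make the contrast concrete, here is the straightforward Bayesian update for this example; the measurement-error rate `eps` is an assumed illustrative number, not a figure from the thread:

```python
# Bayes update for the 1K vs 3K worlds after measuring 3K.
pb1, pb3 = 0.5, 0.5     # priors for B1 (1K) and B3 (3K)
eps = 1e-6              # assumed P(measure 3K | B1), via measurement error

p_meas_3k_given_b1 = eps
p_meas_3k_given_b3 = 1.0   # essentially certain in a 3K world

post_b1 = (pb1 * p_meas_3k_given_b1 /
           (pb1 * p_meas_3k_given_b1 + pb3 * p_meas_3k_given_b3))
# The fair price for the B1 betting slip collapses from 1/2 to ~eps.
```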

I presume you will want the agent to change betting probabilities in the background radiation case (because if she doesn’t, she’s about to lose against a CDT agent who does e.g. when betting on subsequent measurements)? And if it’s right to change in that case, why isn’t it right to change in the H31/H32 case?

Yes, I do think you should update in that case. The one-sentence version of UDT (UDT1.1 more precisely) is that you precommit to the best possible strategy and then follow that strategy. The strategy is allowed to recommend updating. In fact, the agent in my previous example explicitly considered the strategy “If you have a low birth rank, be willing to pay almost a full util, if necessary, for the betting slip.” which tells the agent to make different bets depending on what they observe. This strategy was not chosen because the agent thought that it would result in a lower expected utility, but in other situations, such as the example you just presented, the optimal strategy does entail taking different actions in response to evidence. In many thought experiments, UDT adds up to exactly what you would expect.

Actually, I’m not sure whether the strategy that I selected is optimal. It would be optimal if a single world of either the type mentioned in H3.1 or the type from H3.2 existed, but we can’t sum that over an infinite number of worlds since the sum diverges. We could switch to a bounded utility function, but that’s a different issue entirely.

I’m pretty sure that it isn’t optimal, and for a much simpler reason than having infinitely many worlds. The strategy of “Only buy the slip if it is reasonably priced, i.e. costs < p31 utils, no matter what you observe” leads to a Dutch Book. This takes a bit of explaining, so I’ll try to simplify.

First let’s suppose that the two hypotheses are the only candidates, and that in prior probability they have equal probability 1/2.

H3.1. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking a suitable limit construction to define the mean) is 200 billion observers.

H3.2. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking the same limit construction) is 200 billion trillion observers.

We’ll also suppose that both H3.1 and H3.2 imply the existence of self-aware observers who have reasoned their way to UDT (call such a being a “UDT agent”), and slightly simplify the evidence sets E0 and E1:

E0. A UDT agent is aware of its own existence, but doesn’t yet know anything much else about the world; it certainly doesn’t yet know how many observers there have been in its own civilization.

(If you’re reading Stuart Armstrong’s paper, this corresponds to the “ignorant rational baby stage”).

E1. A UDT agent discovers that it is among the first quadrillion (thousand trillion) observers of its civilization.

Again, we define P[X|Ei] as the utility that the agent will pay for a betting slip which pays off 1 utile in the event that hypothesis X is true. You are proposing the following (no updating):

P[H3.1 | E0] = P[H3.2 | E0] = 1/2, P[H3.1 | E1] = P[H3.2 | E1] = 1/2.

Now, what does the agent assign to P[E1 | E0]? Imagine that the agent is facing a bet as follows. “Omega is about to tell you how many observers there have been before you in your civilization. This betting slip will pay 1 utile if that number is less than 1 quadrillion.”

It seems clear that P[H3.1 & E1 | E0] is very close to P[H3.1 | E0]. If H3.1 is true, then essentially all observers will learn that their observer-rank is less than one quadrillion (forget about the tiny tail probability for now).

It also seems clear that P[H3.2 & E1 | E0] is very close to zero, since if H3.2 is true, only a minuscule fraction of observers will learn that their observer-rank is less than one quadrillion (again forget the tiny tail probability).

So to good approximation, we have betting probabilities P[E1 & H3.1 | E0] = P[E1 | E0] = P[H3.1 | E0] = 1/2 and P[~E1 | E0] = 1/2. Thus the agent should pay 1/2 for a betting slip which pays out 1 utile in the event E1 & H3.1. The agent should also pay 1/4 for a betting slip which pays out 1/2 utile in the event ~E1.

Now, suppose the agent learns E1. According to your proposal, the agent still has P[H3.1 | E1] = 1/2, so the agent should now be prepared to sell the betting slip for E1 & H3.1 for the same price that she paid for it, i.e. she sells it again for 1/2 a utile.

Oops: the agent is now guaranteed to lose 1/4 utile in all circumstances, regardless of whether she learns E1 or ~E1. If she learns ~E1, then she pays 3/4 for her two bets and wins the ~E1 bet, for a net loss of 1/4. If she learns E1, then she loses the bet on ~E1, and her bet on H3.1 & E1 is cancelled out since she has bought and sold the slip at the same price.

Incidentally, Stuart Armstrong discusses this issue in connection with the “Adam and Eve” problem, though he doesn’t give an explicit example of the Dutch book (I had to construct one). The resolution Stuart proposes is that an agent in the E0 (“ignorant rational baby”) stage should precommit not to sell the betting slip again if she learns E1 (or, strictly, not to sell it again unless the sale price is very close to 1 utile). Since we are discussing UDT agents, no such precommitment is needed; the agent will do whatever she should have precommitted to do.
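The guaranteed loss can be verified branch by branch (a minimal sketch of the book just described):

```python
# Verifying the guaranteed 1/4-utile loss, branch by branch.
# At E0 the agent buys two slips:
#   slip A: costs 1/2, pays 1 utile if E1 & H3.1
#   slip B: costs 1/4, pays 1/2 utile if ~E1
cost = 0.5 + 0.25

# Branch ~E1: slip B pays 1/2; slip A is worthless.
net_not_e1 = -cost + 0.5

# Branch E1: with no updating, the agent sells slip A back for the
# same 1/2 she paid (so that bet cancels); slip B pays nothing.
net_e1 = -cost + 0.5

assert net_not_e1 == net_e1 == -0.25   # a sure loss either way
```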

In practice this means that on learning E1, the agent follows the commitment and sets her betting probability for H3.1 very close to 1. This is, of course, a Doomsday shift.

You’re still, y’know, *updating*. Consider each of these bets from the updateless perspective, as strategies to be willing to accept such bets.

The first bet is to “pay 1/2 for a betting slip which pays out 1 utile in the event E1 & H3.1”. Adopting the strategy of accepting this kind of bet would result in +1/2 util for an infinite number of beings and −1/2 util for an infinite number of beings if H3.1 is true, and would result in −1/2 util for an infinite number of beings if H3.2 is true.

If we could aggregate the utilities here, we could just take an expectation by weighing them according to the prior (equally in this case) and accept the bet iff the result was positive. This would give consistent, un-Dutch-bookable results; since expectations sum, the sum of three bets with nonnegative expectations must itself have a nonnegative expectation. Unfortunately, we can’t do this since, unless you come up with some weird aggregation method other than total or average for the utility function (though my language above basically presumed totalling), the utility is a divergent series, and reordering divergent series changes their sums. There is no correct ordering of the people in this scenario, so there is no correct value of the expected utility.

Moving on to the second bet, “pay 1/4 for a betting slip which pays out 1/2 utile in the event ~E1”, we see that the strategy of accepting this gives +1/2 infinitely many times and −1/2 infinitely many times if H3.1 is true, and it gives −1/2 infinitely many times if H3.2 is true. Again, we can’t do the sums.

Finally, the third bet, rephrased as a component of a strategy, would be to sell the betting slip from the first bet back for 1/2 util again if E1 is observed. Presumably, this opportunity is not offered if ¬E1, so there is no need for the agent to decide what to do in this case. This gives −1/2 infinitely many times if H3.1 and +1/2 infinitely many times if H3.2. The value of (1/2)·∞ − (1/2)·∞ is, of course, indeterminate, so we can again neither recommend accepting nor declining this bet without a better treatment of infinities.

I’m being careful to define the expressions P[X|Ei] as the amount paid for a betting slip on X in an evidential state Ei. This is NOT the same as the agent’s credence in hypothesis X. I agree with you that credences don’t update in UDT (that’s sort of the point). However, I’m arguing that betting payments must change (“update” if you like) between the two evidential states, or else the agent will get Dutch booked.

You describe your strategy as having an infinite gain or loss in each case, so you don’t know whether it is correct (indeed you don’t know which strategy is correct, for the same reason). However, earlier up in the thread I already explained that this problem will arise if an agent’s utility depends on bets won or lost by other agents. If instead each agent has a private utility function (with no addition or subtraction for other agents’ bets; only for her own), then this “adding infinities” problem doesn’t arise. Under your proposed strategy (same betting payments in E0 and E1), each individual agent gets Dutch-booked and makes a guaranteed loss of 1/4 utile, so it can’t be the optimal strategy.

What is optimal then? In the private utility case (utility is a function only of the agent’s own bets), the optimal strategy looks to be to commit to SSA betting odds (which in the simplified example means an evens bet in the state E0, and a Doomsday betting shift in the state E1).

If the agent’s utility function is an average over all bets actually made in a world (average utilitarianism), then provided we take a sensible way of defining the average, such as taking the mean (betting gains minus betting losses) over N Hubble volumes and then taking the limit as N goes to infinity, the optimal strategy is again SSA betting odds.

If the agent’s utility function is a sum over all bets made in a world, then it is not well-defined, for the reasons you discuss: we can’t decide how to bet without a properly-defined utility function. One approach to making it well-defined may be to use non-standard arithmetic (or surreals), but I haven’t worked that through. Another approach is to sum bets only within N Hubble volumes of the agent (assume the agent doesn’t really care about far far away bets), and then only later take the limit as N tends to infinity. This leads to SIA betting odds.

Until recently, I thought that SIA odds meant betting heavily on H3.2 in the state E0, and then reverting to an evens bet in the state E1 (so it counters the Doomsday argument). However, the more recent analysis of SIA indicates that there is still a Doomsday shift because of “great filter” arguments (a variant of Fermi’s paradox), so the betting odds in state E1 should still be weighted towards H3.1.

Basically it doesn’t look good, since every combination of utility function or SSA with or without SIA is now creating a Doomsday shift. The only remaining let out I’ve been considering is a specially-constructed reference class (as used in SSA), but it looks like that won’t work either: in Armstrong’s analysis, we don’t get to define the reference class arbitrarily, since it consists of all linked decisions. (In the UDT case, all decisions that are made by any agents anywhere applying UDT).

Here’s something I’ve thought about as a refinement of SIA:

A universe’s prior probability is proportional to its original, non-anthropic probability, multiplied by its efficiency at converting computation-time to observer-time. You get this by imagining running all universes in parallel, giving them computational resources proportional to their (non-anthropic) prior probability (as in Levin search). You consider yourself to be a random observer simulated in one of these programs. This solves the problem of infinite universes (since efficiency is bounded) while still retaining the advantages of SIA.

One problem is that our universe appears to be very inefficient at producing consciousness. However this could be compensated for if the universe’s prior probability is high enough. Also, I think this system favors the Copenhagen interpretation over MWI, because MWI is extremely inefficient.

Another thought regarding the anthropic principle: you can solve all anthropic questions by just using UDT and maximizing expected utility. That is, you answer the question: “A priori, before I know the laws of the universe, is it better for someone in my situation to do X?”. Unfortunately this only works if your utility function knows how to deal with infinite universes, and it leaves lots of questions (such as how to weight many different observers, or whether simulations have moral value) up to the utility function.

On the other hand if you have a good anthropic theory, then you can derive a utility function as E[personal utility | anthropic theory]; that is, what’s the utility if you don’t know who you are yet? In this case you judge an anthropic theory by P(anthropic theory | random observer experience is yours), using Bayes’s rule, and extrapolate your personal utility function to other people using it.

It seems the computation you describe will run for infinite time, and will simulate infinitely many observers, but only finitely many in any given time period. Correct? If so, you still have my SIA problem.

If I am a “random” observer, then for any finite number N, I should expect to be simulated later than N steps into the whole computation. (Well, technically there is no way I could be sampled uniformly at random from a countably-infinite sequence of observers, except through some sort of limit construction; but let’s ignore this, and just suppose that for some “really big N” I have a “really small” probability of being simulated before N).

Now, imagine listing all the “small” finite universes which can be simulated in—say—fewer than 10^1000 steps from start to finish. There are at most 2^(10^1000) of those, and their simulations will all finish. So there must be some Nth computational step which happens after the very last small universe simulation has finished, and by the above argument I should expect myself to be simulated after step N. So there is still overwhelming prior probability that I find myself in a “big” (or infinite) universe. The SIA is still wiping out the small universes a priori.

It was a nice try though; I had to think about this one a bit...

Ok, we can posit that if any of the universes ends, we just re-start it from the beginning. Now if there is 1 universe that runs for 1000 years with 1000 observers, and 1 universe that runs forever with 1000 observers, and their laws of physics were equiprobable, then their SIA probabilities are also equiprobable. The observers in the finite universe will be duplicated infinite times, but I don’t think this is a problem (see Nick Bostrom’s duplication paper). Also, some infinite universes might have an infinite number of finite simulations inside them, so it’s somewhat likely for an observer to be in a finite universe simulated by an infinite universe.

I think you can deal with the infiniteness by noting that, for any sequence of observations, that sequence will be observed by some proportion of the observers in the multiverse. So you can still anticipate the future by comparing P(past observations + future observation) among the possible future observations.

Sorry, I don’t quite follow this…

Your example considers an infinite universe with 1000 observers (and then presumably an infinite amount of dead-space). You say this counts for the same weighted probability as a finite universe with 1000 observers (here assuming the universes had the same Levin probability originally).

OK, I’m with that so far: but then why doesn’t an infinite universe with 10^1000 observers count for 10^997 more weight than a finite universe with 1000 = 10^3 observers?

Finally, why doesn’t an infinite universe with infinitely many observers count for infinitely more weight than a finite universe with 1000 observers? I’m just trying to understand your metric here.

Alternatively, when you discuss re-running the finite 1000-observer universe from the start (so the 1000 observers are simulated over and over again), then is that supposed to increase the weight assigned to the finite universe? Perhaps you think that it should, but if so, why? Why should a finite universe which stops completely receive greater weight than an otherwise identical universe whose simulation just continues forever past the stop point with loads of dead space?

In the original example I was assuming the 1000 observers were immortal so they contribute more observer-seconds. I think this is a better presentation:

We have:

1. A finite universe. 1000 people are born at the beginning. The universe is destroyed and restarted after 1000 years. After it restarts, another 1000 people are born, etc. etc.

2. An infinite universe. 1000 people are born at the beginning. Every 1000 years, everyone dies and 1000 more people are born.

If both have equal prior probability and efficiency, we should assign them equal weight. This is even though the second universe has infinitely more observers than (a single copy of) the finite universe.

Yes.

Because there are more total observers. If the universe is restarted there are 1000 observers per run and infinite runs, as opposed to 1000 observers total.

For one, only the first can be simulated by a machine in a finite universe. Also, in a universe with infinite time but finite memory, only the first can be simulated infinitely many times.

Also, the universe with the dead space might contain simulations of finite universes (after all, with infinite time and quantum mechanics everything happens). Then almost all of the observers in the infinite universe are simulated inside a finite universe, not in the infinite universe proper.

Another argument: if the 2 universes (infinite with infinite observers (perhaps by restarting), infinite with finite observers) are run in parallel, almost all observers will be in the first universe. It seems like it shouldn’t make a difference if the universes are being run in parallel or if just one of them was chosen randomly to be run.
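As a toy illustration of this parallel-run counting (my own sketch, with hypothetical population numbers, not code from the thread):

```python
# Toy sketch of the parallel-run argument.
# Universe A restarts every run, producing 1000 fresh observers per run;
# universe B runs forever with the same 1000 observers throughout.
def fraction_in_restarting_universe(num_runs):
    observers_a = 1000 * num_runs  # a new batch of observers each restart
    observers_b = 1000             # one fixed batch, never replaced
    return observers_a / (observers_a + observers_b)

# As the number of completed runs grows, almost all observers are in universe A.
for runs in (1, 10, 1000, 10**6):
    print(runs, fraction_in_restarting_universe(runs))
```

The fraction tends to 1, which is the sense in which "almost all observers will be in the first universe" after enough parallel running time.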

OK, I now understand how you’re defining your probability measure (and version of the SIA). It seems odd to me to weight a universe that stops higher than an identical one that runs forever (with blank space after the stop point). But it’s your measure, so let’s go with that. Basically, it seems you’re defining a prior by:

P(I’m an observer in universe U) = P_Levin(U) x Fraction of time simulating U which is spent simulating an observer

and then renormalizing. Let’s call the “fraction” the computational observer density of universe U.
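A minimal sketch of how that renormalization might look, with hypothetical Levin probabilities and observer densities (my own illustration of the measure as I understand it):

```python
# Sketch of the proposed measure: weight each universe U by
# P_Levin(U) * (fraction of simulation time spent simulating observers),
# then renormalize. All numbers below are hypothetical.
universes = {
    "finite_restarting": {"levin": 0.5, "observer_density": 0.8},
    "infinite_with_dead_space": {"levin": 0.5, "observer_density": 1e-6},
}

weights = {name: u["levin"] * u["observer_density"] for name, u in universes.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

# The observer-dense restarting universe takes nearly all the weight,
# even though the two had equal Levin probability.
print(posterior)
```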

One thought here is that your measure has a very similar impact to Neal's FNC that I discussed elsewhere in the thread. It will give a high weighting towards models of the universe with a high density of intelligent civilizations, such that they will appear in a high fraction of star systems, but then die out before expanding and reaching our own solar system. So to that extent it is still "doomerish". Or, worse, it gives even higher weighting towards us not taking our observations seriously at all, so that contrary to appearances, our universe really is packed full with a very high density of observers (from expanded civilizations) and we're in a simulation or experiment that fools us into thinking the universe is pretty much empty. (If we're in a simulation within U, then we're in a sub-simulation when simulating U.)

On the other hand, your measure is based on computational density, rather than physical density, so it might not have quite this effect. In particular, suppose the simulation of U runs at very different rates depending on the complexity of what it is simulating. It whizzes through millions of years of empty space in a few steps, takes a bit longer when simulating stars and planetary systems, slows down considerably when it has to simulate the evolution of life on a planet, and utterly bogs down when it gets to conscious observers on the planet (since at that point it needs a massively detailed step-by-step simulation of all their neuron firings to work out what is going to happen next).

That, I think, avoids the distortion towards very high physical density of observers. Even if observers are—as they appear to be—rare in our universe, they could still be taking up most of the computing time. But in that case, the measure is also insensitive to the absolute number of observers simulated, so doesn’t give much of an SIA weighting towards large numbers of observers in the first place. We could imagine for instance that the simulation of U runs through the “doom” of the human race (and other complex life) then since there is nothing complex left to slow it down any more it speeds up, whizzes through to the end of the universe and (under your measure) starts again. It will still spend most of its computational steps simulating observers.

I found this article, which appears to formalize something like what I want to say about the Doomsday problem. In brief, your knowledge of your birth-rank is an arbitrary thing to use in deciding your beliefs about the likelihood of Doomsday. If we pick other ranks, like height-rank (living in an isolated village in the Himalayas), then adding new information seems to justify weird changes in belief about the total human population. In the strangest case, discovery that you are one of the last humans to die (by learning of something like an impending asteroid impact) seems to justify a change in belief about the accuracy of historical records about the number of humans who ever lived. Yet that change in belief seems utterly unjustified by the only piece of new information (incoming asteroid). To avoid these problems, Professor Neal appears to endorse rejection of indexical reasoning as inherently non-informative—he calls this "Full Non-indexical Conditioning" (FNC).

Again, any comments on mathematical errors and alternate perspectives are greatly appreciated. I'm particularly concerned about relying on this paper because of its very cute (NOT a compliment) avoidance of Newcomb's problem.

Tim, I had a look at the article on full non-indexical conditioning (FNC).

It seems that FNC still can’t cope with very large or infinite universes (or multiverses), ones which make it certain, or very nearly so, that there will be someone, somewhere making exactly our observations and having exactly our evidence and memories. Each such big world assigns equal probability (1) to the non-indexical event that someone has our evidence, and so it is impossible to test between them empirically under FNC.

See one of my earlier posts where I discuss an infinite universe model which has 1K background radiation, but a tiny minority of observers who conclude that it has 3K temperature (as their observations are misleading). FNC gives us no reason to believe that our universe is not like that i.e. no reason to favour the alternative model where background radiation actually is 3K. There seems something badly wrong with this as a reasoning principle.

To his credit, Neal is quite open about this, and proposes in effect to ignore such “big” worlds. In Bayesian terms, he will have to assign them prior probability zero, because otherwise FNC itself will drive their posterior probability up to almost one. However, in my view it is unreasonable to assign a consistent model universe (or class of models) prior probability zero, just because if you don’t then that messes up your methodology!

Another criticism is that, if I understand Neal's article correctly, FNC creates a strong form of Doomsday argument anyway, though on somewhat different grounds. The reason is that if you restrict it to model universes of a finite size (say the size of the observable universe) then FNC favours universes with a high density of civilizations of observers (ones where practically every star system gives rise to life and an intelligent civilisation). But then, to resolve Fermi's paradox, each such civilisation must have an extremely small probability of ever expanding out of its home star system: it looks like we are forced to accept an expansion probability p_e < 10^-12 (or even < 10^-24). That's hard to understand except through some sort of "Universal Doom" law, whereby technological civilisations terminate themselves before using their technology to expand.
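The rough arithmetic behind a bound like p_e < 10^-12 can be sketched as follows (my own back-of-envelope, with an assumed civilization count, not a calculation from Neal's paper):

```python
import math

# If N civilizations each independently expand with probability p_e, the chance
# that none has expanded is (1 - p_e)^N, roughly exp(-N * p_e) for small p_e.
# Requiring that chance to stay above some confidence level c gives
# p_e <= -ln(c) / N.
def max_expansion_prob(num_civilizations, confidence=0.5):
    return -math.log(confidence) / num_civilizations

# With ~10^12 civilizations (a hypothetical figure for a densely populated
# observable universe), p_e must be below roughly 7e-13.
print(max_expansion_prob(1e12))
```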

So, like SIA, the attempt to avoid the DA seems to end up strengthening it.

I agree with your reading, but I do have a terminological nitpick.

I think that the thing you are calling the FNC-Doomsday argument is just a restatement of the Great Filter argument that is inherent in the Fermi paradox analysis. But I don't think that the Great Filter argument necessarily implies imminent doomsday. For all we know, the Filter is behind us (i.e. life from non-life really is that unlikely). As evidence from science shows that more and more of our precursors are relatively likely, the probability that the Great Filter is in front of us increases. But I don't think this analysis gives us much insight into when the Filter will happen.

By contrast, I think clearer communication results from limiting the label "Doomsday argument" to the class of ideas using anthropic reasoning to predict imminent cataclysm. I agree that most anthropic reasoning appears to suggest imminent doomsday—although I still agree with Neal that reference classes are moral constructs, and it is strange for different moral concepts to have an effect on empirical reasoning.

My pedantry about labels feels a little like disputing definitions, but I really am just trying to be clearer about which arguments are similar (or dissimilar) to other arguments. And I think the Great Filter has fewer implications than the Doomsday argument, which makes it profitable to treat them separately.

I understand your point about infinite universes, but I think the assertion is justified by empirical evidence. My understanding of the science is that there just doesn't seem to be enough stuff out there for an infinite universe to be a reasonable hypothesis.

I’ve been re-reading this thread, and think I’ve found an even bigger problem with FNC, even if it is just restricted to “small” finite universes.

As discussed above, in such universe models, FNC causes us to weight our probability estimates towards models with a high density of civilizations, e.g. ones where practically every star system that can gives rise to life and an intelligent civilization. And then, if we take our observations seriously, none of those civilizations can ever have expanded into our own solar system, so they must have a very low expansion probability. (Incidentally, even if they were blocked somehow from entering our own solar system, because of a quarantine policy, Prime Directive, Crystal Spheres etc., and the block was somehow enforced universally and over geological timeframes, we still ought to see some evidence of them occupying other nearby star systems: radio emissions, large-scale engineering projects, Dyson spheres etc.)

But the worse problem is that FNC creates an even stronger weight towards not taking our observations seriously in the first place. It seems there is even higher probability under FNC that some civilizations have expanded, have occupied the whole universe, and have populated it very densely with observers. We are then part of some subset of observers who have been deliberately "fooled" into thinking that the universe is largely unoccupied, and that we're in a primitive civilization. Probably we're in some form of simulation or experiment.

Unfortunately, I think this is all pretty devastating to FNC:

If infinite universes are allowed at all, then under FNC they will receive weighted probability very close to one.

If infinite universes are ruled out (prior probability zero), then very large finite universes will now receive weighted probability very close to one.

If very large finite universes are also ruled out (also probability zero), then skeptical hypotheses where the universe is not as it seems, and we are instead in some sort of simulation or experiment, receive weighted probability close to one.

If skeptical hypotheses are also ruled out (let’s give them probability zero as well) then we are still weighted towards universes with a high density of civilizations, and a “Great Filter” which lies in front of us. This still gives pretty doomerish conclusions.

So we have to do a lot of ad hoc tweaking to get FNC to predict anything sensible at all. And then what it does predict doesn’t sound very optimistic anyway.

No point in arguing about definitions, though Neal also describes his argument as having a "doomsday" aspect (see page 41). On infinite universes, I have no problem with an a posteriori conclusion that the universe is (likely to be) finite; my problem is an a priori assumption that the world must be finite. Remember the prior probability of infinite worlds has to be zero to avoid FNC giving silly conclusions.

On a more technical point, I think I have spotted a difficulty with Neal's distribution plots on page 39, and am not yet sure how much of a problem this is for his analysis.

Neal considers a parameter p where pM(w) is the probability (density) that someone with his exact memories appears in a particular region of spacetime w. This should be really tiny of course: if memories have 10^11 bits, as Neal suggests, then p would be something like 2^-(10^11). Neal then considers a parameter f where fA(v) is the probability of a species arising at a particular region v of spacetime and preventing him from existing with his memories; this is roughly the probability of that species expanding (what I called the probability p_e), multiplied by the proportion of the universe that the species expands into.

However, he then wants to normalise the plots so that (looking at the means in prior probability) we have log p + log f = 0, which initially seems impossible: it would force the prior mean of f up to about 2^(10^11), whereas since f is a probability times a proportion, we must have f ≤ 1.

Neal says we can compensate by rescaling the factors A(w) or M(w), e.g. we could perhaps decide to make M(w) something like 2^-(10^11) so that p is ~1. However, this then requires his V parameter, obtained by integrating M(w)A(w), to be similarly like 2^-(10^11), i.e. we must have V very close to zero. So how can he consider cases with V = 0.1, V = 1 or V = 10? Something is wrong with the scaling somewhere.
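To see the scale of the mismatch, it helps to work in log space, since p ~ 2^-(10^11) is far below anything representable directly (my own sketch of the numbers quoted above):

```python
# Working in log base 2: p ~ 2^-(10^11) from ~10^11 bits of memory.
log2_p = -1e11             # log2 of p
log2_f_required = -log2_p  # what the normalization log p + log f = 0 demands of f
log2_f_max = 0.0           # f is a probability times a proportion, so f <= 1

# The required f exceeds its maximum by ~10^11 binary orders of magnitude, so
# the normalization cannot hold as stated; and rescaling M(w) to fix p drags
# the V integral down by the same factor.
print(log2_f_required > log2_f_max)  # True
```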

I am simply not qualified to say anything insightful about the math point you make. The presumptuous philosopher doesn’t bother me, but that may just be scope insensitivity talking.

On the Great Filter, I agree that believing in the Filter while science discovers things like life being created easily (including complex life), Sol-like suns being common, and Earth-like planets occurring often around Sol-like suns makes it seem likely that the Filter is in front of us. But it doesn't say when. The Great Filter is consistent with The Crystal Spheres, which the anthropic Doomsday argument just isn't.

Tim—thanks. I'll check out the article.

I’ve also had a look at the Armstrong paper recommended by “endoself” above. This is actually rather interesting, since it relates SIA and SSA to decision theory.

Broadly, Armstrong says that if you are trying to maximize total expected utility then it makes sense to apply SIA + SSA together (though Armstrong just describes this combination as "SIA"). Whereas if you are trying to maximize average utility per person, or, selfishly, your own individual utility, then it makes sense to apply SSA without SIA. This supports both the "halfer" and "thirder" solutions to the Sleeping Beauty problem, since both are justified by different utility functions. Very elegant.

However, this also seems to tie in with my remarks above, since total utility maximizers come unstuck in infinite universes (or multiverses). Total utility will be infinite whatever they do, and the only sensible thing to do is to maximize personal utility or average utility. Further, if a decider is trying to maximize total expected utility, then they really force themselves to decide that the universe is infinite, since if they guess right, then the positive payoff from that correct guess will be realized infinitely many times, whereas if they guess wrong then the negative payoff from that incorrect guess will be realized only finitely many times. So I think this suggests—in a rather different way—that SIA doesn't work as a way out of the DA. Also, that it's rather silly (since it creates an overwhelming bias towards guessing at infinite universes or multiverses).

One other thing I don't get is Armstrong's—rather odd—claim that if you are an average utility (or selfish utility) maximizer, then you shouldn't care anyway about "Doom Soon". So in practice there is no decision-theoretic shift in your behaviour brought about by the DA. This strikes me as just plain wrong—an average utilitarian would still be worried about the big "disutility" of people who live through (or close to) the Doom. A selfish utility maximizer would worry about the chance of seeing Doom himself.

Great post.

I think that a future unFAI could take the DA seriously and reduce the number of observers in his civilization in order to get out of the DA's logic.

He could:

a) exterminate people

b) unite people in one entity

c) prevent births and make people immortal

d) don't allow people to know their real birth rank by putting them in simulations.

Will it really work? If an external universal superdoom exists (vacuum instability), it will not work. If this action is the superdoom itself, it works, but then there is no need to do it. It is something like a Newcomb problem.

It seems this would work for versions of DA based on “observers” since the unFAI drastically reduces the number of observers in the future. I’m not so sure about observer moments though, since presumably the unFAI (or its immortal human colleagues) will count for rather a lot of those…

A version of DA based on observer moments gives sharper predictions (higher probability of observing right now; more imminent doom) and is simpler to apply than a version based on observers. See my other post on that. http://lesswrong.com/lw/9im/doomsday_argument_with_strong_selfsampling/

Yes, it is a good point about unFAI observer moments. He could make himself forget about the DA by adding an external program which bans all thoughts about the DA.