The Doomsday argument in anthropic decision theory

In Anthropic Decision Theory (ADT), behaviours that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences (and from certain specific selfish preferences).

However, SSA implies the doomsday argument, and, to date, I hadn’t found a good way to express the doomsday argument within ADT.

This post fills that gap, by showing that there is a natural doomsday-like behaviour for average utilitarian agents within ADT.


Anthropic behaviour

The comparable phrasings of the two doomsday arguments (probability and decision-based) are:

  • In the standard doomsday argument, the probability of extinction is increased for an agent that uses SSA probability versus one that doesn’t.

  • In the ADT doomsday argument, an average utilitarian behaves as if it were a total utilitarian with a higher revealed probability of doom.

Thus in both cases, the doomsday agent believes/behaves as if it were a non-doomsday agent with a higher probability of doom.

Revealed probability of events

What are these revealed probabilities?

Well, suppose that $X$ and $Y$ are two events that may happen. The agent has a choice between betting on one or the other: if they bet on the first, they get a reward of $x$ if $X$ happens; if they bet on the second, they get a reward of $y$ if $Y$ happens.

If an agent is an expected utility maximiser and chooses $X$ over $Y$, this implies that $x P(X) \geq y P(Y)$, where $P(X)$ and $P(Y)$ are the probabilities the agent assigns to $X$ and $Y$.

Thus, observing the behaviour of the agent allows one to deduce their probability estimates for $X$ and $Y$.
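As a minimal sketch of this inference (the function names and the numbers are illustrative, not from the post), one observed choice between two bets reveals a bound on the ratio of the agent's probabilities:

```python
# Illustrative sketch: an expected-utility maximiser prefers "reward x if X"
# to "reward y if Y" exactly when x * P(X) >= y * P(Y), so a single observed
# choice reveals a bound on the ratio P(X) / P(Y).

def prefers_first_bet(x: float, p_X: float, y: float, p_Y: float) -> bool:
    """True if betting on X has at least the expected utility of betting on Y."""
    return x * p_X >= y * p_Y

def revealed_ratio_bound(x: float, y: float) -> float:
    """If the agent picks X over Y, then P(X) / P(Y) >= y / x."""
    return y / x

# An agent with P(X) = 0.4 and P(Y) = 0.1 picks a reward of 1 on X
# over a reward of 2 on Y, revealing that P(X) / P(Y) >= 2.
assert prefers_first_bet(1.0, 0.4, 2.0, 0.1)
assert revealed_ratio_bound(1.0, 2.0) == 2.0
```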

Revealed anthropic and non-anthropic probabilities

To simplify comparisons, assume that $Y$ is an event that will happen with probability $1/2$; if the agent bets on $Y$, it will get a reward of $y$. $Y$'s only purpose is to serve as a point of comparison with other events.

Then $X$ is an event that will happen with an unknown probability; if bet on, the agent will get a reward of $x$. In comparison, $Z$ is an event that will happen with certainty if and only if humanity survives for a certain amount of time. If the agent bets on $Z$ and it happens, it will then give a reward of $z$.

The agent needs to bet on one of $X$, $Y$, and $Z$. Suppose that the agent is an average utilitarian, and that their actual estimated probability for human survival is $p$; thus $P(Z) = p$. If humanity survives, the total human population will be $\Omega$; if it doesn't, then it will be limited to $\omega$.

Then the following table gives the three possible bets and the expected utility the average utilitarian derives from each. Since the average utilitarian divides their utility by total population, this expected utility is a function of the probabilities of the different population sizes.

By varying $x$ and $z$, we can establish what probabilities the agent actually gives to each event, by comparing with situations where it bets on $Y$. If we did that, but assumed that the agent was a total utilitarian rather than an average one, we would get the apparent revealed probabilities given in the third column:

| Bet | Utility | App. rev. prob. if tot. |
|-----|---------|-------------------------|
| $X$ | $x P(X) \left( p/\Omega + (1-p)/\omega \right)$ | $P(X)$ |
| $Y$ | $(y/2) \left( p/\Omega + (1-p)/\omega \right)$ | $1/2$ |
| $Z$ | $p z / \Omega$ | $P' = \dfrac{p\omega}{p\omega + (1-p)\Omega}$ |
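These expected utilities can be computed directly. Here is a minimal sketch using the post's symbols ($p$, $\omega$, $\Omega$ and the rewards $x$, $y$, $z$); the numerical values are illustrative:

```python
# Sketch of the average utilitarian's expected utilities for the three bets;
# symbols follow the post, the numbers below are illustrative.

def avg_eu_X(p_X, x, p, omega, Omega):
    # X is independent of survival, so the reward is divided by the
    # population in each world: Omega with probability p, omega otherwise.
    return p_X * x * (p / Omega + (1 - p) / omega)

def avg_eu_Y(y, p, omega, Omega):
    # Y happens with probability 1/2, independent of survival.
    return avg_eu_X(0.5, y, p, omega, Omega)

def avg_eu_Z(z, p, Omega):
    # Z pays off only if humanity survives, when the population is Omega.
    return p * z / Omega

# With a fixed population (Omega == omega), average and total utilitarians
# rank the bets identically: comparing Z to Y reveals the true probability p.
p, omega, Omega = 0.8, 100, 100
ratio = avg_eu_Z(1.0, p, Omega) / avg_eu_Y(1.0, p, omega, Omega)
assert abs(ratio - p / 0.5) < 1e-12
```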
Note that if $\Omega = \omega$ (if the population is fixed, so that the average utilitarian behaves the same as a total utilitarian), then $P'$ simplifies to $p$, the actual probability of survival.

It’s also not hard to see that $P'$ strictly decreases as $\Omega$ increases, so it will always be less than $p$ whenever $\Omega > \omega$.
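Both facts are easy to check numerically. A small sketch with illustrative numbers, using the formula $P' = p\omega / (p\omega + (1-p)\Omega)$:

```python
def apparent_survival_prob(p, omega, Omega):
    # P' = p*omega / (p*omega + (1-p)*Omega): the survival probability that
    # would make a total utilitarian bet the way the average utilitarian does.
    return p * omega / (p * omega + (1 - p) * Omega)

p, omega = 0.8, 100

# Fixed population (Omega == omega): P' equals the true probability p.
assert abs(apparent_survival_prob(p, omega, omega) - p) < 1e-12

# P' strictly decreases as Omega grows past omega, so P' < p.
values = [apparent_survival_prob(p, omega, Omega) for Omega in (200, 1000, 10**6)]
assert all(v < p for v in values)
assert values == sorted(values, reverse=True)
```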

Thus if we interpret the actions of an average utilitarian as if they were a total utilitarian, then for rewards conditional on human survival (and only for those rewards, not for others like betting on $X$ or $Y$), their actions will seem to imply that they give a lower probability of human survival than they actually do.

Conclusion

The standard doomsday argument argues that we are more likely to be in the first half of the list of all humans that will ever live than in the first tenth, which is still more likely than us being in the first hundredth, and so on. The argument is also vulnerable to changes of reference class; it gives different implications if we consider ‘the list of all humans’, ‘the list of all mammals’, or ‘the list of all people with my name’. The doomsday argument has no effect on probabilities not connected with human survival.

All these effects reproduce in this new framework. Being in the first fraction $f$ of humans means that the total human population will be at least $n/f$, where $n$ is our birth rank, so the total population grows as $f$ shrinks, and $P'$, the apparent revealed probability of survival, shrinks as well. Similarly, average utilitarianism gives different answers depending on what reference class is used to define its population. And the apparent revealed probabilities that are not connected with human survival are unchanged from a total utilitarian.
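This gradient can be sketched numerically. The birth rank, survival probability, and fractions below are all illustrative assumptions, not figures from the post:

```python
def apparent_survival_prob(p, omega, Omega):
    # P' = p*omega / (p*omega + (1-p)*Omega), the apparent revealed
    # survival probability of the average utilitarian.
    return p * omega / (p * omega + (1 - p) * Omega)

n = 60_000_000_000   # illustrative birth rank (order of humans born so far)
p, omega = 0.5, n    # if doom comes soon, the population stays near n

# Being in the first fraction f of all humans implies a total population of
# at least n / f: smaller f means a larger Omega, hence a smaller P'.
apparents = [apparent_survival_prob(p, omega, n / f) for f in (0.5, 0.1, 0.01)]
assert apparents == sorted(apparents, reverse=True)
```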

Thus this seems like a very close replication of the doomsday argument in ADT, in terms of behaviour and apparent revealed probabilities. But note that it is not a genuine doomsday argument. It’s all due to the quirky nature of average utilitarianism; the agent doesn’t really believe that the probability of survival goes down, they just behave in a way that would make us infer that they believed that, if we saw them as being a total utilitarian. So there is no actual increased risk.
