I would rather see the doomsday argument as a version of Sleeping Beauty.

Different people appear to have different opinions on this kind of argument. To me, the solution appears rather obvious (in retrospect):

If you ask a decision theory for advice on decisions, then there is nothing paradoxical at all, and the answer is just an obvious computation. This tells you that “probability” is the wrong concept in such situations; rather, you should ask only about “expected utility”, which is much more stable under all kinds of anthropic arguments.
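To make the point concrete, here is a minimal sketch of that "obvious computation" for Sleeping Beauty, under my own illustrative assumptions: a fair coin, one awakening on heads and two on tails, and a bet offered at each awakening that pays `win` if the coin landed tails and costs `loss` if heads. The expected utility per experiment is unambiguous, with no need to settle what the "probability" of tails is upon awakening.

```python
def expected_utility(win=1.0, loss=1.0, p_tails=0.5):
    """Expected payoff per experiment of always betting 'tails'.

    Heads -> 1 awakening, the bet loses once.
    Tails -> 2 awakenings, the bet wins twice.
    """
    eu_heads = (1 - p_tails) * 1 * (-loss)  # one awakening, one loss
    eu_tails = p_tails * 2 * win            # two awakenings, two wins
    return eu_heads + eu_tails

# At even odds, betting tails is favourable even though the coin is fair:
print(expected_utility())        # 0.5
# Odds of 2:1 against tails make the bet exactly fair:
print(expected_utility(loss=2))  # 0.0
```

Both the "halfer" and the "thirder" must agree on these numbers, since they follow directly from the setup; the dispute only reappears if one insists on extracting a credence from them.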
