I’m sorry I’ve given the impression of not engaging with what was actually said. Let me try to say what I meant more clearly:
The Shifting Mortality Rates section asks: “If background mortality drops, how does that change optimal timing?” It then runs the math for a scenario where mortality plummets all the way to 1/1400 upon entering Phase 2, and shows the pause durations get somewhat longer.
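For intuition, here is the survival arithmetic driving that result. A toy sketch: the 1/1400 figure is the paper’s, but the baseline rate of 1/70 is my own illustrative placeholder, not Bostrom’s number.

```python
# Probability of surviving an n-year pause at a constant annual mortality
# rate m is (1 - m)**n.
def survival_prob(annual_mortality: float, years: int) -> float:
    return (1.0 - annual_mortality) ** years

baseline = 1 / 70    # illustrative current-day rate (my placeholder, not the paper's)
phase2 = 1 / 1400    # the Phase 2 rate from the Shifting Mortality Rates section

for years in (10, 30, 50):
    print(f"{years:>2}-year pause: "
          f"P(survive | baseline) = {survival_prob(baseline, years):.3f}, "
          f"P(survive | 1/1400) = {survival_prob(phase2, years):.3f}")
```

At the low rate, even a 50-year pause costs only a few percentage points of survival probability, which is why the optimal pauses lengthen.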
What it doesn’t ask is: “How likely is it that background mortality drops meaningfully in the next 20-40 years without ASI, and what does that do to the expected value calculation?”
I’d expect it to ask the latter, because that question is actually pretty important. Look at these paragraphs in particular:
Yet if a medical breakthrough were to emerge—and especially effective anti-aging therapies—then the optimal time to launch AGI could be pushed out considerably. In principle, such a breakthrough could come from either pre-AGI forms of AI (or specialized AGI applications that don’t require full deployment) or medical progress occurring independently of AI. Such developments are more plausible in long-timeline scenarios where AGI is not developed for several decades.
Note that for this effect to occur, it is not necessary for the improvement in background mortality to actually take place prior to or immediately upon entering Phase 2. In principle, the shift in optimal timelines could occur if an impending lowering of mortality becomes foreseeable; since this would immediately increase our expected lifespan under pre-launch conditions. For example, suppose we became confident that the rate of age-related decline will drop by 90% within 5 years (even without deploying AGI). It might then make sense to favor longer postponements—e.g. launching AGI in 50 years, when AI safety progress has brought the risk level down to a minimal level—since most of us could then still expect to be alive at that time. In this case, the 50 years of additional AI safety progress would be bought at the comparative bargain price of a death risk equivalent to waiting less than 10 years under current mortality conditions.
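As a side note, the arithmetic in that last sentence does check out under its stated assumptions. Measuring cumulative death risk in units of “years at the current mortality rate”:

```python
# Cumulative death risk of a 50-year wait, expressed as the equivalent number
# of years at the current mortality rate (cumulative hazard / current hazard).
# Assumptions are the ones in Bostrom's example: a 90% rate drop after 5 years.
current_rate = 1.0                  # normalize the current hazard to 1 per year
reduced_rate = 0.1 * current_rate   # the 90% reduction
years_until_drop, launch_year = 5, 50

risk = (years_until_drop * current_rate
        + (launch_year - years_until_drop) * reduced_rate)
print(f"50-year wait ~ {risk:.1f} years of death risk at current rates")  # 9.5
```

5 + 45 × 0.1 = 9.5, i.e. “less than 10 years under current mortality conditions.” The internal math is fine; my complaint is about what comes next.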
In that passage, Bostrom explicitly acknowledges that non-ASI life extension would be a game-changer. He says the optimal launch time “could be pushed out considerably,” even to 50 years. He grants that it could come from pre-AGI AI or from independent medical progress. He even notes it doesn’t need to happen yet, just become foreseeable, to shift the calculus dramatically!
And then he just… moves on. He never examines the actual likelihood of it!
He’s essentially saying “if this thing happened, it would massively change my conclusions” without investigating how likely it is, in a paper that is otherwise obsessively thorough about parameterizing uncertainty.
Compare this to how he handles AI safety progress. He doesn’t just say “if safety progress is fast, you should launch sooner.” He models four subphases with different rates, runs eight scenarios, builds a POMDP, computes optimal policies under uncertainty. He treats safety progress as a variable to be estimated and integrated over.
Non-ASI life extension gets two paragraphs of qualitative acknowledgment and a sensitivity table. In a paper that’s supposed to be answering “when should we launch,” the probability of the single factor he admits would “push out [timing] considerably” is left nearly unexamined.
So when a reader looks at the main tables and sees “launch ASAP,” or close to it, across large swaths of parameter space, that conclusion implicitly assumes a near-0% chance of non-ASI life extension. The Shifting Mortality Rates section tells you the conclusion would change if that assumption is wrong, but it never examines why we should accept the assumption, or how confident in it he actually is.
Which is exactly the question a paper about optimal timing from a person-affecting stance should be engaging with, in my view.
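Even a toy version of that analysis would be informative. Here is a minimal sketch of what I mean, with the probability of a non-ASI breakthrough as an explicit parameter p. Every number and functional form below is made up for illustration; none of it is from the paper.

```python
# Toy model only: all parameters are mine, chosen for illustration.
M_NOW = 1 / 70      # current annual mortality (illustrative placeholder)
M_LOW = 1 / 1400    # post-breakthrough mortality (the paper's Phase 2 figure)
T_BREAK = 10        # year the hypothetical breakthrough arrives, if it does
RISK0 = 0.3         # initial chance the launch goes badly (made up)
HALFLIFE = 10.0     # years for safety work to halve the remaining risk (made up)

def p_survive(t: int, breakthrough: bool) -> float:
    """Probability an existing person survives to year t."""
    if not breakthrough or t <= T_BREAK:
        return (1 - M_NOW) ** t
    return (1 - M_NOW) ** T_BREAK * (1 - M_LOW) ** (t - T_BREAK)

def p_safe(t: int) -> float:
    """Probability that a launch at year t goes well."""
    return 1 - RISK0 * 0.5 ** (t / HALFLIFE)

def expected_value(t: int, p_breakthrough: float) -> float:
    """P(survive to launch AND launch goes well), marginalized over
    whether the non-ASI breakthrough happens."""
    return (p_breakthrough * p_survive(t, True) * p_safe(t)
            + (1 - p_breakthrough) * p_survive(t, False) * p_safe(t))

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    best = max(range(81), key=lambda t: expected_value(t, p))
    print(f"P(breakthrough) = {p:.2f} -> optimal launch year ~ {best}")
```

With these made-up parameters, the optimal launch year climbs from under a decade at p = 0 to several decades as p approaches 1. The specific outputs are worthless; the point is that p is a forecastable quantity, and the headline recommendation is sensitive to it.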
Does that make more sense?
Appreciate the remarks. I’d look forward to a numerical forecast breakdown if you ever have the time to tackle it.