I think you (and Bostrom) are failing pretty hard at distinguishing “person-affecting views” from “an individual who is over 60 years old and maybe has cancer” or similar.
If someone were actually making arguments specifically for the benefit of all the people alive today and in the next generation, I would expect very different arguments from the ones in this paper. You could reasonably try to argue that a 96% chance of the world ending is acceptable from the perspective of an 80-year-old who doesn’t care about their younger family, friends, or anyone else, but I don’t think that’s a serious argument.
For example, you would also have to do the math on the likelihood of biotech advances that let currently living 30- and 40-year-olds reach the immortality event horizon, as an alternative scenario to “either race for AGI or everyone alive today dies.” If you don’t do that kind of analysis, it doesn’t seem reasonable to argue that this is all in service of a perspective centered on those alive today rather than “hypothetical people”… and of course the conclusion is going to come out badly lopsided toward taking high risks if no other path to saving lives is seriously considered.
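To give a sense of the shape of the math I mean, here is a deliberately crude sketch; every number in it is a placeholder I made up for illustration, not a forecast:

```python
# Toy expected-life-years comparison for someone alive today.
# All inputs are made-up placeholders, purely to show the structure of the choice.

p_doom_now = 0.5       # hypothetical extinction risk if we race to AGI now
p_doom_wait = 0.05     # hypothetical risk after decades of extra safety work
p_survive_wait = 0.4   # hypothetical chance of living through the wait on current medicine
p_biotech = 0.3        # hypothetical chance non-AGI life extension arrives during the wait
long_life = 500.0      # hypothetical life-years gained if aligned AGI goes well

ev_race = (1 - p_doom_now) * long_life

# If biotech arrives, you (hypothetically) survive the wait; otherwise you only
# survive it with probability p_survive_wait.
ev_wait = (p_biotech + (1 - p_biotech) * p_survive_wait) * (1 - p_doom_wait) * long_life

print(f"race now:          {ev_race:.0f} expected life-years")  # 250
print(f"wait, then launch: {ev_wait:.0f} expected life-years")  # about 276
```

With p_biotech set to zero, the same arithmetic gives about 190 expected life-years for waiting, so even this toy conclusion flips entirely on a parameter the paper never tries to estimate.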
Separately, I think you’re strawmanning pretty hard if you think LessWrong readers don’t put serious weight on their own lives and the lives of their parents and other family members. A lot of people in this community suffer from some form of existential dread related to short timelines, and they are hit quite hard emotionally by the potential loss of their own lives, their families’ lives, and their children’s lives… not some abstract notion of “far future people.” That abstraction is often a part of their intellectual calculations and posts, but it would be a huge mistake to assume it’s the center of their lived emotional experience.
I suspect you either lack a clear understanding of the argument made in Bostrom’s post, or you are purposely choosing to not engage with its substance beyond the first thousand words or so.
Bostrom is not claiming that a 96% chance of catastrophe is acceptable as a bottom line. That figure came only from his simplest go/no-go model. The bulk of the post extends this model with diminishing marginal utility, temporal discounting, and other complications, which can push toward longer wait times and more conservative risk tolerance. Moreover, your specific objection, that he doesn’t consider alternative paths to life extension without AGI, is false. In fact, he addressed this objection directly in his “Shifting Mortality Rates” section, where he models scenarios in which non-AGI medical breakthroughs reduce background mortality before deployment, and shows this does lengthen optimal timelines. He also explicitly acknowledges in his distributional analysis that the argument differentially benefits the old and sick, and engages with that fact rather than ignoring it.
I find it frustrating when someone dismisses an argument as unserious while clearly not engaging with what was actually said. This makes productive dialogue nearly impossible: no matter how carefully a point is made, the other person ignores it and instead argues against a version they invented in their own head and projected onto the original author.
I’m sorry I’ve given the impression of not engaging with what was actually said. Let me try to say what I meant more clearly:
The Shifting Mortality Rates section asks: “If background mortality drops, how does that change optimal timing?” It then runs the math for a scenario where mortality plummets all the way to 1/1400 upon entering Phase 2, and shows the pause durations get somewhat longer.
What it doesn’t ask is: “How likely is it that background mortality drops meaningfully in the next 20-40 years without ASI, and what does that do to the expected value calculation?”
I’d expect the latter question to be asked because it’s actually pretty important? Like, look at these paragraphs in particular:
Yet if a medical breakthrough were to emerge—and especially effective anti-aging therapies—then the optimal time to launch AGI could be pushed out considerably. In principle, such a breakthrough could come from either pre-AGI forms of AI (or specialized AGI applications that don’t require full deployment) or medical progress occurring independently of AI. Such developments are more plausible in long-timeline scenarios where AGI is not developed for several decades.
Note that for this effect to occur, it is not necessary for the improvement in background mortality to actually take place prior to or immediately upon entering Phase 2. In principle, the shift in optimal timelines could occur if an impending lowering of mortality becomes foreseeable; since this would immediately increase our expected lifespan under pre-launch conditions. For example, suppose we became confident that the rate of age-related decline will drop by 90% within 5 years (even without deploying AGI). It might then make sense to favor longer postponements—e.g. launching AGI in 50 years, when AI safety progress has brought the risk level down to a minimal level—since most of us could then still expect to be alive at that time. In this case, the 50 years of additional AI safety progress would be bought at the comparative bargain price of a death risk equivalent to waiting less than 10 years under current mortality conditions.
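(To spell out the arithmetic behind that last sentence: if age-related mortality fell by 90% after 5 years, then a 50-year wait would presumably cost roughly 5 + 45 × 0.1 ≈ 9.5 years’ worth of death risk at current rates, hence “less than 10 years.”)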
Bostrom is explicitly acknowledging here that non-ASI life extension would be a game-changer. He says the optimal launch time “could be pushed out considerably,” even to 50 years. He acknowledges it could come from pre-AGI AI or independent medical progress. He even notes it doesn’t need to happen yet, just become foreseeable, to shift the calculus dramatically!
And then he just… moves on. He never examines the actual likelihood of it!
He’s essentially saying “if this thing happened it would massively change my conclusions” without then investigating how likely it is, in a paper that is otherwise obsessively thorough about parameterizing uncertainty.
Compare this to how he handles AI safety progress. He doesn’t just say “if safety progress is fast, you should launch sooner.” He models four subphases with different rates, runs eight scenarios, builds a POMDP, computes optimal policies under uncertainty. He treats safety progress as a variable to be estimated and integrated over.
Non-ASI life extension gets two paragraphs of qualitative acknowledgment and a sensitivity table. In a paper that’s supposed to be answering “when should we launch,” the probability of the single factor he admits would “push out [timing] considerably” is left nearly unexamined, in my view.
So when a reader looks at the main tables and sees “launch ASAP” or close to it across large swaths of parameter space, that conclusion implicitly assumes a near-0% chance of non-ASI life extension. The Shifting Mortality Rates section tells you the conclusion would change if that assumption is wrong, but it never really examines whether the assumption is justified, or what makes him certain or uncertain about it.
Which is exactly the question a paper about optimal timing from a person-affecting stance should be engaging with, in my view.
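To make that concrete, here is the kind of sensitivity check I mean, reusing the made-up placeholder numbers from my first comment (they are illustrations, not forecasts):

```python
# Sensitivity sketch; every input is a made-up placeholder, not a forecast.
p_doom_now, p_doom_wait = 0.5, 0.05  # hypothetical extinction risks (race now vs. after a long pause)
p_survive_wait = 0.4                 # hypothetical odds of living through the pause on current medicine
long_life = 500.0                    # hypothetical life-years if aligned AGI goes well

def ev_wait(p_biotech: float) -> float:
    """Expected life-years for someone alive today if we pause, as a function of
    the probability that non-AGI life extension arrives during the pause."""
    p_survive = p_biotech + (1 - p_biotech) * p_survive_wait
    return p_survive * (1 - p_doom_wait) * long_life

ev_now = (1 - p_doom_now) * long_life  # 250 expected life-years

for p in (0.0, 0.1, 0.2, 0.3, 0.5):
    verdict = "wait" if ev_wait(p) > ev_now else "launch now"
    print(f"P(non-AGI life extension) = {p:.1f}  ->  EV(wait) = {ev_wait(p):5.1f}  vs  EV(now) = {ev_now:.0f}  ->  {verdict}")
```

In this toy version the verdict flips somewhere between a 20% and a 30% chance of non-AGI life extension. The real model is far richer, but the point stands: reading “launch ASAP” off the main tables is implicitly a claim that this probability is very low, and that claim is never argued for.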
Does that make more sense?
(I added a few remarks on this in my reply to quetzal_rainbow, although—sorry—nothing numerical.)
Appreciate the remarks. Would look forward to a numerical forecast breakdown if you ever have the time to tackle it.