I think it’s Bostrom’s notion of existential catastrophe that introduced the conflation of extinction with losing control over most of the cosmic endowment. The “curtail long-term potential” phrasing is ambiguous between outcomes like making our current technological level permanently unachievable (say, through a civilization-decimating pandemic that can be survived but never truly recovered from), and making a hypothetical CEV-guided capture of the future lightcone unachievable while still granting something like CEV-guided capture of merely our galaxy.
This was cemented in common use by arguments about AI risk that boiled down to the loss of the full CEV-guided future lightcone almost certainly implying extinction, so that there was little use for a category of outcomes somewhere in the middle. But recently there are possibilities like even significantly-alien-on-reflection LLMs capturing the cosmic endowment for themselves without exterminating humanity, and possibly even gifting some post-singularity boons like uploading. This makes non-extinction existential catastrophe a plausible outcome, one much preferable to an extinction existential catastrophe (which still seems likely with even more alien AGI designs that are not far behind, or with LLMs themselves failing the AI risk resistance check by building unaligned AGIs).