My read of Bostrom’s intent is that s-risks are deliberately excluded because they fall under the “arcane” category of considerations (per the Evaluative Framework section), and that this analysis is meant to look only at Overton Window tradeoffs around lives saved.
However, I think a fair argument could still be made that s-risks fall within the Overton Window if framed correctly, e.g. “consider the possibility that your ideological/political enemies win forever”. This is already part of the calculus of AI labs and the relevant governments, in terms as simple as US vs. China.[1] Even so, I think Bostrom’s narrower analysis here is interesting.
One might argue this is not a “real” s-risk, but, e.g., Anthropic’s Dario Amodei seems, by his public statements, fairly willing to risk the destruction of humanity rather than let China reach ASI first, so I think it counts as a meaningful consideration in public discourse beyond mere lives saved or lost.
Fates worse than death are hardly arcane; they occur every day. And fates far worse than anything possible now are no more arcane than the hypothetical ability to cure all disease and aging via cellular-level nanomedicine mentioned in the piece. In fact, the capability to inflict fates much, much worse than death is directly implied by the medical capabilities he’s imagining here.