The good guys implement deliberate X-risk reduction efforts to stave off non-AI X-risks. Those might include a global nanotech immune system, cheap and rigorous biotech tests and safeguards, an asteroid defense system, nuclear safeguards, etc.
Why are these part of the “fantastic scenario”? An asteroid defense system will almost certainly not be needed: the overwhelmingly likely case (backed up by telescope observations and outside-view statistics) is that there won’t be any big threatening asteroids over the relevant timescales.
Similarly, many of the other scenarios you list are concerned with differences that would slightly (or perhaps substantially, for some) shift the probability of global outcomes, not the outcomes themselves. That’s pretty different from being a central requirement of a successful outcome. The framework here could be clearer.
I imagined the ‘fantastic scenario’ as being one in which “The good guys implement deliberate X-risk reduction efforts to stave off non-AI X-risks”. I meant to cite “a global nanotech immune system, cheap and rigorous biotech tests and safeguards, an asteroid defense system, nuclear safeguards” as examples of “X-risk reduction efforts” in order to fill out the category, regardless of the individual relevance of any of the examples. Anyway, it’s confusing, and I should remove it.
On the point about shifting probabilities rather than being central requirements: yeah, I think I want a picture of what the world looks like where the probability of success was as high as possible, and then we succeeded. I think the central requirements of successful outcomes are far fewer, and less helpful for figuring out where to go.