I agree; it’s a terrible term.
FWIW, “existential catastrophe” isn’t much better. I try to use it as Bostrom usually defined it, with the exception that I don’t count “the simulation gets shut down” as an existential catastrophe. Bostrom defined it that way, but in other forecasting domains, when you give a probability that some event will happen next year, you don’t lower it to account for the chance that the universe won’t exist by then because you’re in a simulation; I don’t think we should do that when forecasting the future of humanity either.
Then, as you allude to, there’s also the problem that a future in which 99% of our potential value is permanently beyond our reach, but in which we go on to realize the remaining 1% of our potential, would be an incredibly amazing, unbelievably good future, yet it should probably still qualify as an existential catastrophe under the definition. People have asked before what percentage of our potential lost would qualify as “drastic” (50%? 99%?); for practical purposes I usually mean something like >99.999%. That is, I don’t count extremely good outcomes as existential catastrophes merely for being very suboptimal.
I think a better way to define the class of bad outcomes may be in terms of recent-history value. Like if we say “one util” is the intrinsic value of the happiest billion human lives in 2024, then a future of Earth-originating life worth only 1,000 utils or less would be extremely suboptimal. This class would include all near-term extinction scenarios in which humanity fails to blossom in the meantime, but also scenarios in which humanity survives for a long time yet never comes anywhere close to achieving its potential (e.g. many permanent-disempowerment scenarios).
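To make the proposed criterion concrete, here is a minimal sketch in Python. Everything beyond the 2024 baseline definition and the 1,000-util cutoff (the names, the example futures, and their util values) is invented purely for illustration, not taken from the comment above.

```python
# "One util" = the intrinsic value of the happiest billion human lives in 2024
# (baseline from the comment above). All other numbers here are made up.
BASELINE_UTIL = 1.0
BAD_OUTCOME_THRESHOLD = 1_000 * BASELINE_UTIL  # futures worth <= this fall in the bad class

def is_bad_outcome(total_future_value_in_utils: float) -> bool:
    """Classify a candidate future of Earth-originating life under the proposed definition.

    A future worth 1,000 utils or less counts as a bad outcome: this covers
    near-term extinction without prior flourishing, as well as long survival
    that never comes close to realizing humanity's potential (e.g. permanent
    disempowerment).
    """
    return total_future_value_in_utils <= BAD_OUTCOME_THRESHOLD

# Illustrative comparisons (values invented for the example):
print(is_bad_outcome(50))       # True  -- e.g. near-term extinction
print(is_bad_outcome(900))      # True  -- e.g. long survival under permanent disempowerment
print(is_bad_outcome(10**12))   # False -- a flourishing future realizing much of its potential
```

One thing this framing makes explicit is that the cutoff is defined relative to recent-history value rather than as a fraction of potential, so it doesn’t sweep “very good but very suboptimal” futures into the bad class.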