The reason I said that is that “human potential”, strictly speaking, is indifferent to the values of the humans who make up that potential, and, importantly, existential risks pretty much have to be against everyone’s instrumental goals for the concept to have a workable definition.
In particular, human potential is indifferent to the diversity of human values, so long as any humans remain alive at all.
I would agree that if humans remain after a biological catastrophe, it’s not a big deal; it would be relatively easy to repopulate the planet.
I think it’s trickier in the situation above, where most of the economy is run by AI, though I’m really not sure of this.