My own take is summarized by Ryan Greenblatt here, and by Fabien Roger and myself: it is a problem, but not really an existential threat by itself (though only because humanity’s potential is technically fulfilled even if a random billionaire took control of Earth and killed almost everyone except people ideologically aligned with him, while AIs still take orders, he personally gets uploaded, and he lives a very rich life):
https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from#GChLyapXkhuHaBewq
https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from#GJSdxkc7YfgdzcLRb
https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from#QCjBC7Ym6Bt9pHHew
https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from#8yCL9TdDW5KfXkvzh
I find this quite disgusting, personally.
I think that his ‘very rich life’, and his henchmen, would be a terrible impoverishment of human diversity and values. My mental image for this is something like Hitler in his bunker while AIs are terraforming the Earth into an uninhabitable place.
The reason I said that is that “human potential”, strictly speaking, is indifferent to the values of the humans that make up that potential, and, importantly, existential risks pretty much have to be against everyone’s instrumental goals in order for the concept to have a workable definition.
In particular, human potential is indifferent to the diversity of human values, so long as any humans at all remain alive.
I would agree if humans remain after a biological catastrophe; I think that’s not a big deal, and it’s easy to repopulate the planet.
I think it’s trickier in the situation above, where most of the economy is run by AI, though I’m really not sure of this.