Is your idea that “gradual disempowerment” isn’t a real problem or that it’s a distraction from actual issues? I’ve heard arguments for both, so I’m not sure what the details of your beliefs are. Personally, I see “gradual disempowerment” as a process that has already begun, but the main danger is AI deciding we should die, not humans living in comfort while all the real power is held by AI.
The impression I got from Ngo’s post is that:
assorted varieties of gradual disempowerment do seem like genuine long-term threats
however, by the nature of the idea, it involves talking a lot about relatively small present-day harms from AI
therefore gradual disempowerment is highly at risk of being co-opted by people who mostly just want to talk about present-day harms, distracting both from AI x-risk overall and perhaps even from gradual-disempowerment-related x-risk