Also, in practical terms, gradual disempowerment does not seem like a particularly convenient set of ideas for justifying that working at an AGI company on something very prosaic which helps the company is the best thing to do.
The bigger issue, as Jackson Wagner says, is that there is a very real risk the concept will be coopted by people who want to talk mostly about present-day harms of AI. At best, this siphons resources away from genuinely useful work on gradual disempowerment threats and AI x-risk in general. At worst, it creates polarization around gradual disempowerment, with one party supporting the gradual disempowerment of humans and another opposing it, while the anti-disempowerment side becomes totally ineffective at dealing with the problem because it has been taken over by omnicause dynamics.