I believe this is the response you're referring to; interestingly, within it he says:

> I do worry about human power grabs: some humans obtaining greatly more power as enabled by AI (even if we have no serious alignment issues). However, I don't think this matches the story you describe and the mitigations seem substantially different than what you seem to be imagining.
Yes, GD largely imagines power concentrating directly into the hands of AI systems themselves rather than a small group of people. But if what we strictly care about is disempowerment, the only difference between the two scenarios is the agenda of those in control, not the disempowerment itself.
This is the problem I was referring to, one that is independent of alignment/corrigibility; apologies for the lack of clarity.