I believe this is the response you’re referring to. Interestingly, within it he says:
I do worry about human power grabs: some humans obtaining greatly more power as enabled by AI (even if we have no serious alignment issues). However, I don’t think this matches the story you describe and the mitigations seem substantially different than what you seem to be imagining.
Yes, GD largely imagines power concentrating directly in the hands of AI systems themselves rather than in a small group of people, but if what we strictly care about is disempowerment, the only difference between the two scenarios lies in the agenda of those in control, not in the disempowerment itself.
This is the problem I was referring to, the one that is independent of alignment/corrigibility; apologies for the lack of clarity.
Roughly speaking: the first models astounded the layperson and were scoffed at by the specialist, whereas between releases of current models the layperson can’t see much improvement while the specialist is astounded.
This take probably stems from Knuth’s essay, whose existence is possibly some of the most depressing news I’ve ever received.