The wide variance in responses to the GD paper indicates that it is an obvious problem for some people (e.g., Zvi in his review of it), whereas for others it’s a non-issue if you solve alignment/control (I think Ryan Greenblatt’s responses under one of Jan Kulveit’s posts about GD).
So I’d say it’s a legible problem for some (sub)groups and illegible for others, though there are open issues around conceptually bridging GD and orthodox AI X-risk that, as far as I’m aware, no one has nailed down yet.
I believe this is the response you’re referring to; interestingly, within it he says:
I do worry about human power grabs: some humans obtaining greatly more power as enabled by AI (even if we have no serious alignment issues). However, I don’t think this matches the story you describe and the mitigations seem substantially different than what you seem to be imagining.
Yes, GD largely imagines power concentrating directly into the hands of AI systems themselves rather than into the hands of a small group of people, but if what we strictly care about is disempowerment, the only difference between the two scenarios is the agenda of those in control, not the disempowerment itself.
This is the problem I was referring to that is independent of alignment/corrigibility; apologies for the lack of clarity.