If this is true, then a benevolent ruler AI would immediately build, and hand power over to, a condition of high-agency transhumanism; and a coordinated center* of mostly non-human decisionmaking probably is the only practical way to distribute the instruments for such a thing fairly, equally, and peacefully across the globe. Does the author seem to have considered this?
But if a benevolent ruler AI is necessarily self-invalidating, it seems likely that most attempts to align one don't actually succeed, and instead produce a not-actually-benevolent ruler AI. If you want to make a benevolent AI, never designing it to be a ruler in the first place just seems better.
Do you expect there to be parties who would try to align it towards having the intuitive character of a dictator? I don’t. I’ve been expecting alignment like “be good”. You’d still get a (momentary) prepotent singleton, but I don’t see that as being the alignment target.
This kind of question — whether the alignment target is being unnecessarily complicated — has become increasingly relevant. It's not just mathy pluralistic scifi-readers who are in this any more...
....?! yes?!
/me thinks of a specific country
._. …