but if the benevolent ruler ai is necessarily self-invalidating, it seems likely that most attempts to align one don’t actually align it, and instead result in a not-actually-benevolent ruler ai. and if you want to make a benevolent ai, never designing it to be a ruler in the first place seems just better
Do you expect there to be parties who would try to align it towards having the intuitive character of a dictator? I don’t. I’ve been expecting alignment like “be good”. You’d still get a (momentary) prepotent singleton, but I don’t see that as being the alignment target.
This kind of question, the unnecessary complication of the alignment target, has become increasingly relevant. It’s not just mathy pluralistic scifi-readers who’re in this any more...
....?! yes?!
/me thinks of a specific country
._. …