Thanks Justin! This is an interesting perspective. I’d enjoy seeing a compilation of different perspectives on ensuring AI alignment. (Another recurrent example would be the cybersecurity perspective on AI safety.)
Bureaucratization is the ultimate specific means to turn a mutually agreed upon community action rooted in subjective feeling into action rooted in a rational agreement by mutual consent.
This sounds a lot like the general situation of creating moral or judicial systems for a society. (When it works well.)
The principle of fixed competencies
The principle of hierarchically organized positions
Interestingly, these may run counter to Agile-associated practices and to some practices I would consider generally good. It seems wise to cultivate specialties, but also some breadth of competence. And to nurture bottom-up flows! Hierarchy has its limitations.
Thanks for the comment, David! It also caused me to go back and read this post again, which sparked quite a few old flames in the brain.
I agree that a collection of different approaches to ensuring AI alignment would be interesting! This is something that I'm hoping (now planning!) to capture in part with my exploration of scenario modeling that's coming down the pipe. Still, a brief overview of the different analytical approaches to AI alignment would be helpful on its own (if an updated version doesn't already exist that I'm unaware of).
I agree with your insight that Weber's description here generalizes to moral and judicial systems for a society. I suspect that if we went looking through Weber's writing, we would find similar analogies there as well.
I agree with your comment on the limitations of hierarchy for human bureaucracies. Fixed competencies and hierarchical flows benefit from bottom-up information flows and agile adaptation. However, I think this reinforces my point about machine beamte and AGI controlled through this method: precisely because agility and self-modification benefit human organizations, we might want to restrict those capacities in machine agents, deliberately sacrificing the benefits of adaptation in favor of aligned interests and controllability.
Thanks for the feedback! I can imagine some more posts in this direction in the future.