Gillian Hadfield discusses the importance of cooperative intelligence and normative systems for AI. She argues that humans have evolved the ability to create and enforce norms through third-party punishment, which makes stable groups and cooperation possible. Current AI approaches, however, focus too heavily on individual optimization. Instead, AI systems should learn to participate in and maintain normative infrastructure, rather than simply mimic existing human behavior. Understanding the generative process behind human norms and the role of normative reasoning may help build more cooperative AI systems. Silly rules, though seemingly unimportant, can serve as signals of willingness to comply with a group's norms and so help maintain group stability.
Cooperative intelligence is fundamental to human intelligence. Intelligence is not just about task completion and optimization but also about the capacity to cooperate with others.
The most fundamental form of human cooperation is creating and maintaining normative infrastructure: the shared rules of a group and the mechanisms that enforce them.
Third-party enforcement expands the set of possible solutions for cooperation, because almost any behavior can be sustained as an equilibrium once the group coordinates on punishing violations.
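To make the incentive logic concrete, here is a minimal toy calculation (all payoff numbers are assumptions chosen for illustration, not anything from the talk) showing how coordinated third-party punishment can flip the best response from norm violation to compliance:

```python
# Toy payoffs (assumed for illustration only).
COOPERATE_PAYOFF = 3   # payoff for following the norm
DEFECT_PAYOFF = 5      # temptation payoff for violating it
PUNISHMENT = 4         # cost each coordinated enforcer imposes on a violator

def payoff(action: str, n_enforcers: int) -> int:
    """Payoff to an agent, given its action and how many third parties
    are coordinated to punish norm violations."""
    if action == "cooperate":
        return COOPERATE_PAYOFF
    return DEFECT_PAYOFF - PUNISHMENT * n_enforcers

for n in range(3):
    best = max(("cooperate", "defect"), key=lambda a: payoff(a, n))
    print(f"{n} enforcers: cooperate={payoff('cooperate', n)}, "
          f"defect={payoff('defect', n)}, best response: {best}")
```

With zero enforcers, defection pays; with one or more coordinated enforcers, compliance pays. The arbitrariness of the numbers is the point: once enough third parties coordinate on punishing violations, the enforced rule becomes self-sustaining whatever its content.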
Silly rules, or rules with no direct impact on welfare, can help stabilize groups by signaling willingness to comply with and enforce important rules.
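A minimal simulation sketch of this signaling idea (a toy model with assumed numbers, not a study discussed in the talk): if a latent "norm-follower" trait drives compliance with both kinds of rules, then cheap-to-observe silly-rule compliance is informative about compliance with the rules that matter.

```python
import random

random.seed(0)

def make_agent():
    # Assumption: one latent "norm-follower" trait drives compliance with
    # both silly and important rules, with some noise on the silly side.
    follower = random.random() < 0.7
    reliable_signal = random.random() < 0.9
    return {
        "follows_silly": follower if reliable_signal else not follower,
        "follows_important": follower,
    }

agents = [make_agent() for _ in range(10_000)]

base = sum(a["follows_important"] for a in agents) / len(agents)
compliers = [a for a in agents if a["follows_silly"]]
signal = sum(a["follows_important"] for a in compliers) / len(compliers)

print(f"P(follows important rules)            = {base:.2f}")
print(f"P(follows important rules | silly ok) = {signal:.2f}")
```

Under these assumed parameters, silly-rule compliance raises the estimated probability of important-rule compliance from roughly 0.70 to roughly 0.95, even though the silly rule itself has no welfare consequences.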
Normativity is a valuable tool because of its plasticity: a group can change the content of its rules while the system of rules and enforcement itself remains stable.
Building AI systems that can participate in and be competent actors within normative infrastructure is more complex than simply stuffing norms into them.
AI systems should observe how groups resolve uncertainty about which actions are punishable, and where decision-making authority over norms comes from.
Normativity in humans involves giving reasons and assessing what constitutes good reasons, which is itself subject to normative structure.
Internal moral reasoning can represent the group's evaluation of one's behavior and predict third-party enforcement.
The model of the selfish, utility-maximizing individual is not representative of how humans actually behave in groups.
https://www.youtube.com/watch?v=BCQJ2G3_Hn4