These quoted passages made me curious what cooperation-focused folks like David Manheim, Ivan Vendrov, and others think of this essay (I’m not plugged into the “cooperation scene” at all, so I’m probably missing most of the relevant thinkers and commenters):
We proceed from the core assumption that stable human coexistence (a precondition for flourishing), particularly in diverse societies, is made possible not by achieving rational convergence on values, but by relying on practical social technologies – like conventions, norms, and institutions – to manage conflict and enable coordination. We do not see persistent disagreement as problematic or puzzling. For the purpose of effectively navigating coexistence, we choose to treat the disagreements as basic elements to work with and proceed as if fundamental differences are enduring features of our shared landscape. We will argue that this approach is more practical than alternatives which view disagreements as mere errors on a path to rational convergence. …
A key insight we can draw then is that what holds humans together despite profound disagreements is not value convergence, but practical mechanisms for coexistence – which we see as social technologies. …
This shift has profound implications for how we conceptualize AI safety and ethics. Instead of asking “How do we align AI with human values?” – a question presupposing a single, coherent set of “human values” that can be discovered and encoded – we should ask the more fundamental question that humans have grappled with for millennia: “How can we live together?” This perspective embraces pluralism, contextualism, contingency, and the messy reality of human social life – the vibrant, diverse quilt we’re all continually sewing together. Crucially, the two questions are not equivalent. “How can we live together?” acknowledges that a unified set of values cannot function as a practical foundation for society, and that the challenge is to build systems (both social and technical) that can navigate a landscape of deep and persistent disagreement. …
The part of me that finds the cooperation aesthetic explored in Manheim’s and Vendrov’s writings appealing can’t see how to reconcile the nice-sounding polychrome patchwork quilt vision with the part of me that thinks some things are just moral atrocities, full stop, and that would push back against (say) communities that consider them an essential part of their culture rather than compromise with them. Holden’s future-proof ethics feels like a preferable middle ground: systemisation + “thin utilitarianism” + sentientism as guiding principles for moral progress, but not a full spec of the sort the “axiom of moral convergence” implicitly suggests exists.