I think both sets of bullets (multi-multi (eco?)systems either replicating cooperation-etc-as-we-know-it or creating new forms of cooperation etc.) are important; going forward I'll call them prosaic cooperation and nonprosaic cooperation, respectively. When I say "cooperation etc." I mean cooperation, coordination, competition, negotiation, and compromise.
You’ve provided crisp scenarios, so thanks for that!
In some sense “most” of that work will presumably be done by AI systems, but doing the work ourselves may unlock those benefits much earlier.
But if the AI does that work, there will be an interpretability problem, an inferential distance. I'm imagining people asking a somewhat single-single-aligned AI for solutions to multi-multi problems, and the black box returning something inscrutable. Putting ourselves in a position where we can grok its recommendations is aligned with researching these problems for ourselves, so we won't have to ask the black box in the first place, though this probably only applies to prosaic cooperation.