Nice post! However, I think you’re missing a key quantity: the scale of interaction. If the benefits of cooperation are global (as they are for not building AI that kills everyone), assortative interactions do not solve the problem; there is no partner choice, spatial structure, etc. when the effects of the relevant behaviour are global in scale. In that case some form of punishment, policing, reward, or bargains outside the focal game is needed to modify the payoff matrix to get cooperation.
Thanks, and that’s a completely fair point. My focus here has been on systems of pairwise interactions (the prisoner’s dilemma), as that seems closer to the original Moloch framing. But you’re right that another very relevant angle is the move to multiplayer or ‘public goods’ games. I think you could argue that network structure is still relevant in those systems too (see e.g. this article). As far as I can make out, the consensus seems to be that the effects of the mechanisms discussed in this post are weaker in such systems, but not non-existent. And the fact that they matter at all is important for avoiding the pure pessimism of the classic Moloch framing.
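To make the contrast concrete, here is a toy numerical sketch (my own illustration, not from the post or the article linked above; all parameter names and the fine mechanism are my assumptions). In a pairwise donation game, assortment alone can favour cooperation via a Hamilton's-rule-style condition, whereas in a global public-goods game the marginal private return of contributing is negative no matter who you are matched with, so a payoff-modifying fine on defectors is needed:

```python
# Toy model (a sketch under my own assumptions, not the post's method):
# assortment can rescue cooperation in a pairwise donation game, but not
# in a global public-goods game, where punishment must modify payoffs.

def pairwise_coop_favored(b, c, r):
    """Donation game: a cooperator pays cost c so its partner gains b.
    With assortment r (probability of meeting your own type, otherwise
    a random 50/50 partner), the cooperator-minus-defector payoff gap
    is r*b - c, so cooperation is favoured exactly when r > c/b
    (a Hamilton's-rule analogue)."""
    return r * b - c > 0

def global_pgg_contrib_favored(n, m, c, fine=0.0):
    """Global public-goods game with n players: each contribution c is
    multiplied by m (1 < m < n) and shared equally among ALL n players,
    so assortment gives no leverage. Contributing beats defecting only
    if the fine on defectors exceeds the net private cost c*(1 - m/n)."""
    private_return = m * c / n  # your own share of your own contribution
    return private_return - c + fine > 0

# Assortment fixes the pairwise game (here the threshold is r > c/b = 1/3)...
assert pairwise_coop_favored(b=3.0, c=1.0, r=0.5)
assert not pairwise_coop_favored(b=3.0, c=1.0, r=0.2)

# ...but not the global game: the marginal return stays negative
# regardless of matching, because the pool is shared by everyone.
assert not global_pgg_contrib_favored(n=100, m=3.0, c=1.0)
# A large enough fine on defectors does it (f > c*(1 - m/n) = 0.97):
assert global_pgg_contrib_favored(n=100, m=3.0, c=1.0, fine=1.0)
```

The point of the sketch is that in the pairwise game the decision-relevant payoff term scales with assortment `r`, while in the global game it scales with `m/n`, which assortment cannot touch; only an external payoff modification (the `fine`) changes the sign.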