I think one could view the abstract idea of "coordination as a strategy for AI risk" as itself a winner, and the potential participants in future coordination as the audience—the more people believe that coordination can actually work, the more likely coordination is to happen. I'm not sure how much weight this consideration should be given.