I’m struggling to find the meat in this post. The idea that winning a fight for control can actually mean losing, because one’s leadership proves worse for the group than if one’s rival had won, strikes me as one of the most basic properties of politics. The fact that the questions “Who would be better for national security?” and “Who will ensure that I, and not my neighbor, get more of the pie?” are quite distinct is something anyone who has ever voted in a national election ought to have considered. You state that “most power contests are not like this” (i.e. about shared outcomes), but that’s just plainly wrong: it should be obvious to anyone living in a human group that “what’s good for the group” (including who should get what, to incentivize defense of, or other productive contributions to, the group) is usually the crux; otherwise there would be no point in political debate. So what am I missing?
Ironically, you then blithely state that AI risk is a special case where power politics really ARE purely about “us” all being in the same boat, completely ignoring the concern that some accelerationists might eventually try to run away with the whole game (I have been beating the drum about asymmetric AI risk for some time, so this is personally frustrating). Even if those concerns are secondary to the wholly shared risk, it seems odd to (incorrectly) describe “most power politics” as being about purely asymmetric outcomes and then not account for asymmetric outcomes at all in your treatment of AI risk.