Eric Hoel has written about this, and his impression (as well as my own) of the Party's reigning attitude is arguably pro-coordination. This might sound strange, since I know from internal discussions that there was debate about what to do if an AGI seemed imminent—one of the answers was to immediately launch a nuclear attack and make sure no one (and no machine) survives.
It’s a very “brute” attitude toward alignment, but there’s a clear preference for humanity remaining in charge, and the idea of machines taking the “future” away from the Chinese people is seen as unacceptable.
Conversely, in a world without AI anywhere, the Party seems to think it will “win.” So the game dynamics encourage it to agree to stymieing AI development everywhere, even if it pursues something more low-key in secret.
What’s intriguing to me is that while the OP and I disagree on many of the possibilities for agreement, the central dynamics remain: competition, “brute force,” and the Party’s greater technological savvy.