I don’t think China is willing to accept yielding. I can’t think of any reason that they would.
This is totally a shower thought, and I don’t trust it, but what about a strategy of semi-cooperation? China has been contributing to open-source models. Those models have been keeping up with, and in some cases catching up to, the capabilities of the closed-source models.
I wonder if mutual containment could come from both sides having similar capabilities as they learn from each other’s research. Then neither side has a gross advantage. Maybe it doesn’t have to be zero-sum.
Yeah, I think an alternative to “mutual yield” would be “only develop powerful AI as an international collaboration.” So, yield in terms of building it unilaterally, but not necessarily on the whole.
Semi-cooperation is one way for both sides to learn from each other—but so is poor infosec or even outright espionage. If both countries are leaking or spying enough, that might create a kind of uneasy balance (and transparency), even without formal agreements. It’s not exactly stable, but it could prevent either side from gaining a decisive lead.
In fact, sufficiently bad infosec might even make certain forms of cooperation and mutual verification easier. For instance, if both countries are considering setting up trusted data centers to make verifiable claims about AGI development, the fact that espionage already permeates much of the AI supply chain could paradoxically lower the bar for trust. In a world where perfect secrecy is already compromised, agreeing to “good enough” transparency might become more feasible.