I can think of plenty of reasons, of varying degrees of sensibleness.
Arguments
1. Some people believe that (a) controlled on-paradigm ASI is possible, but that (b) it would require spending some nontrivial amount of resources/time on alignment/control research[1], and that (c) the US AGI labs are much more likely to spend those resources than the Chinese ones. Therefore, the US winning is less likely to lead to omnicide.
I think it’s not unreasonable to believe (c), so if you believe (a) and (b), as many people do, the conclusion checks out. I assign low (but nonzero) probability to (a), though.
2. Even if the Chinese labs can keep ASI aligned/under control, some people are scared of being enslaved by the CCP, and think that the USG becoming god is going to be better for them.[2] This probably includes people who profess to only care about the nobody-should-build-it thing: they un/semi-consciously track the S-risk possibility, and it’s awful-feeling enough to affect their thinking even if they assign it low probability.
I think that’s a legitimate worry; S-risks are massively worse than X-risks. But I don’t expect the USG’s apotheosis to look pretty either, especially not under the current administration, and same for the apotheosis of most AGI labs, so the point is mostly moot.
I guess Anthropic or maybe DeepMind could choose non-awful outcomes? So sure, if the current paradigm can lead to controlled ASI, and the USG stays asleep, and Anthropic/DM are the favorites to win, “make China lose” makes some sense.
3. A variant of the above scenarios which does involve an international pause, with some coordination to only develop ASI once it can be kept under control. This doesn’t necessarily guarantee that the ASI, once developed, will be eudaimonic, so “who gets to ASI first/has more say on ASI” may matter; GOTO (2).
4. The AI-Risk advocates may feel that they have more influence over the leadership of the US labs than over the Chinese ones. For US-based advocates, this is almost certainly correct. If that leadership can be convinced to pause, this buys us as much time as it’d take for the runners-up to catch up. Thus, the further behind China is, the more time we can buy in this hypothetical.
In addition, if China is way behind, it’s more likely that the US AGI labs would agree to stop, since more time to work would increase the chances of success of [whatever we want the pause for, e. g. doing alignment research or trying to cause an international ban].
5. Same as (4), but for governments. Perhaps the USG is easier to influence into arguing for an international pause. If so, (a) the USG is more likely to do this if it feels that it’s comfortably ahead of China rather than nose-to-nose, and (b) China is more likely to agree to an international ban if the USG is speaking from a position of power and is ahead on AI than if it’s behind/nose-to-nose. (Both because the ban would be favorable to China geopolitically, and because the X-risk arguments would sound more convincing if they don’t look like motivated reasoning/bullshit you’re inventing to convince China to abandon a technology that gives it a geopolitical lead over the US.)
6. Some less sensible/well-thought-out variants of the above, e. g.:
- Having the illusion of having more control over the US labs/government.
- Semi/un-consciously feeling that it’d be better if your nation ends the world than if the Chinese do it.
- Semi/un-consciously feeling that it’d be better if your nation is more powerful/ahead of a foreign one, independent of any X-risk considerations.
7. Suppose you think the current paradigm doesn’t scale to ASI, or that we’ll succeed in internationally banning ASI research. The amount of compute at a nation’s disposal is still likely to be increasingly important in the coming years (just because it’d allow the existing AI technology to be better harnessed for military and economic ends). Thus, restricting China’s access to compute is likely to be better for the US as well.
This has nothing to do with X-risks, though; it’s prosaic natsec stuff.
tl;dr:
- If we get alignment by default, some US-based actors winning may be more likely to lead to a good future than the Chinese actors winning.
- If on-paradigm ASI alignment is possible given some low-but-nontrivial resource expenditure, the US labs may be more likely to spend the resources on it than the Chinese ones.
- US AI Safety advocates may have more control over the US AGI labs and/or the USG. The more powerful those are relative to the foreign AGI researchers, the more leverage that influence provides, including for slowing down/banning AGI research.
- US AI Safety advocates may be at least partly motivated by dumb instincts for “my nation good, their nation bad”, and therefore want the US to win even if it’s winning a race-to-suicide.
- Keeping a compute lead may be geopolitically important even in non-ASI worlds.
[1] E. g., Ryan Greenblatt thinks that spending just 5% more resources than is myopically commercially expedient would drive the risk down to 50%. AI 2027 also assumes something like this.
[2] E. g., I think this is the position of Leopold Aschenbrenner.
TBC, my view isn’t that this is sufficient for avoiding takeover risk; it’s that this suffices for “you [to] have a reasonable chance of avoiding AI takeover (maybe 50% chance of misaligned AI takeover?)”.
(You seem to understand that this is my perspective and I think this is also mostly clear from the context in the box, but I wanted to clarify this given the footnote might be read in isolation or misinterpreted.)
Edited for clarity.
I’m curious: what’s your estimate of how many resources it’d take to drive the risk down to 25%, 10%, 1%?