In my experience, when people say “it’s worse for China to win the AI race than America”, their main concern is that Chinese control of the far future would lead to a much less valuable future than American control would, not that American control reduces P(AI takeover). E.g. see this comment.
FWIW, I have seen a decent amount of flip-flopping on this question. My current guess is that most of the time when people say this, they don’t mean either of those things but have some other reason for the belief, and choose whichever justification they think will be most compelling to their interlocutor. (For example, I’ve had several instances of the same person telling me at one time that they were centrally concerned about China winning because it increased P(AI takeover), and at another time, in a different social context, that they were centrally concerned about Chinese values being less good by their lights if optimized.)
It really depends on what you mean by “most of the time when people say this”. I don’t think my experience matches yours.
FWIW, my enthusiasm for “make America more good at AI than China” type policies comes somewhat more from considerations like “a larger US advantage lets the US spend more of a lead on safety without needing international cooperation” than considerations like “a CCP-led corrigible ASI would lead to much worse outcomes than a USG-led corrigible ASI”. Though both are substantial factors for me and I’m fairly uncertain; I would not be surprised if my ordering here switched in 6 months.
FWIW, my view is that the badness is somewhat evenly split between increases to misaligned AI takeover risk and the far future being worse conditional on no misaligned AI takeover. (Maybe 2⁄5 increased misaligned AI takeover risk and 3⁄5 the far future being worse? It depends on what you mean, though, because China winning is also correlated with the US and China being close, which is probably correlated with more racing and thus more misaligned AI takeover risk.)
To clarify, by “takeover” here do you mean “misaligned AI takeover”? I.e. does your “no takeover” conditional include worlds where e.g. the CCP uses AI to take over?
Yes, I just meant “misaligned AI takeover”. Edited to clarify.