There are some additional reasons not to prefer AGI development in China, beyond the question of which values would be embedded in the AGI systems, which I haven't seen mentioned here:
Systemic opacity, state-driven censorship, and state control of the media mean that AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.
There are more checks and balances in the US than in China, which you may think could, e.g., positively influence regulation; or, if there's a government project, help incentivise responsible decisions there; or, if someone attempts to concentrate power using some early AGI, stop that from happening. E.g., in the West voters have some degree of influence over the government, there's the free press, the judiciary, an ecosystem of nonprofits, and so on. In China, the CCP doesn't have total control, but it has far more than Western governments do.
I think it’s also very rare that people are actually faced with a choice between “AGI in the US” versus “AGI in China”. A more accurate but still flawed model of the choice people are sometimes faced with is “AGI in the US” versus “AGI in the US and in China”, or even “AGI in the US, and in China 6-12 months later” versus “AGI in the US, and in China 3-6 months later”.
@Tomás B. There is also vastly less of an “AI safety community” in China—probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (I.e., more of China’s “AI safety research” is probably focused on things like reducing LLM hallucinations, making sure models don’t make politically incorrect statements, etc.)
Where are the Chinese equivalents of the American and British government AI Safety Institutes (AISIs)? Of organizations like METR, Epoch, Forethought, MIRI, et cetera?
Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made public statements about potential AI x-risks?
Have any Chinese labs published “responsible scaling plans” or tiers of “AI Safety Levels” as detailed as those from OpenAI, DeepMind, or Anthropic? Or discussed how they’re planning to approach the challenge of aligning superintelligence?
Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who’ve left OpenAI over the years), or resisted the militarization of AI technology (like Googlers protesting Project Maven, or Microsoft employees protesting the IVAS HMD program)?
When people ask this question about the relative value of “US” vs “Chinese” AI, they often go straight for big-picture political questions about whether the leadership of China or the US is more morally righteous, less likely to abuse human rights, et cetera. Personally, in these debates, I do tend to favor the USA, although certainly both the US and China have many deep and extremely troubling flaws—both seem very far from the kind of responsible, competent, benevolent entity to whom I would like to entrust humanity’s future.
But before we even get to that question of “What would national leaders do with an aligned superintelligence, if they had one,” we must answer the question “Do this nation’s AI labs seem likely to produce an aligned superintelligence?” Again, the USA leaves a lot to be desired here. But oftentimes China seems to not even be thinking about the problem. This is a huge issue from both a technical perspective (if you don’t have any kind of plan for how you’re going to align superintelligence, perhaps you are less likely to align superintelligence), AND from a governance perspective (if policymakers just think of AI as a tool for boosting economic / military progress and haven’t thought about the many unique implications of superintelligence, then they will probably make worse decisions during an extremely important period in history).
Now, indeed—has Trump thought about superintelligence? Obviously not—just trying to understand intelligent humans must be difficult for him. But the USA in general seems much more full of people who “take AI seriously” in one way or another—Silicon Valley CEOs, Pentagon advisers, billionaire philanthropists, et cetera. Even in today’s embarrassing administration, there are very high-ranking people (like Elon Musk and J. D. Vance) who seem at least aware of the transformative potential of AI. China’s government is more opaque, so maybe they’re thinking about this stuff too. But all public evidence suggests to me that they’re kinda just blindly racing forward, trying to match and surpass the West on capabilities, without giving much thought as to where this technology might ultimately go.
The four questions you ask are excellent, since they get away from general differences of culture or political system, and address the processes that are actually producing Chinese AI.
The best reference I have so far is a May 2024 report from Concordia AI on “The State of AI Safety in China”. I haven’t even gone through it yet, but let me reproduce the executive summary here:
The relevance and quality of Chinese technical research for frontier AI safety have increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating “power-seeking” and “self-awareness” risks of LLMs.
There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers.
China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
Since 2022, 8 Track 1.5 or 2 dialogues focused on AI have taken place between China and Western countries, with 2 focused on frontier AI safety and governance.
Chinese national policy and leadership show growing interest in developing large models while balancing risk prevention.
Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and stipulating value alignment of AGI.
Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety.
In recent months, Chinese experts have discussed several focused AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.
So clearly there is a discourse about AI safety there, one that does sometimes extend even as far as the risk of extinction. It’s nowhere near as prominent or dramatic as it has been in the USA, but it’s there.
Speaking to post-labor futures, I feel that a CCP AGI would be more likely to redistribute resources equitably than a US one would.
Over the last 50 years or so, productivity growth in the US has translated into the ultra-wealthy growing in wealth while wages for the working class have stagnated. Coupled with the growing oligarchy in the US, I don’t expect the USG to have the interests of the people first and foremost. If the USG has AGI, I expect that the trend of rising inequality will continue: billionaires will reap the benefits and the rest of the people will be economically powerless… at best surviving on UBI.
As for China, I think the CCP has been plagued by fewer corporate interests and power-seeking pressures. I don’t know much about Xi and his administration, but I assume that they are less corrupt and more caring about their people. China has its capitalism under control, and I believe they are more likely to create a fully automated luxury communism utopia than a hyper-capitalist hell. As for lacking American free speech, I think equitable resource distribution is at least 100x more important.
As long as the US stays staunchly capitalist, I fear they will not be able/willing to redistribute AGI abundance.
I think when it comes to the question of “who’s more likely to use AGI to build fully automated luxury communism”, there are actually a lot of competing considerations on both sides, and it’s not nearly as clear as you make it out to be.
Xi Jinping, the leader of the CCP, seems like kind of a mixed bag:
On the one hand, I agree with you that Xi does seem to be a true believer in some elements of the core socialist dream of equality, common dignity for everyone, and improved lives for ordinary people. Hence his “Common Prosperity” campaign to reduce inequality, anti-corruption drives, bragging (in an exaggerated but still-commendable way) about having eliminated extreme poverty, etc. Having a fundamentally humanist outlook and not being an obvious psychopath / destructive idiot / etc is of course very important, and always reflects well on people who meet that description.
On the other hand, as others have mentioned, the intense repression of Hong Kong, Tibet, and most of all Xinjiang does not bode super well if we are asking “who seems like a benevolent guy to whom to entrust the future of human civilization”. In terms of scale and intensity, the extent of the anti-Uyghur police state in Xinjiang seems beyond anything the USA has done to its own citizens.
More broadly, China generally seems to have less respect for individual freedoms, and instead positions itself as governing for the benefit of the majority. (Much harsher covid lockdowns are an example of this, as is reduced freedom of speech, fewer regulations protecting the environment or private property, etc. Arguably the benefits have included things like a faster pace of development, fewer covid deaths, etc.) This effect could cut both ways—respect for individual freedoms is pretty important, but governing for the benefit of the majority is by definition gonna benefit most ordinary people if you do it well.
Your comment kind of assumes that China = socialist and socialism = more willingness to “redistribute resources in an equitable manner”. But Xi has taken pains to explain that he is very opposed to what he calls “welfarism”—in his view, socialism doesn’t involve China handing out subsidized healthcare, retirement benefits, etc, to a “lazy” population, like we do here in the decadent West. This attitude might change in the future if AGI generates tons of wealth (right now they are probably afraid that Chinese versions of Social Security and Medicare might blow a hole in the government budget, just as those programs are currently blowing a hole in the US budget)...
...But it also might not! Xi generally seems weirdly unconcerned with the day-to-day suffering of his people, not just in a “human rights abuses against minorities” sense, but also in the sense that he is always banning “decadent” forms of entertainment like videogames, boy bands, etc, telling young people to suck it up and “eat bitterness” because hardship builds character, etc.
China has been very reluctant to do Western-style consumer stimulus to revive its economy during recessions—instead of helping consumers afford more household goods and luxuries, Xi usually wants to stimulate the economy by investing in instruments of national military/industrial might, subsidising strategic areas like nuclear power, aerospace, quantum computing, etc.
Meanwhile on the American side, I’d probably agree with you that the morality of America’s current national leaders strikes me as… leaving much to be desired, to put it lightly. Personally, I would give Trump maybe only 1 or 1.5 points out of three on my earlier criteria of “fundamentally humanist outlook + not a psychopath + not a destructive idiot”.
But America has much more rule of law and many more checks and balances than China (even as Trump is trying to degrade those things), so the future of AGI would perhaps not rest so solely in the hands of the one guy at the top.
And also, more importantly IMO, America is a democracy, which means a lot can change every four years and the population will have more of an ability to give feedback to the government during the early years of AGI takeoff.
In particular, beyond just swapping out the current political party for leaders from the other political party, I think that if ordinary people’s economic position changed very dramatically due to the introduction of AGI, American politics would probably also shift very rapidly. Under those conditions, it actually seems pretty plausible that America could switch ideologies to some kind of Georgist / socialist UBI-state that just pays lip service to the idea of capitalism—kinda like how China after Mao switched to a much more capitalistic system that just pays lip service (“socialism with Chinese characteristics”) to many of the badly failed policies of Maoism. So I think the odds of “the US stays staunchly capitalist” are lower than the odds of “China stays staunchly whatever-it-is-currently”, just because America will get a couple of opportunities to radically change direction between now and whenever the long-term future of civilization gets locked in, whereas China might not.
In contrast to our current national leaders, some of the leaders of top US AI labs strike me as having pretty awesome politics, honestly. Sam Altman, despite his numerous other flaws, is a Georgist and a longtime supporter of UBI, and explicitly wants to use AI to achieve a kind of socialist utopia. Dario Amodei’s vision for the future of AI is similarly utopian and benevolent, going into great detail about how he hopes AI will help cure or prevent most illness (including mental illness), help people be the best versions of themselves, assist the economic development of poor countries, help solve international coordination problems to lead to greater peace on earth, etc. Demis Hassabis hasn’t said as much (as far as I’m aware), but his team has the best track record of using AI to create real-world altruistic benefits for scientific and medical progress, such as by creating AlphaFold 3. Maybe this is all mere posturing from cynical billionaires. But if so, the posturing is quite detailed and nuanced, indicating that they’ve thought seriously about these views for a long time. By contrast, there is nothing like this coming out of DeepSeek (which is literally a Wall Street-style hedge fund combined with an AI lab!) or other Chinese AI labs.
Finally, I would note that you are basically raising concerns about humanity’s “gradual disempowerment” through misaligned economic and political processes, AI concentration-of-power risks where a small cadre of capricious national leaders and insiders gets to decide the fate of humanity, etc. Per my other comment in this thread, these types of AI safety concerns seem to be discussed, right now, almost exclusively in the West, and not in China. (This particular gradual-disempowerment stuff seems even MORE lopsided in favor of the West, even compared to superintelligence / existential risk concerns in general, which are already more lopsided in favor of the West than the entire category of AI safety overall.) So… maybe give some weight to the idea that if you are worried about a big problem, the problem might be more likely to get solved in the country where people are talking about it!
“Systemic opacity, state-driven censorship, and state control of the media mean that AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc.”
That’s screened off by the actual evidence, which is that top labs don’t publish much no matter where they are, so I’d only agree with “equally opaque”.