@Tomás B. There is also vastly less of an “AI safety community” in China—probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI. (i.e., more of China’s “AI safety research” is probably focused on things like reducing LLM hallucinations, making sure models don’t make politically incorrect statements, etc.)
Where are the Chinese equivalents of the American and British government AI Safety Institutes? Or of organizations like METR, Epoch, Forethought, MIRI, et cetera?
Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made any public statements about the danger of potential AI x-risks?
Have any Chinese labs published “responsible scaling policies” or tiers of “AI Safety Levels” as detailed as those from OpenAI, DeepMind, or Anthropic? Or discussed how they plan to approach the challenge of aligning superintelligence?
Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who’ve left OpenAI over the years), or resisted the militarization of AI technology (like Googlers protesting Project Maven, or Microsoft employees protesting the IVAS HMD program)?
When people ask this question about the relative value of “US” vs “Chinese” AI, they often go straight for big-picture political questions about whether the leadership of China or the US is more morally righteous, less likely to abuse human rights, et cetera. Personally, in these debates, I do tend to favor the USA, although certainly both the US and China have many deep and extremely troubling flaws—both seem very far from the kind of responsible, competent, benevolent entity to whom I would like to entrust humanity’s future.
But before we even get to that question of “What would national leaders do with an aligned superintelligence, if they had one,” we must first answer the question “Do this nation’s AI labs seem likely to produce an aligned superintelligence?” Again, the USA leaves a lot to be desired here. But China often seems not even to be thinking about the problem. This is a huge issue from both a technical perspective (if you don’t have any kind of plan for how you’re going to align superintelligence, perhaps you are less likely to align superintelligence), AND from a governance perspective (if policymakers just think of AI as a tool for boosting economic / military progress and haven’t thought about the many unique implications of superintelligence, then they will probably make worse decisions during an extremely important period in history).
Now, indeed—has Trump thought about superintelligence? Obviously not—just trying to understand intelligent humans must be difficult for him. But the USA in general seems much more full of people who “take AI seriously” in one way or another—Silicon Valley CEOs, Pentagon advisers, billionaire philanthropists, et cetera. Even in today’s embarrassing administration, there are very high-ranking people (like Elon Musk and J. D. Vance) who seem at least aware of the transformative potential of AI. China’s government is more opaque, so maybe they’re thinking about this stuff too. But all public evidence suggests to me that they’re kinda just blindly racing forward, trying to match and surpass the West on capabilities, without giving much thought as to where this technology might ultimately go.
The four questions you ask are excellent, since they get away from general differences of culture or political system and address the processes that are actually producing Chinese AI.
The best reference I have so far is a May 2024 report from Concordia AI on “The State of AI Safety in China”. I haven’t gone through all of it yet, but let me reproduce the executive summary here:
The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating “power-seeking” and “self-awareness” risks of LLMs.
There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers.
China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023.
Since 2022, 8 Track 1.5 or 2 dialogues focused on AI have taken place between China and Western countries, with 2 focused on frontier AI safety and governance.
Chinese national policy and leadership show growing interest in developing large models while balancing risk prevention.
Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and stipulating value alignment of AGI.
Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation.
Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety.
In recent months, Chinese experts have discussed several focused AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.
So clearly there is a discourse about AI safety there that does sometimes extend even as far as the risk of extinction. It’s nowhere near as prominent or dramatic as it has been in the USA, but it’s there.