My Assessment of the Chinese AI Safety Community
I’ve heard people express some optimism about this AI guideline from China. They think it means Beijing is willing to participate in an AI disarmament treaty out of concern over AI risk. Eliezer noted that China is where the US was a decade ago with regard to AI safety awareness, and expressed genuine hope that his idea of an AI pause could happen with Chinese buy-in.
I also note that no one expressing these views understands China well. This is a PR statement—a list of feel-good declarations that Beijing publishes after any international event. No one in China is talking about it. They’re talking about how much Baidu’s LLM sucks compared to ChatGPT. I think most arguments for why this statement is meaningful rest fundamentally on ignorance: “I don’t know how Beijing operates or thinks, so maybe they agree with my stance on AI risk!”
Remember that these are regulatory guidelines. Even if they all become law and are strictly enforced, they are simply regulations on AI data usage and training—not a signal of willingness to enter an AI-reduction treaty. It is far more likely that Beijing sees near-term AI as a potential threat to stability that needs to be addressed with regulation. A domestic regulatory framework for nuclear power is not a strong signal of willingness to engage in nuclear arms reduction.
Maybe it is true that AI risk awareness in China is where it was in the US in 2004. But the US’s 2004 state was also similar to its 1954 state, so the comparison might not mean much. And we are not Americans. Weird ideas are penalized far more harshly here. Do you really think a scientist is going to walk up to his friend from the Politburo and say, “Hey, I know AI is a central priority of ours, but there are a few fringe scientists in the US asking for treaties limiting AI, right as they are trying their hardest to cripple our own AI development. Yes, I believe they are acting in good faith—they’re even promising not to widen the AI gap they currently have with us!” Well, China isn’t in this race for parity or to be second best. China wants to win. But that’s for another post.
Remember that Chinese scientists are used to interfacing with their Western counterparts and know to say the right words—“diversity”, “inclusion”, “no conflict of interest”—that it takes to get papers published. Just because someone at Beida makes a statement in one of their papers doesn’t mean the intelligentsia is taking this seriously. I’ve looked through the EA/Rationalist/AI Safety forums in China, and they’re mostly populated by expats or people physically outside of China. Most posts are in English, and they’re just repeating or translating Western AI Safety concepts. One “moonshot idea” I saw brought up was getting Yudkowsky’s Harry Potter fanfiction translated into Chinese (please never, ever do this). The only significant AI safety group is Anyuan (安远), and they’re only working on field-building. There is only one group doing technical alignment work in China; its founder was paying for everything out of pocket and was unable to navigate Western non-profit funding. I’ve still not figured out why he wasn’t getting funding from Chinese EA people (my theory is that each side assumed that if funding were needed, the other side would have already reached out).
You can’t just hope an entire field into being in China. Chinese EAs have been doing field-building for the past 5+ years, and I see no field. If things keep on this trajectory, it will be the same in 5 more years. The main reason I could find is the lack of interfaces: people who can navigate both the Western EA sphere and the Chinese technical sphere. In many ways, the very concept of EA is foreign and repulsive to the Chinese mindset—I’ve heard Chinese people describe an American’s reason for going to college (wanting to change the world) as childishly naive and utterly impractical. This is a very common view here, and I think it makes approaching alignment from an altruistic perspective doomed to fail. However, there are many bright Chinese students, and recently laid-off researchers, who are eager to get into the “next big thing”—especially on the ground floor. Maybe we can work with that.
I mostly made this post because I want to brainstorm possible ideas and solutions, so please comment if you have any insights.
Edit: I would really appreciate it if someone could get me on a podcast to discuss these ideas.