AI Safety in China: Part 2

I’ve given things a lot more thought and wanted to write up an edited summary of my views on AI safety in China.

  1. AI safety is almost nonexistent in China. The only toehold it has is rooted in the expat EA community, which is almost exclusively doing field building. This hasn’t been effective so far, and we need to switch tactics. My best guess is that 10-20 people total hold AI safety jobs in China; most of them are expats, and most of them are EAs.

  2. EA, for various cultural reasons, is a toxic brand in China. It’s not any single component of EA, but the idea of altruism itself. Ask anyone who’s lived in China for a few years and they will understand where I’m coming from. I think the best way forward for AI safety in China is to disassociate from EA. Rationality is more easily accepted, but spreading rationality-adjacent ideas is not the most effective way to address AI safety in China.

  3. Blue-sky research, possibly in something legible like interpretability, is the most promising route I can think of for actually building the AI safety field in China. The surest way to make a field high-status is to overpay researchers and have them tell their friends.

  4. Money talks. Top-tier talent is often willing to jump ship to startups for the right compensation. I might be biased here, since I’ve only worked at smaller companies. But if I had a large bag of money, I could probably leverage familial academic and business connections to hire star AI researchers.

  5. The Chinese government makes its decisions based on expert opinion and ideas from thought leaders in a particular field. The Chinese elites think like engineers and tend to take ideas seriously. AI safety isn’t considered as weird as it is in the West.

  6. Due to AI safety’s current lack of penetration in the Chinese sphere, I think any near-term attempts at an AI restriction treaty are unlikely to succeed—we don’t think it’s a real issue yet. Imagine if the Saudi King called the US President and tried to negotiate social media restrictions to prevent the Abrahamic End Times. That’s where we’re at. Then again, maybe someone in the Politburo is willing to take AI at least as seriously as Glenn Beck. It’s hard to tell.

  7. China doesn’t think like the West. Because of the Century of Humiliation, we are not going to accept being second best in a world-changing technology just as we become the largest economy in the world; in the long run, our AI policy will not settle for second place. Any AI-restriction treaty that China will accept requires not just Chinese parity in AI, but Chinese superiority. Major tradeoffs, like the US coercing Taiwan into reunification, may be required for China to sign on.

  8. Then again, Beijing is hard to predict. It may agree to an AI disarmament treaty in six months, or it might confiscate private GPUs in a mass-mobilization effort, spending billions to build the next LLM. It might do both.

  9. There are vague purity tests in EA about hiring, for AI safety research, only people who will never, ever work on AI capabilities. This is extremely repugnant to Chinese researchers.

On a personal level, I want to enter the field of AI safety immediately, and I think I would be more useful building the field in China than working in the US. However, I have only a single semester of graduate studies so far, and my application to move from non-degree-seeking to degree-seeking student status was rejected. I will probably take a few online ML courses over the next year before I feel ready to actually request funding, but I’m open to suggestions.

Again, there are very few (<10) people working on technical alignment in China right now, and I feel a bit lost. Any advice is welcome.