I think it will help if you can just be clear on what you want for yourself, China, and the world. You’re worried about runaway AI, but is the answer (1) a licensing regime that makes very advanced AI simply illegal, or (2) theoretical and practical progress in “alignment” that can make even very advanced AI safe? Or do you just want there to be an intellectual culture that acknowledges the problem and paves the way for all types of solutions to be pursued? If you can be clear in your own mind about what your opinions are, then you can forthrightly express them, even as you develop a career in AI that might be more conventional to begin with.
Concerning the rivalry between China and America, (1) and (2) have different implications. If the answer is (1), then the strategy for China is to build up its own AI capabilities within the threshold of safety, while allying with those forces in America who also want to put a safe ceiling on American AI capabilities. If the answer is (2), then the priority is for all the most advanced research efforts to also have the most advanced safety methodologies, which is potentially an even more cooperative situation.
Concerning the association in the West between AI safety and effective altruism, again, I think it helps to be clear in your own mind. If you perceive clearly why you reject EA but still see merit in AI safety, surely you can explain the logic of that position to someone else too.
I’m saying that funding technical alignment in China is important for two reasons: firstly, it helps build a community of people interested in the field, which helps sway elite opinion and ultimately policy. Secondly, it can contribute to overall progress in AI alignment. In my opinion, the former is more important and time-critical, as other efforts at community building have not been very successful thus far, and the process of fieldbuilding → elite opinion shift takes time.
I’m planning on making a detailed post about why EA/altruism-in-general is a bad match for China, with a lot of citations.
Some big names from China signed off on that recent one-sentence statement about extinction risk due to AI… Does that affect your sense of how receptive the Chinese establishment is to AI safety as a field of research?