I’m saying that funding technical alignment in China is important for two reasons: first, it helps build a community of people interested in the field, which helps sway elite opinion and ultimately policy. Second, it can contribute to overall progress in AI alignment. In my opinion, the former is more important and time-critical, since other community-building efforts have not been very successful thus far, and the process of field-building → elite opinion shift takes time.
I’m planning on making a detailed post about why EA/altruism-in-general is a bad match for China, with a lot of citations.
Some big names from China signed off on that recent one-sentence statement about extinction risk from AI… Does that affect your sense of how receptive the Chinese establishment is to AI safety as a field of research?