Strategic High Skill Immigration seems to be a very high-quality and relevant post that has been overlooked by most people here, perhaps due to its length and rather obscure title. (Before I strongly upvoted it, it had just 12 points and 4 voters, presumably including the author.) If a moderator sees this, please consider curating it so more people will read it. And for everyone else, I suggest reading at least part 1, which explains why global coordination on AI safety and other x-risks is hard and proposes a creative solution for making it a little easier. Part 2 is much longer, working through many arguments, counterarguments, and counter-counterarguments, and can perhaps be skipped unless you have a particular interest in this area.
This was an interesting post. However, given Google’s rocky history with DARPA, I’m not convinced a high concentration of AI researchers in the US would give the US government a lead in AI.
The author suggests that even just slowing down research into risky technologies in other countries would be worthwhile:
The lack of acceleration of science following the high skill immigration shock to the US is not necessarily bad news: it may also imply that future shocks won’t accelerate risky technologies, that research funding is a more fundamental constraint, or that other sectors of the economy are better at absorbing high skill immigrants.
Further emigration likely decelerated progress for potentially risky technologies in the former USSR, which is a net reduction of risk: there is less incentive for the US government to engage in an arms race if there is no one to race.