Great list! Here are some of my beliefs:

19. How difficult is stabilizing the world so we can work on Friendly AI slowly?
Virtually impossible. The people working on AI number in the thousands, and not even governments can stop technological progress. There are probably ways to discourage AI funding or make such work unpopular, but talk of “stabilizing the world” is far beyond what any group of people can do.
6. What can we do to attract more funding, support, and research to x-risk reduction and to the specific sub-problems of successful Singularity navigation?
I think this is the most important question by far.
5. What can we do to raise the “sanity waterline,” and how much will this help?
I really think that raising the “sanity waterline” (the average sanity of the general population) is an enormous task which, while noble, would be a waste of time. We simply don’t have the time to do it. We should instead focus on the people who could plausibly create UFAI by accident: academics in machine learning programs, researchers with access to supercomputers, et cetera.
If anyone disagrees, I would love to hear some evidence against what I’ve said. This stuff is way too complicated for me to be really confident.
I’ll take you up on the disagreement. I find it very unlikely that any form of AI will be developed in the next 40 years. Furthermore, we do want technological progress in machine learning because of the advantages it offers in areas like self-driving cars and assisting doctors in diagnosing and treating illnesses.
And, because of the >40-year timeline, it will most likely be the next generation that makes the breakthrough to AI/FAI. So we can’t target particular people now (although we can focus on those likely to have children who go into AI, which is happening as a result of this site’s existence). This means that raising the overall waterline is probably one of the best approaches, because children who grow up in a more rational culture are more likely to be rational themselves.
“Any form of AI”? You mean superhuman AI?
Sorry, yeah.