Were there ever any references identifying the Scary Idea as an official SIAI belief?
I think that—if they comment at all—they would come back with something like:
OK—so you don’t think that unconstrained machine intelligence is “highly likely” to automatically DESTROY ALL LIFE AS WE KNOW IT. So: what do you think the chances of that happening are?!?
Does Eliezer believe that working on friendly AI and supporting friendly AI research is the most important and most rational way to positively influence the future of humanity? If he thinks so, then is it reasonable to suspect that his rationale for starting to write on matters of rationality was to plead his case for friendly AI research and convince other people that it is indeed the most effective way to help humankind? If not, what was his reason to start blogging on Overcoming Bias and Less Wrong? Why has he spent so much time helping people to become less wrong rather than working directly on friendly AI? How can you be less wrong and still doubt that you should support friendly AI research?
I still suspect that everything he does is a means to an end. I’m also of the opinion that if one reads all of Less Wrong and afterwards (assuming one wants to survive and benefit humanity) is still unable to conclude that the best way to do so is by supporting the SIAI, then either one did not understand due to a lack of intelligence or Less Wrong failed to convey its most important message. Therefore you should listen to the people who have read Less Wrong and still disagree. You should also try to reach the people who haven’t read Less Wrong but should, because they are in a position that makes it necessary for them to understand the issues in question.
Well, I tend to think that working on and supporting machine intelligence research is probably the most important way to positively influence the future of civilisation. The issue of what we want the machines to do is part of the project.
So, such beliefs don’t seem particularly “far out”—to me.
FWIW, Yudkowsky describes his motivation in writing about rationality here:
http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/