Well, I tend to think that working on and supporting machine intelligence research is probably the most important way to positively influence the future of civilisation. The issue of what we want the machines to do is a part of the project.
So, such beliefs don’t seem particularly “far out”—to me.
FWIW, Yudkowsky describes his motivation in writing about rationality here:
http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/