That Eliezer Yudkowsky is the right and only person (respectively, that the SIAI is the right and only institution) who should be leading the work to soften the above.
I don’t believe that is necessarily true, just that no one else is doing it. I think other teams working specifically on FAI would be a good thing, provided they were competent enough not to be dangerous.
Likewise, Less Wrong (then Overcoming Bias) is just the only place I’ve found that actually looked at the morality problem in a non-obviously-wrong way. When I arrived I had a different view on morality than EY, but I was very happy to see another group of people at least working on the problem.
Also note that you only need to believe in the likelihood of UFAI, or nanotech, or other existential threats in order to want FAI. I’d have to step back a few feet to wrap my head around considering it infeasible at this point.
That Eliezer Yudkowsky is the right and only person (respectively, that the SIAI is the right and only institution) who should be leading the work to soften the above.
That’s just a weird claim. When Richard Posner or David Chalmers writes in the area, SIAI folk cheer, not boo. And I don’t know anyone at SIAI who thinks that the Future of Humanity Institute’s work in the area isn’t a tremendously good thing.
Likewise, Less Wrong (then Overcoming Bias) is just the only place I’ve found that actually looked at the morality problem in a non-obviously-wrong way.
Have you looked into the philosophical literature?
I recommend http://atheistethicist.blogspot.com/ for this. (See the sidebar for links to an explanation of his metaethical theory.)