Define “massively wrong”. My personal opinions (stated without supporting arguments, for brevity):
- Building AGI from scratch is likely infeasible (although we don’t know nearly enough to discard the risk altogether).
- Mind uploading is feasible (and morally desirable), but it will trigger intelligence growth at a gradual pace rather than a “foom”.
- “Correct” morality has low Kolmogorov complexity and is compatible with radical forms of transhumanism.
- Both the infeasibility of “classical” AGI and the feasibility of mind uploading should be scientifically provable.
So: my position is very different from MIRI’s. Nevertheless, I think LessWrong is very interesting and useful (in particular, I’m all for promoting rationality), and MIRI is doing very interesting and useful research. Does that count as “massively wrong”?