I believe it is more likely than not that Eliezer has missed the point I raise in this post out of social naivete rather than willful self-deception.
I find it impossible to believe that the author of Harry Potter and the Methods of Rationality is oblivious to the first impression he creates. However, I can well believe that he imagines it to be a minor handicap which will fade in importance with continued exposure to his brilliance (as was the fictional case with HP). The unacknowledged problem in the non-fictional case, of course, is in maintaining that continued exposure.
I am currently skeptical that the singularity represents existential risk. But having watched Eliezer completely confuse and irritate Robert Wright, and having read half of the “debate” with Hanson, I am quite willing to hypothesize that the explanation of what the singularity is (and why we should be nervous about it) ought to come from anybody but Eliezer. He speaks and writes clearly on many subjects, but not that one.
Perhaps he would communicate more successfully on this topic if he tried a dialog format. But it would have to be one in which his constructed interlocutors are convincing opponents, rather than straw men.
It depends on exactly what you mean by “existential risk”. In my opinion, development will likely produce genetic and phenotypic takeovers in due course, as the bioverse becomes engineered. That will mean no more “wild” humans.
That is something some people seem to wail and wave their hands about, calling it the end of the human race.
The end of earth-originating civilisation also seems highly unlikely to me, though even a small chance of it is significant enough to discuss.
Eliezer’s main case for that appears to be at http://lesswrong.com/lw/y3/value_is_fragile/
I think that document is incoherent.