Goertzel: I could and will list the errors I see in his arguments (if nobody there has done so first). For now I’ll just say his response to claim #2 seems to conflate humans and AIs. But unless I’ve missed something big, which certainly seems possible, he didn’t make his decision based on those arguments. They don’t seem good enough on their face to convince anyone. For example, I don’t think he could really believe that he and other researchers would unconsciously restrict the AI’s movement in the space of possible minds to the safe area(s), but if we reject that possibility some version of #4 seems to follow logically from 1 and 2.
Egan: don’t know. What I’ve seen looks unimpressive, though certainly he has reason to doubt ‘transhumanist’ predictions for the near future. (SIAI instead seems to assume that if humans can produce AGI, then either we’ll do so eventually or we’ll die out first. Also, that we could produce artificial X-maximizing intelligence more easily than we can produce artificial nearly-any-other-human-trait, which seems likely based on the tool I use to write this and the history of said tool.) Do you have a particular statement or implied statement of his in mind?
Hanson: maybe I shouldn’t point any of this out, but EY started by pursuing a Heinlein Hero quest to save the world through his own rationality. He then found himself compelled to reinvent democracy and regulation (albeit in a form closely tailored to the case at hand and without any strict logical implications for normal politics). His conservative/libertarian economist friend called these new views wrongheaded despite verbally agreeing with him that EY should act on those views. Said friend also posted a short essay about “heritage” that allowed him to paint those who disagreed with his particular libertarian vision as egg-headed elitists.
Where did you get those quotes? References?
He wasn’t quoting Goertzel, Egan, and Hanson—though his formatting made it look like he was. He was commenting on your claim that these three “don’t see it”.
Whoops, I’m sorry, never mind.
Sorry, I don’t know what quotes you mean. You can find a link to the “heritage” post in the wiki-compilation of the debate. Though perhaps you meant to reply to someone else?
Never mind, I just skimmed over it and thought you were quoting someone. If you delete your comment I’ll delete this one. I’ll read your original comment again now.