Edit: It turns out I misunderstood Greg Egan, and probably Eliezer Yudkowsky. What I thought was Egan’s position is Aaronson’s, unless I misunderstood him too.
Paraphrase of Greg Egan’s position (if I and XiXiDu understand correctly): “Given enough time, humans can understand anything. In practice we still get squashed by AIs, since they’re much faster, but slow them down and we’re equals.”
Paraphrase of Eliezer Yudkowsky’s position (same disclaimer): “There are things that humans simply cannot understand, ever, no matter how long it takes, but that other minds can understand.” (I’m not sure what happens if you brute-force insightspace.)
I think your impressions are at least implicitly inaccurate, unless your quote marks are actually indicating quotes I haven’t seen. (If not, perhaps you should paraphrase in a way that doesn’t look like direct quotation?) As far as I can tell, Greg Egan thinks that AIs are not a problem even considering (and perhaps dismissing as impossible?) their speed advantage. So, practically speaking, he thinks this uFAI alarmism is wrong and maybe contemptible, again as far as I can tell. Eliezer’s impression might be that there are things humans can never understand, but if so, that’s probably because the word ‘human’ typically refers to a structure that is defined in many ways by its boundedness. That is, maybe a human could follow a superintelligent argument if the human were upgraded with a Jupiter brain, but calling such a human a human might be stretching definitions. But maybe Eliezer does in fact have deeper objections; I’m not sure.
Arguments about the human mindspace in toto are silly at this juncture in our understanding.