I roughly agree with Luke—that would be the director of MIRI—in placing the median close to 2070.
What about the second half of the question, why?
Seriously?
Experience tells us to discount predictions of imminent AGI, to the point where only the strongest of reasons can overcome this. If AIXI represented a large enough increase in understanding of what we’re even talking about, that could be part of a strong argument. But as I said in the great-grandparent, it doesn’t.
Past predictive accuracy of expert opinion on the subject of AI superintelligence tells us nothing about what to infer from current predictions. Whether superintelligent AI actually arrives tomorrow, or 50 years from now, or 150 years from now, there would be no discernible difference in present expert opinion. On these sorts of subjects expert opinion is totally uncorrelated with reality. So no, experience tells us nothing about predictions of imminent or non-imminent AGI. We can thank our own Stuart Armstrong for this contribution.
But hey, let’s take 2070 at face value. That’d be great news! We could completely forget about the existential threat posed by unfriendly AI. After all, it’d be decades after the point at which even pessimistic estimates say whole-brain emulation[1] will enable the first uploaded human intelligences. And a decade or so beyond atomically precise manufacturing[2] giving us the tools for in vivo[3] intelligence enhancement. By 2070 we’d already be living in a world of human-derived superintelligences, so thankfully we needn’t fret over our own biological limitations preventing us from keeping pace with superintelligent AI.
Or is that not the future you imagined?
[1] http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf

[2] https://www.foresight.org/roadmaps/Nanotech_Roadmap_2007_main.pdf

[3] http://www.nanomedicine.com/