There doesn’t seem to be a consensus that ASI will be created in the next 5-10 years. This means that current technology leaders and their promises may be forgotten.
Does anyone else remember Ben Goertzel and Novamente? Or Hugo de Garis?
True, that can definitely happen, but consider:
1) the median and average timeline estimates have been getting shorter, not longer, by most measures,
and
2) no previous iteration of such claims was credible enough to attract hundreds of billions of dollars in funding, or meaningfully impact politics and geopolitics, or shift the global near-consensus that has held back nuclear power for generations. This suggests a difference in the strength of evidence for the claims in question.
Also 3) When adopted as a general principle of thought, this approach to reasoning about highly impactful emerging technologies is right in almost every case, except the ones that matter. There were many light bulbs before Edison, and many steels before Bessemer, but those things happened anyway, and each previous failure made the next attempt more likely to succeed, not less.
I agree. But now people write so often about short timelines that it seems appropriate to recall the possible reason for the uncertainty.
While history suggests we should be skeptical, current AI models produce real results of economic value, not just interesting demos. This suggests we should take more seriously the possibility that they will produce TAI, since they are more clearly on that path and are already having significant transformative effects on the world.