This is a great list, thank you for compiling it. I think it has a major deficit, though: it seems to focus on short-AI-timelines predictions. There are tons of people making long-AI-timelines predictions; why aren't their predictions being recorded? (E.g. people saying things to the effect of “AGI isn’t coming soon,” “Probability of AGI within 3 years <30%,” or “progress will slow significantly soon due to the data bottleneck.”)
Maybe those don’t stick out to me because long timelines seem like the default hypothesis to me, and there are a lot of people locally stating specific, falsifiable short-timelines predictions, so there’s a selection effect. I added Brian Chau and Robin Hanson to the list, though; not sure who else (other than me) has made specific long-timelines predictions and would be good to add. I’d like to add people like Yann LeCun and Andrew Ng if there are specific, falsifiable predictions they’ve made.
Tsvi comes to mind: https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
Added
If AGI happens in, say, 2027, all those long-timelines people will be shown to be wrong, some of them massively so (e.g. I know of multiple distinguished experts and forecasters who put <1% on AGI by 2027, sometimes far less).
If that doesn’t count as a specific or falsifiable prediction they are making, whilst the things you quote above do count, I’m curious what you mean by specific and falsifiable.
Also, I suspect we have some disagreement to be uncovered about this default-hypothesis business. I worry that by recording only one side’s forecasts you introduce some sort of bias.
Do you know if Andrew Ng or Yann LeCun has made a specific prediction that AGI won’t arrive by some date? I couldn’t find one through a quick search. Not sure who else to include.
In his AI Insight Forum statement, Andrew Ng puts 1% on “This rogue AI system gains the ability (perhaps access to nuclear weapons, or skill at manipulating people into using such weapons) to wipe out humanity” in the next 100 years (conditional on a rogue AI system existing that doesn’t go unchecked by other AI systems), and an overall 1 in 10 million on AI causing human extinction in the next 100 years.
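To make the arithmetic explicit (a rough sketch under my own chain-of-conditionals reading; Ng’s statement may carve up the steps differently):

$$\Pr[\text{AI-caused extinction in 100 yrs}] = \underbrace{\Pr[\text{gains ability to wipe out humanity} \mid \text{rogue AI scenario}]}_{\approx 1\%} \times \Pr[\text{rest of the chain}] \approx 10^{-7},$$

which would imply the remaining factors multiply to roughly $10^{-7} / 10^{-2} = 10^{-5}$.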
Thanks, added.
I don’t know. But here’s an example of the sort of thing I’m talking about: “Transformative AGI by 2043 is <1% likely” on LessWrong.
More generally, you can probably find people expressing strong disagreement with, or outright dismissal of, various short-timelines predictions.
Ok, I added this prediction.