If AGI happens in, say, 2027, all those long-timelines people will be shown to be wrong, some of them massively so (e.g. I know of multiple distinguished experts and forecasters who put <1% on AGI by 2027, sometimes much less).
If that doesn’t count as a specific or falsifiable prediction they are making, whilst the things you quote above do count, I’m curious what you mean by specific and falsifiable.
Also, I suspect we have some disagreement to uncover about this default-hypothesis business. I worry that by picking forecasts from only one side you introduce a selection bias.
Do you know if Andrew Ng or Yann LeCun has made a specific prediction that AGI won’t arrive by some date? I couldn’t find one through a quick search, and I’m not sure which others to include.
In his AI Insight Forum statement, Andrew Ng puts 1% on “This rogue AI system gains the ability (perhaps access to nuclear weapons, or skill at manipulating people into using such weapons) to wipe out humanity” in the next 100 years (conditional on there existing a rogue AI system that doesn’t go unchecked by other AI systems), and an overall 1-in-10-million chance of AI causing extinction in the next 100 years.
Thanks, added.
I don’t know. But here’s an example of the sort of thing I’m talking about: Transformative AGI by 2043 is <1% likely — LessWrong
More generally, you can probably find people expressing strong disagreement with, or outright dismissal of, various short-timelines predictions.
Ok, I added this prediction.