It’s not obvious to me that Ajeya’s timelines aged worse than Eliezer’s. In 2020, Ajeya’s median estimate for transformative AI was 2050. My guess is that, based on this, her median for “an AI that can, if it wants, kill all humans and run the economy on its own without major disruptions” would have been something like 2056? I might be wrong; people who knew her views better at the time can correct me.
As far as I know, Eliezer never made official timeline predictions, but in 2017 he made an even-odds bet with Bryan Caplan that AI would kill everyone by January 1, 2030. And in December 2022, just after ChatGPT, he tweeted:
> Pouring some cold water on the latest wave of AI hype: I could be wrong, but my guess is that we do *not* get AGI just by scaling ChatGPT, and that it takes *surprisingly* long from here. Parents conceiving today may have a fair chance of their child living to see kindergarten.
I think a child conceived in December 2022 would be born around September 2023, and since US kindergarten typically starts at age five, they would start kindergarten in September 2028 (though I’m not very familiar with the US kindergarten system). Generously interpreting “may have a fair chance” as a median, this implies a late 2028 median for AI killing everyone.
Unfortunately, both of these predictions from Eliezer were made partly as jokes (he said at the time that the bet wasn’t very serious). But I don’t think we should reward people for only making joking predictions instead of 100-page reports, so I think we should probably accept 2028-2030 as Eliezer’s median at the time.
I think if “an AI that can, if it wants, kill all humans and run the economy on its own without major disruptions” comes before 2037, Eliezer’s prediction will fare better; if it comes after that, Ajeya’s prediction will fare better. I’m currently about 55% that we will get such an AI by 2037, so from my current standpoint I consider Eliezer to be mildly ahead, but only very mildly.
I think an important point is that people can be wrong about timelines in both directions. Anthropic’s official public prediction is that they expect a “country of geniuses in a data center” by early 2027. I have heard that Dario previously predicted AGI would come even earlier, by 2024 (though I can’t find any source for this now and would be grateful if someone found one or corrected me if I’m misremembering). Situational Awareness predicts AGI by 2027. The AI safety community’s most successful public output is called AI 2027. These are not fringe figures but some of the most prominent voices in the broader AI safety community. If their timelines turn out to be much too short (as I currently expect), then I think Ajeya’s predictions deserve credit for pushing back against these voices, and not only blame for stating a timeline that was too long.
And I feel it’s not really true that you were just saying “I don’t know” and not implying predictions yourself. You had the 2030 bet with Bryan. You had the tweet about children not living to see kindergarten. You strongly pushed back against the 2050 timelines, but as far as I know the only time you pushed back against the very aggressive timelines was your kindergarten tweet, which still implies 2028 timelines. You are now repeatedly calling people who believed the 2050 timelines total fools, which would, imo, be a very unfair thing to do if AGI arrived after 2037, so I think this implies high confidence on your part that it will come before 2037.
To be clear, I think it’s fine, and often inevitable, to imply things about your timeline beliefs by e.g. what you do and don’t push back against. But I think it’s not fair to claim that you only said “I don’t know”; I think your writing was (perhaps unintentionally?) conveying an implicit belief that an AI capable of destroying humanity would come with a median of 2028-2030. I think this would have been a fine prediction to make, but if AI capable of destroying humanity comes after 2037 (which I think is close to 50-50), then I think your implicit predictions will fare worse than Ajeya’s explicit predictions.