(Obviously I’m biased here by being friends with Ajeya.) This is only tangentially related to the main point of the post, but I think you’re really overstating how many Bayes points you get against Ajeya’s timelines report. Ajeya gave 15% to AGI before 2036, with little of that in the first few years after her report; maybe she’d have said 10% between 2025 and 2036.
I don’t think you’ve ever made concrete predictions publicly (which makes me think it’s worse behavior for you to criticize people for their predictions), but I don’t think there are that many groups who would have put wildly higher probability on AGI in this particular time period. (I think some of the short-timelines people at the time put substantial mass on AGI arriving by now, which reduces their performance.) Maybe some of them would have said 40%? If we assume AGI by then, that’s a couple bits of better performance, but I don’t think it’s massive outperformance. (And I still think it’s plausible that AGI isn’t developed by 2036!)
In general, I think that disagreements on AI timelines often seem more extreme when you summarize people’s timelines by median timeline rather than by their probability on AGI by a particular time.
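To put a number on the “couple bits” claim above, here is a minimal sketch (Python; it only covers the branch where AGI does arrive in the window, and the 40% forecaster is hypothetical):

```python
import math

# Log-score comparison, conditional on AGI actually arriving in the window.
# 0.15 and 0.10 are the figures attributed to Ajeya above; 0.40 is the
# hypothetical short-timelines forecaster.
p_report = 0.15   # AGI before 2036
p_window = 0.10   # rough guess for the 2025-2036 window
p_short = 0.40

def bits_better(p_hi: float, p_lo: float) -> float:
    """How many more bits of log2 score the higher forecast earns if AGI arrives."""
    return math.log2(p_hi / p_lo)

print(bits_better(p_short, p_report))  # ~1.4 bits
print(bits_better(p_short, p_window))  # ~2.0 bits
```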
Yudkowsky seems confused about OpenPhil’s exact past position. Relevant links:
Draft report on AI Timelines—Cotra 2020-09-18
Biology-Inspired AGI Timelines: The Trick That Never Works—Yudkowsky 2021-12-01
Reply to Eliezer on Biological Anchors—Karnofsky 2021-12-23
Here “doctrine” is an applause light; boo, doctrines. I wrote a report, you posted your timeline, they have a doctrine.
All involved, including Yudkowsky, understand that 2050 was a median estimate, not a point estimate. Yudkowsky wrote that it has “very wide credible intervals around both sides”. Looking at the report’s distribution for when the FLOP to train a transformative model becomes affordable, I’d summarize it as:
A 50% chance that it will be affordable by 2053, rising from 10% by 2032 to 78% by 2100. The most likely years are 2038-2045, each of which gets >2% probability.
A comparison: a 52yo US female in 1990 had a median life expectancy of ~30 more years, i.e. living to 2020. 5% of such women died on or before age 67 (2005). Would anyone describe these life expectancy numbers, given to a 52yo woman in 1990, as the “Aetna doctrine of death in 2020”?
Further detail on this: Cotra has more recently updated at least 5x against her original 2020 model in the direction of faster timelines.
Greenblatt writes:
Cotra replies:
My timelines are now roughly similar on the object level (maybe a year slower for 25th and 1-2 years slower for 50th)
This means 25th percentile for 2028 and 50th percentile for 2031-2.
The original 2020 model assigns 5.23% by 2028, 9.13% by 2031, and 10.64% by 2032. In each case the updated probability is a factor of ~5x higher.
However, the original model predicted the date by which it would be affordable to train a transformative AI model. This is a leading variable for such a model actually being built and trained, which pushes the date back by some further number of years, so view the 5x as bounding, not pinpointing, the AI timelines update Cotra has made.
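For concreteness, the ratio arithmetic behind the ~5x figure, just restating the numbers above (Python):

```python
# Updated percentiles from the exchange above: 25% by 2028, 50% by 2031-2,
# paired with the original 2020 model's probabilities for the same years.
comparisons = [
    ("by 2028", 0.25, 0.0523),
    ("by 2031", 0.50, 0.0913),
    ("by 2032", 0.50, 0.1064),
]
for label, updated, original in comparisons:
    print(f"{label}: {updated / original:.1f}x")  # ~4.8x, ~5.5x, ~4.7x
```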
Note that the capability milestone forecast in the linked shortform is substantially weaker than the notion of transformative AI in the 2020 model. (Transformative AI was defined as AI with an effect at least as large as the industrial revolution.)
I don’t expect this adds many years; for me it adds ~2 years to my median.
(Note that my median for time from 10x to this milestone is lower than 2 years, but median to Y isn’t equal to median to X + median from X to Y.)
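A quick simulation of the point that medians don’t add (Python; a sketch with made-up right-skewed distributions, not anyone’s actual forecast):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical right-skewed waiting times in years, purely illustrative.
t_now_to_x = rng.exponential(scale=3.0, size=n)   # now -> milestone X
t_x_to_y = rng.exponential(scale=2.0, size=n)     # milestone X -> milestone Y

print(np.median(t_now_to_x) + np.median(t_x_to_y))  # ~3.5 (sum of medians)
print(np.median(t_now_to_x + t_x_to_y))             # ~4.2 (median of the sum)
```

With right-skewed distributions the median of the total time lands later than the sum of the medians, which is why a sub-2-year median gap can still add ~2 years to the overall median.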
Just because I was curious, here is the most relevant chart from the report:
This is not a direct probability estimate (since it’s about probability of affordability), but it’s probably within a factor of 2. Looks like the estimate by 2030 was 7.72% and the estimate by 2036 was 17.36%.
Thanks. (Also note that the model isn’t the same as her overall beliefs at the time, though they were similar at the 15th and 50th percentiles.)
There was a specific bet, which Yudkowsky is likely about to win. https://www.lesswrong.com/posts/sWLLdG6DWJEy3CH7n/imo-challenge-bet-with-eliezer
The IMO Challenge Bet was on a related topic, but not directly comparable to Bio Anchors. From MIRI’s 2017 Updates and Strategy:
There’s no consensus among MIRI researchers on how long timelines are, and our aggregated estimate puts medium-to-high probability on scenarios in which the research community hasn’t developed AGI by, e.g., 2035. On average, however, research staff now assign moderately higher probability to AGI’s being developed before 2035 than we did a year or two ago.
I don’t think the individual estimates that made up the aggregate were ever published. Perhaps someone at MIRI can help us out; it would help build a forecasting track record for those involved.
For Yudkowsky in particular, I have a small collection of sources to hand. In Biology-Inspired AGI Timelines (2021-12-01), he wrote:
But I suppose I cannot but acknowledge that my outward behavior seems to reveal a distribution whose median seems to fall well before 2050.
On Twitter (2022-12-02):
I could be wrong, but my guess is that we do not get AGI just by scaling ChatGPT, and that it takes surprisingly long from here. Parents conceiving today may have a fair chance of their child living to see kindergarten.
Also, in Shut it all down (March 2023):
When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
Yudkowsky also has a track record betting on Manifold that AI will wipe out humanity by 2030, at up to 40%.
Putting these together:
2021: median well before 2050
2022: “fair chance” when a 2023 baby goes to kindergarten (Sep 2028 or 2029)
2023: before a young child grows up (about 2035)
Manifold: up to 40% P(doom by 2030)
So a median of 2029, with very wide credible intervals around both sides. This is just an estimate based on his outward behavior.
Would Yudkowsky describe this as “Yudkowsky’s doctrine of AGI in 2029”?
Paul is not Ajeya, and also Eliezer only gets one bit from this win, which I think is insufficient grounds for behaving like such an asshole.
Buying at 12% and selling at 84% gets you 2.8 bits.
Edit: Hmm, that’s if he stakes all his cred; by Kelly he only stakes some of it, so you’re right, it probably comes out to about 1 bit.
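Rough arithmetic behind both numbers (Python; the belief q is a made-up assumption, since we don’t know what probability he was actually betting at):

```python
import math

buy, sell = 0.12, 0.84

# Staking everything: wealth multiplies by sell/buy when the position pays off.
print(math.log2(sell / buy))  # ~2.8 bits

# Kelly staking with an assumed belief q that the market resolves YES.
q = 0.30                              # assumption, not a known figure
f = (q - buy) / (1 - buy)             # Kelly fraction for buying YES at price `buy`
wealth = 1 - f + f * (sell / buy)     # bankroll multiplier after selling at `sell`
print(f, math.log2(wealth))           # stakes ~20% of bankroll, ~1.2 bits
```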