Here “doctrine” is an applause light; boo, doctrines. I wrote a report, you posted your timeline, they have a doctrine.
All involved, including Yudkowsky, understand that 2050 was a median estimate, not a point estimate. Yudkowsky himself wrote that it has “very wide credible intervals around both sides”. Looking at the report’s distribution for the year when the FLOP to train a transformative model becomes affordable, I’d summarize it as:
A 50% chance that it will be affordable by 2053, rising from 10% by 2032 to 78% by 2100. The most likely years are 2038-2045, each with >2% probability.
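To make the point concrete that a median is just one quantile of a wide distribution, here is a minimal sketch (not the report’s actual model) that treats the three percentiles quoted above as points on a CDF and linearly interpolates between them to read off other quantiles:

```python
import numpy as np

# Percentile points quoted in the summary above (assumed CDF anchors,
# not the report's full distribution).
years = np.array([2032.0, 2053.0, 2100.0])  # year by which it's affordable
cdf   = np.array([0.10,   0.50,   0.78])    # cumulative probability

def quantile(p):
    """Year at which cumulative probability reaches p (linear interpolation)."""
    return float(np.interp(p, cdf, years))

print(quantile(0.50))  # 2053.0 -- the "median" is one point, not a prediction
print(quantile(0.25))  # ~2040  -- a quarter of the mass falls well before it
```

The interpolation is only illustrative, but it shows why “2050” no more commits the report to 2050 than a life table commits an insurer to a specific year of death.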
A comparison: a 52-year-old US woman in 1990 had a median life expectancy of ~30 more years, i.e. living to 2020. 5% of such women died on or before age 67 (in 2005). Would anyone describe these life expectancy figures, given to a 52-year-old woman in 1990, as the “Aetna doctrine of death in 2020”?
Yudkowsky seems confused about OpenPhil’s exact past position. Relevant links:
Draft report on AI Timelines—Cotra 2020-09-18
Biology-Inspired Timelines—The Trick that Never Works—Yudkowsky 2021-12-01
Reply to Eliezer on Biological Anchors—Karnofsky 2021-12-23