Might as well finish out this forecasting exercise...
If we assume compute follows the current trend of peak AI project compute doubling every 3.4 months, then 2.2e6× more compute would be log2(2.2e6) ≈ 21 doublings away, or 21 × (3.4/12) ≈ 6 years: roughly 2026–2027. (Seems a little unlikely.)
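The doubling arithmetic can be checked in a few lines (a sketch; the 2.2e6× compute gap and 3.4-month doubling time are the figures assumed above, with GPT-3's 2020 training as the baseline):

```python
import math

# Assumed inputs from the text: compute gap of 2.2e6x and a
# peak-AI-project compute doubling time of 3.4 months.
compute_gap = 2.2e6
doubling_months = 3.4

doublings = math.log2(compute_gap)        # ~21.1 doublings
years = doublings * doubling_months / 12  # ~6.0 years
print(f"{doublings:.1f} doublings, {years:.1f} years -> ~{2020 + years:.0f}")
```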
Going the other direction, Hernandez & Brown 2020's estimate is that, net of hardware & algorithmic progress, the cost of a fixed level of performance halves every 16 months; so if GPT-3 cost ~$5m in early 2020, it'll cost ~$2.5m around mid-2021, and so on. By the same token, a GPT-human requiring 2.2e6× more compute would presumably cost on the order of $10 trillion in 2020, but after log2($10t / $1b) ≈ 13 halvings (~18 years) would cost $1b, around 2038.
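The cost-halving extrapolation works out the same way (a sketch; the ~$10 trillion 2020 cost, $1b target, and 16-month halving time are the figures assumed above):

```python
import math

# Assumed inputs from the text: ~$10T to train a GPT-human in 2020,
# a $1b affordability target, and the Hernandez & Brown 2020 estimate
# that the cost of fixed performance halves every 16 months.
start_cost = 10e12   # dollars, 2020
target_cost = 1e9    # dollars
halving_months = 16

halvings = math.log2(start_cost / target_cost)  # ~13.3 halvings
years = halvings * halving_months / 12          # ~17.7 years
print(f"{halvings:.1f} halvings, {years:.1f} years -> ~{2020 + years:.0f}")
```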
Metaculus currently seems to fall roughly in between those two dates, incidentally.