If I completely ignore the hype, I don’t know of an argument that convinces me AGI is very unlikely before 2030. The fact that there’s hype, and strong incentives for the hype, isn’t evidence that the balance of non-hype technical knowledge about LLMs points any particular way. Inferring long timelines from the existence of hype is not a valid argument, even if its conclusion happens to be correct. The whole (a)-(g) sequence of events is even more tenuous.
Also, I’m not sure why timelines should matter (outside of long sequences of fortunate events), since AGI will arrive at some point in any case, and a Pause is more robust if it occurs earlier (a global treaty that treats AI datacenters the way uranium enrichment plants are treated, only worse, because the blast radius is the whole world). The closer the world is to AGI when a Pause starts, in available hardware and in accumulated results of computation-heavy experiments, the more difficult and inconvenient it becomes for everyone to make sure AGI isn’t created unilaterally.