Though Ege Erdil has demonstrated that it is possible to construct a positive case for longer timelines, I think your request shifts the burden of proof a bit. Of course it's easier to make a lot of nice plots of benchmark performance, compute, etc., and harder to show a convincing plot that proves we don't get AGI soon. The graph of the number of conceptual insights produced by LLMs seems to be a flat line at zero, but it would just feel silly to put that in a post. There have been many years of AI progress without reaching AGI, and that's the default projection for the next few years. The role for a skeptic of very short timelines is to explain why the positive arguments for them don't work.
You start by saying the post shifted the burden of proof, but you conclude by asserting that the burden should fall on short timelines because, on average, things don't happen. This doesn't seem logically valid. Weak arguments for short timelines don't mean we can expect long timelines if the arguments for those are weak too, which they seem to be. We probably all agree that AGI is going to happen; the question is when.
If you just mean that two years seems unlikely in the absence of strong arguments, sure. But three years and up seems quite plausible.
Arguments are weak on all sides. This leads me to think that we simply don’t know. In that case, we had better be prepared for all scenarios.
Actually, I think it is valid for the burden to fall on short timelines because "on average things don't happen." Mainly because you can make the reference class more specific and the statement still holds: as I said, we have been trying to develop AGI for a long time (and there have been at least a couple of occasions when we drastically overestimated how soon it would arrive). 2-3 years is a very short time, which makes it a very strong claim.
Burden of proof should follow value of information, not plausibility. In particular, the most profitable arguments to pursue are the ones you understand less about, which are often arguments in favor of things you disbelieve (since you'd already be familiar with the arguments that have previously convinced you). So if someone wants to convince you of something you already believe, the burden of proof is on them, but not if they want to convince you of something you disbelieve and haven't gotten around to investigating yet.