We should expect a significant chance of very short (2-5 year) timelines because we don’t have good estimates of timelines.
We are trying to estimate an ETA with good estimates of our position and velocity, but without a well-known destination.
A good estimate of the end point for timelines would require a good gears-level model of AGI. We don’t have that.
The rational thing to do is admit that we have very broad uncertainties, and make plans for different possible timelines. I fear we’re mostly just hoping timelines aren’t really short.
This argument is separate from, and I think stronger than, my other answer giving specific reasons to find short timelines plausible. Short timelines are plausible as a baseline. We’d probably all agree that LLMs are doing a lot of what humans do (if you don’t, see my answer here, and in slightly different terms my response to Thane Ruthenis’ more recent Bear Case for AI Progress). My point is that “most of what humans do” is highly debatable, and we should not be reasoning with point estimates.
No. 2 is much more important than academic ML researchers, who make up the majority of the survey respondents. When someone delivers a product, is the only one building it, and tells you X, you should believe X unless there is a very strong argument to the contrary, and there just isn’t.