I’ll completely grant that this is only a first-order approximation, and that for most (all?) technical work, getting more specific timelines matters a lot. I wanted to make this point because I see many laypeople not quite buying the “AGI in X years” timelines, since X years seems like a short time (for most values of X, for most laypeople). But the moment I switch the phrasing to “computers are ~40 years old; given this rate of progress we’ll almost certainly have AGI in your lifetime,” they become convinced that AI safety is a problem worth worrying about.
Got it. I see the value, and I’ll do likewise. There is a further step of saying “and we need to get moving on it now,” but taking the first step of “this is the most important thing in your world” is a good start.