I think this is right at the broad level. But once you’ve accepted that getting a good outcome from AGI is the most important thing to work on, timelines matter a lot again, because they determine which direction of work is most effective. Figuring out exactly what kind of AGI we have to align, and how long we have to do it, is pretty crucial for having the best possible alignment work done by the time we hit takeover-capable AGI.
I’ll completely grant that this is only a first-order approximation, and that for most (all?) technical work, more specific timelines matter a lot. I wanted to make this point because I see many laypeople not quite buying the “AGI in X years” timelines, since X years seems like a short time (for most values of X, for most laypeople). But the moment I switch the phrasing to “computers are ~40 years old; given this rate of progress, we’ll almost certainly have AGI in your lifetime,” they become convinced that AI safety is a problem worth worrying about.
Got it. I see the value, and I’ll do likewise. There is another step of saying “and we need to get moving on it now,” but taking the first step of “this is the most important thing in your world” is a good start.