Note: I’m writing every day in November; see my blog for disclaimers.
In the discussion of AI safety and the existential risk that ASI poses to humanity, I think timelines aren’t the right framing. Or at least, they often distract from the critical point: it doesn’t matter whether ASI arrives in 5 years’ time or in 20 years’ time, it only matters that it arrives during your lifetime[1]. The risks from ASI are completely independent of whether it arrives during this hype cycle of AI, or whether there’s another AI winter, progress stalls for 10 years, and ASI is built only after that winter has passed. If you are convinced that ASI is a catastrophic global risk to humanity, the timelines are largely inconsequential; the only things that matter are that 1. we have no idea how to make something smarter than ourselves without it also being an existential threat, and 2. we can start making progress in this field of research today.
So ultimately, I’m uncertain about whether we’re getting ASI in 2 years or 20 or 40. But it seems almost certain that we’ll be able to build ASI within my lifetime[2]. And if that’s the case, nothing else really matters besides making sure that humanity shares equally in the benefits of ASI without it also killing us all through our short-sighted greed.
I think this is right at the broad level. But once you’ve accepted that getting a good outcome from AGI is the most important thing to work on, timelines matter a lot again, because they determine which direction of work is most effective. Figuring out exactly what kind of AGI we have to align, and how long we have to do it, is pretty crucial for having the best possible alignment work done by the time we hit takeover-capable AGI.
I’ll completely grant that this is only a first-order approximation, and that for most (all?) technical work, getting more specific timelines matters a lot. I wanted to make this point because I see many laypeople not quite buying the “AGI in X years” timelines, because X years seems like a short time (for most values of X, for most laypeople); but the moment I switch the phrasing to “computers are ~40 years old; given this rate of progress we’ll almost certainly have AGI in your lifetime”, they become convinced that AI safety is a problem worth worrying about.
Got it. I see the value, and I’ll do likewise. There is another step of saying “and we need to get moving on it now”, but doing the first step of “this is the most important thing in your world” is a good start.
If it includes all humans, then every passing second of delay is too late for someone (present mortality is more than one human death per second, so any potential cure or rejuvenation therapy always arrives too late for someone).
But also, a typical person’s “circle of immediate care” tends to include some old people, and even for young people it is a probabilistic game: some young people will learn of their fatal diagnoses today.
So, no, the delays are not free. We have more than a million human deaths per week.
If, for example, you are 20 and talking about the next 40 years, well, more than 1% of 60-year-old males die within a year. The chance of a 20-year-old dying before 60 is about 9% for females and about 15% for males. What do you mean by “almost certain”?
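(A rough sketch of where figures like these come from, assuming annual mortality rates from a standard period life table; this is my illustration, not necessarily the commenter’s exact source. If $q_a$ is the probability of dying at age $a$ conditional on having reached it, then the probability that a 20-year-old does not reach 60 is

$$1 - \prod_{a=20}^{59} (1 - q_a),$$

which, with typical published life-table values for $q_a$, lands in the ballpark of the 9%/15% figures quoted above.)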