Precise AGI timelines don’t matter that much.
While I do spend some time discussing AGI timelines (and I’ve written some posts about it recently), I don’t think moderate quantitative differences in AGI timelines matter that much for deciding what to do[1]. For instance, having a 15-year median rather than a 6-year median doesn’t make that big of a difference. That said, I do think that moderate differences in the chance of very short timelines (i.e., less than 3 years) matter more: going from a 20% chance to a 50% chance of full AI R&D automation within 3 years should potentially make a substantial difference to strategy.[2]
Additionally, my guess is that the most productive way to engage with discussion around timelines is mostly to not care much about resolving disagreements, but then when there appears to be a large chance that timelines are very short (e.g., >25% in <2 years) it’s worthwhile to try hard to argue for this.[3] I think takeoff speeds are much more important to argue about when making the case for AI risk.
I do think that having somewhat precise views is helpful for some people doing relatively precise prioritization among people already working on safety, but this seems pretty niche.
Given that I don’t think timelines are that important, why have I been writing about this topic? This is due to a mixture of: I find it relatively quick and easy to write about timelines, my commentary is relevant to the probability of very short timelines (which I do think is important as discussed above), a bunch of people seem interested in timelines regardless, and I do think timelines matter some.
Consider reflecting on whether you’re overly fixated on details of timelines.
I’ve seen Richard Ngo make this point before, though I couldn’t find where he did this. More generally, this isn’t a very original point; I just think it’s worth making given that I’ve been talking about timelines recently.
I also think that the chance that very powerful AI happens under this presidential administration is action-relevant for policy.
You could have views such that you expect to never be >25% confident in <2-year timelines until it’s basically too late. For instance, maybe you expect very fast takeoff driven by a single large algorithmic advance. Under this view, I think arguing about the details of timelines looks even less good and you should mostly make the case for risk independently of this, perhaps arguing “it seems like AI could emerge quickly and unexpectedly, so we need to act now”.
I think most of the value in researching timelines is in developing models that can then be quickly updated as new facts come to light, as opposed to figuring out how to think about the implications of such facts only after they become available.
People might substantially disagree about parameters of such models (and the timelines they predict) while agreeing on the overall framework, and building common understanding is important for coordination. Also, you wouldn’t necessarily a priori know which facts to track, without first having developed the models.
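As a toy illustration of what I mean (all hypotheses, priors, and likelihood numbers below are made-up placeholders, not anyone’s actual estimates), such a model can be as simple as an explicit prior over a few timeline buckets that gets reweighted by likelihood ratios whenever a new fact comes in:

```python
# Toy sketch: a timeline model that can be quickly updated as new facts arrive.
# All hypotheses, priors, and likelihoods are illustrative placeholders.

hypotheses = {
    "full AI R&D automation in <3 years": 0.15,
    "AGI in 3-10 years": 0.45,
    "AGI in 10-30 years": 0.30,
    "no AGI by 2055": 0.10,
}

def update(prior, likelihoods):
    """Bayesian update: reweight each hypothesis by P(new fact | hypothesis)."""
    unnormalized = {h: p * likelihoods[h] for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Example: a new result that short-timeline worlds predicted more strongly.
likelihood_of_new_fact = {
    "full AI R&D automation in <3 years": 0.8,
    "AGI in 3-10 years": 0.6,
    "AGI in 10-30 years": 0.3,
    "no AGI by 2055": 0.2,
}

posterior = update(hypotheses, likelihood_of_new_fact)
for h, p in posterior.items():
    print(f"{h}: {p:.2f}")
```

The point isn’t the particular numbers; it’s that once the structure exists, a new benchmark result or compute datapoint becomes a quick reweighting rather than a from-scratch argument, and disagreements can be localized to specific parameters.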
I super agree; I also think that the value is in debating the models of intelligence explosion, which is why I made my website: ai-2028.com or intexp.xyz
It seems like a bad sign that, even with maximally optimistic inputs, your model never falsely retrodicts intelligence explosions in the past.
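(To spell out what such a retrodiction check looks like: run the model with its inputs set to past conditions and confirm it doesn’t output an explosion that never happened. A minimal sketch, with an entirely made-up toy model rather than the site’s actual one:)

```python
# Toy retrodiction check: feed a (hypothetical, made-up) takeoff model optimistic
# estimates for past years and confirm it predicts no intelligence explosion.

def toy_explosion_model(compute_growth, algo_progress):
    """Made-up model: predicts an explosion if combined annual effective
    capability growth exceeds a threshold. Purely illustrative."""
    return compute_growth * algo_progress > 10.0  # arbitrary threshold

# Maximally optimistic (still made-up) annual growth factors for past decades.
past_years = {1995: (1.6, 1.2), 2005: (1.8, 1.3), 2015: (2.0, 1.5)}

false_retrodictions = [
    year for year, (compute, algo) in past_years.items()
    if toy_explosion_model(compute, algo)
]
print("False retrodictions:", false_retrodictions)  # expect: []
```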
For those of us who do favor “very short timelines”, any thoughts?
For people who are comparatively advantaged at this, it seems good to try to make the case for this in a variety of different ways. One place to start is to try to convince relatively soft target audiences like me (who’s sympathetic but disagrees), e.g. by posting on LW, and then go somewhere from there.
I think it’s a rough task, but ultimately worth trying.
Personally, it will be impossible for me to ignore the part of me that wonders “is this AGI/ASI stuff actually, for real, coming, or will it turn out to be fake?” Studying median timelines bleeds into the question of whether AGI by my natural lifespan is 90% likely or 99.5% likely, and vice versa. So I will continue thinking very carefully about evidence of AGI progress.
Absence of AGI[1] by (say) 2055 is predicted by models that deserve to be developed in earnest (I’d currently give the claim 15%, with 10% mostly for technological reasons and 5% mostly because of a human-instituted lasting Pause or a disaster). This doesn’t significantly affect the median timeline yet, but as time goes on these models can get stronger (Moore’s law even in price-performance form breaking down, continual learning turning out to be a grand algorithmic obstruction that might take decades to solve, with in-context learning not good enough for this purpose within available compute). And this would start affecting the median timeline more and more. Also, development of AGI might result in a lasting ASI[2] Pause (either through societal backlash or from AGIs themselves insisting on this to prevent ASIs misaligned with them before they figure out how to align ASIs).
AGIs are AIs unbounded in ability to develop civilization on their own, without needing substantial human input, including by inventing aligned-with-them ASIs.
ASIs are qualitatively more intelligent than humans or humanity, while non-ASI AGIs are reasonably comparable to humans or humanity, even if notably more capable.
This is only somewhat related to what you were saying, but I do think the difference between a 100-year median and a 10-year median matters a bunch.