[Question] How to talk about reasons why AGI might not be near?

I occasionally have some thoughts about why AGI might not be as near as a lot of people seem to think, but I’m confused about how/whether to talk about them in public.

The biggest reason for not talking about them is that one person’s “here is a list of capabilities that I think an AGI would need to have, that I don’t see there being progress on” is another person’s “here’s a roadmap of AGI capabilities that we should do focused research on”. Any articulation of missing capabilities that is clear enough to be convincing also seems clear enough to get people thinking about how to achieve those capabilities.

At the same time, the community thinking that AGI is closer than it really is (if that’s indeed the case) has numerous costs, including at least:

  • Immense mental health costs to a huge number of people who think that AGI is imminent

  • People at large making bad strategic decisions that end up having major costs, e.g. not putting any money in savings because they expect it to not matter soon

  • Alignment people specifically making bad strategic decisions that end up having major costs, e.g. focusing on alignment approaches that only pay off in the short term and neglecting more foundational long-term research

  • Alignment people losing credibility and getting a reputation for crying wolf once predicted AGI advances fail to materialize

Having a better model of what exactly is missing could conceivably also make it easier to predict when AGI will actually be near. But I’m not sure to what extent this is actually the case, since the development of core AGI competencies feels more like a question of insight than grind[1], and insight seems very hard to predict.

A benefit from this that does seem more plausible would be if the analysis of capabilities gave us information that we could use to figure out what a good future landscape would look like. For example, suppose that we aren’t likely to get AGI soon and that the capabilities we currently have will create a society that looks more like the one described in Comprehensive AI Services, and that such services could safely be used to detect signs of actually dangerous AGIs. If this were the case, then it would be important to know that we may want to accelerate the deployment of technologies that are taking the world in a CAIS-like direction, and possibly e.g. promote rather than oppose things like open source LLMs.

One argument would be that if AGI really isn’t near, then that’s going to be obvious pretty soon, and it’s unlikely that my arguments in particular for this would be all that unique—someone else would be likely to make them soon anyway. But I think this argument cuts both ways—if someone else is likely to make the same arguments soon anyway, then there’s also limited benefit in writing them up. (Of course, if it saves people from significant mental anguish, even just making those arguments slightly earlier seems good, so overall this argument seems like it’s weakly in favor of writing up the arguments.)

  1.

    From Armstrong & Sotala (2012):

    Some AI predictions claim that AI will result from grind: i.e. lots of hard work and money. Others claim that AI will need special insights: new unexpected ideas that will blow the field wide open (Deutsch 2012).

    In general, we are quite good at predicting grind. Project managers and various leaders are often quite good at estimating the length of projects (as long as they’re not directly involved in the project (Buehler, Griffin, and Ross 1994)). Even for relatively creative work, people have sufficient feedback to hazard reasonable guesses. Publication dates for video games, for instance, though often over-optimistic, are generally not ridiculously erroneous—even though video games involve a lot of creative design, play-testing, art, programming the game “AI,” etc. Moore’s law could be taken as an ultimate example of grind: we expect the global efforts of many engineers across many fields to average out to a rather predictable exponential growth.

    Predicting insight, on the other hand, seems a much more daunting task. Take the Riemann hypothesis, a well-established mathematical hypothesis from 1859 (Riemann 1859). How would one go about estimating how long it would take to solve? How about the P = NP hypothesis in computing? Mathematicians seldom try to predict when major problems will be solved, because they recognize that insight is very hard to predict. And even if predictions could be attempted (the age of the Riemann hypothesis hints that it probably isn’t right on the cusp of being solved), they would need much larger error bars than grind predictions. If AI requires insights, we are also handicapped by the fact of not knowing what these insights are (unlike the Riemann hypothesis, where the hypothesis is clearly stated, and only the proof is missing). This could be mitigated somewhat if we assumed there were several different insights, each of which could separately lead to AI. But we would need good grounds to assume that.