My current framework for thinking about AGI timelines

At the beginning of 2017, someone I deeply trusted said they thought AGI would arrive within 10 years, with 50% probability.

I didn’t take their opinion at face value, especially since so many experts seemed confident that AGI was decades away. But the possibility of imminent apocalypse seemed plausible enough and important enough that I decided to prioritize investigating AGI timelines over trying to strike gold. I left the VC-backed startup I’d cofounded, and went around talking to every smart and sensible person I could find who seemed to have opinions about when humanity would develop AGI.

My biggest takeaways after 3 years might be disappointing: I don’t think the considerations currently available to us point to any decisive conclusion one way or another, and I don’t think anybody really knows when AGI is coming. At the very least, the fields of knowledge that I think bear on AGI forecasting (including deep learning, predictive coding, and comparative neuroanatomy) are disparate, and I don’t know of any careful and measured thinkers with all the relevant expertise.

That being said, I did manage to identify a handful of background variables that consistently play significant roles in informing people’s intuitive estimates of when we’ll get to AGI. In other words, people would often tell me that their estimates of AGI timelines would significantly change if their views on one of these background variables changed.

I’ve put together a framework for understanding AGI timelines based on these background variables. Of all the frameworks for AGI timelines I’ve encountered, it’s the one that most comprehensively enumerates the crucial considerations, and the one that best explains how smart and sensible people might arrive at vastly different views on AGI timelines.

Over the course of the next few weeks, I’ll publish a series of posts about these background variables and some considerations that shed light on their values. I’ll conclude by describing my framework for how they come together to explain various overall viewpoints on AGI timelines, depending on different prior assumptions about the values of these variables.

By trade, I’m a math competition junkie, an entrepreneur, and a hippie. I am not an expert on any of the topics I’ll be writing about: my analyses will not be comprehensive, and they might contain mistakes. I’m sharing them with you anyway in the hopes that you might contribute your own expertise, correct for my epistemic shortcomings, and perhaps find them interesting.

I’d like to thank Paul Christiano, Jessica Taylor, Carl Shulman, Anna Salamon, Katja Grace, Tegan McCaslin, Eric Drexler, Vlad Firiou, Janos Kramar, Victoria Krakovna, Jan Leike, Richard Ngo, Rohin Shah, Jacob Steinhardt, David Dalrymple, Catherine Olsson, Jelena Luketina, Alex Ray, Jack Gallagher, Ben Hoffman, Tsvi BT, Sam Eisenstat, Matthew Graves, Ryan Carey, Gary Basin, Eliana Lorch, Anand Srinivasan, Michael Webb, Ashwin Sah, Yi Sun, Mark Sellke, Alex Gunning, Paul Kreiner, David Girardo, Danit Gal, Oliver Habryka, Sarah Constantin, Alex Flint, Stag Lynn, Andis Draguns, Tristan Hume, Holden Lee, David Dohan, and Daniel Kang for enlightening conversations about AGI timelines, and I’d like to apologize to anyone whose name I ought to have included, but forgot to include.

Table of contents

As I post over the coming weeks, I’ll update this table of contents with links to the posts, and I might update some of the titles and descriptions.

How special are human brains among animal brains?

Humans can perform intellectual feats that appear qualitatively different from those of other animals, but are our brains really doing anything so different?

How uniform is the neocortex?

To what extent is the part of our brain responsible for higher-order functions like sensory perception, cognition, and language[1] uniformly composed of general-purpose data-processing modules?

How much are our innate cognitive capacities just shortcuts for learning?

To what extent are our innate cognitive capacities (for example, a pre-wired ability to learn language) crutches provided by evolution to help us learn more quickly what we otherwise would have been able to learn anyway?

Are mammalian brains all doing the same thing at different levels of scale?

Are the brains of smarter mammals, like humans, doing essentially the same things as the brains of less intelligent mammals, like mice, except at a larger scale?

How simple is the simplest brain that can be scaled?

If mammalian brains can be scaled, what’s the simplest brain that could be? A turtle’s? A spider’s?

How close are we to simple biological brains?

Given how little we understand about how brains work, do we have any reason to think we can recapitulate the algorithmic function of even simple biological brains?

What’s the smallest set of principles that can explain human cognition?

Is there a small set of principles that underlies the breadth of cognitive processes we’ve observed (e.g. language, perception, memory, attention, and reasoning)[2], similarly to how Newton’s laws of motion underlie a breadth of seemingly disparate physical phenomena? Or is our cognition more like a big mess of irreducible complexity?

How well can humans compete against evolution in designing general intelligences?

Humans can design some things much better than evolution (like rockets), and evolution can design some things much better than humans (like immune systems). Where does general intelligence lie on this spectrum?

Tying it all together, part I

My framework for what these variables tell us about AGI timelines

Tying it all together, part II

My personal views on AGI timelines


  1. https://en.wikipedia.org/wiki/Neocortex

  2. https://en.wikipedia.org/wiki/Cognitive_science