The First Circle

Epistemic Status: Squared

The following took place at an unconference about AGI in February 2017, under Chatham House rules, so I won’t be using any identifiers for the people involved.

Something that mattered was happening.

I had just arrived in San Francisco from New York. I was having a conversation. It wasn’t a bad conversation, but it wasn’t important.

To my right on the floor was a circle of four people. They were having an important conversation. A real conversation. A sacred conversation.

This was impossible to miss. People I deeply respected spoke truth. Their truth. Important truth, about the most important question – whether and when AGI would be developed, and what we could do to change that date, or to ensure a good outcome. Primarily about the timeline. And that truth was quite an update from my answer, and from my model of what their answers would be.

I knew we were, by default, quite doomed. But not doomed so quickly!

They were unmistakably freaking out. Not in a superficial way. In a deep, calm, we are all doomed and we don’t know what to do about it kind of way. This freaked me out too.

They talked in a different way. Deliberate, careful. Words chosen carefully.

I did not know the generation mechanism. I did know that to disturb the goings-on would be profane. So I sat on a nearby couch and listened for about an hour. I said nothing.

At that point, a decision was made to move to another room. I followed, and during the walk was invited to join. Two participants left, two stayed, and I joined.

The space remained sacred. I knew it had different rules, and did my best to follow them. When I was incorrect, they explained. Use ‘I’ statements. Things about your own beliefs, your models, your feelings. Things you know to be true. Pay attention to your body, and how it is feeling, where things come from, what they are like. Report it. At one point one participant said they were freaking out. I observed I was freaking out. Someone else said they were not freaking out. I said I thought they were. The first reassured me they thought there was some possibility we’d survive. Based on their prior statements, that was an update. It helped a little.

I left exhausted by the discussion, the late hour and the three-hour time zone shift, and slept on it. Was this people just now waking up, perhaps not even fully? Or were people reacting too much to AlphaGo? Was this a west coast zeitgeist run amok? An information cascade?

Was this because people who understood that there was Impending Ancient Doom, and that we really should be freaking out about it, were used to freaking out vastly more than everyone else? So when Elon Musk freaked out and put a billion dollars into OpenAI without thinking it through, and other prominent people freaked out, they instinctively kept their relative freakout level a constant amount higher than the public’s freakout level, resulting in an overshoot?

Was this actually because Donald Trump had given people a sense that we were doomed, and they were finding a way to express that?

Most importantly, what about the models and logic? Did they make sense? The rest of the unconference contained many conversations on the same topic, and many other topics. There was an amazing AI timeline double crux, teaching me both how to double crux and much about timelines and AI development. But nothing else felt like that first circle.

As several days went by, and all the data points came together, I got a better understanding of both how much people had updated, and why people had updated. I stopped freaking out. Yes, the events of the previous year had been freakout-worthy, and shorter timelines should result from them. And yes, people’s prior timelines had been a combination of too long and based on heuristics not much correlated to actual future events. But this was an over-reaction, largely an information cascade, for a confluence of reasons.

I left super invigorated by the unconference, and started writing again.

Meta-note: This should not convince anyone of anything regarding AI safety, AI timelines or related topics, but I do urge all to treat these questions with the importance they deserve. The links in this post by Raymond Arnold are a good place to start if you wish to learn more.