A Sketch of Good Communication

“Often I compare my own Fermi estimates with those of other people, and that’s sort of cool, but what’s way more interesting is when they share what variables and models they used to get to the estimate.”

- Oliver Habryka, at a model building workshop at FHI in 2016


One question that people in the AI x-risk community often ask is

“By what year do you assign a 50% probability of human-level AGI?”

We go back and forth with statements like “Well, I think you’re not updating enough on AlphaGo Zero.” “But did you know that person X has 50% in 30 years? You should weigh that heavily in your calculations.”

However, ‘timelines’ is not the interesting question. The interesting parts are in the causal models behind the estimates. Some possibilities:

  • Do you have a story about how the brain in fact implements back-propagation, and thus whether current ML techniques have all the key insights?

  • Do you have a story about the reference class of human brains and monkey brains and evolution, that gives a forecast for how hard intelligence is and thus whether it’s achievable this century?

  • Do you have a story about the amount of resources flowing into the problem, that uses factors like ‘Number of PhDs in ML handed out each year’ and ‘Amount of GPU available to the average PhD’? (A toy version of such a model is sketched just below.)
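
For concreteness, here is a minimal sketch of what that last kind of resource model might look like. Every number and variable name is invented for illustration; the point is only that the variables, and the relationships between them, are written down where someone else can inspect and dispute them.

```python
# A toy Fermi-style resource model. All quantities are made up for
# illustration; the value is that the assumptions are explicit, so two
# people can see exactly where their models diverge.

phds_per_year = 5_000          # assumed: new ML PhDs entering the field each year
gpu_hours_per_phd = 50_000     # assumed: compute available to the average researcher per year
insight_per_gpu_hour = 1e-10   # assumed: fraction of the remaining problem solved per GPU-hour
progress_needed_for_agi = 1.0  # normalise the total required progress to 1.0

annual_progress = phds_per_year * gpu_hours_per_phd * insight_per_gpu_hour
years_to_agi = progress_needed_for_agi / annual_progress

print(f"Annual progress under these assumptions: {annual_progress:.3f}")
print(f"Naive years to AGI under this toy model: {years_to_agi:.0f}")
```

Two people who disagree about timelines can now argue about `insight_per_gpu_hour` or the growth of `phds_per_year`, rather than about the bottom-line year.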

Timelines is an area where many people discuss one variable all the time, when in fact the interesting disagreement lies much deeper. Regardless of whether our 50% dates are close, when you and I have different models we will often recommend contradictory strategies for reducing x-risk.

For example, Eliezer Yudkowsky, Robin Hanson, and Nick Bostrom all have different timelines, but their models tell such different stories about what’s happening in the world that focusing on timelines instead of the broad differences in their overall pictures is a red herring.

(If two very different models converge in many places, this is indeed evidence that they are both capturing the same thing, and the more different the two models are, the more likely that shared thing is the truth. But if two models significantly disagree on strategy and outcome yet hit the same 50% confidence date, we should not count this as agreement.)
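
A toy numerical illustration of that last point, with all distributions and numbers invented for the example: two forecasts can share a median year while implying very different probabilities for the near term, and therefore very different strategies.

```python
# Two hypothetical forecasters who agree on the 50% date but hold very
# different underlying models. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Forecaster A: steady incremental progress, tightly clustered around 2045.
samples_a = rng.normal(loc=2045, scale=3, size=100_000)

# Forecaster B: a single unpredictable breakthrough, spread evenly over 2025-2065.
samples_b = rng.uniform(low=2025, high=2065, size=100_000)

for name, s in [("A", samples_a), ("B", samples_b)]:
    print(f"Forecaster {name}: median {np.median(s):.0f}, "
          f"P(AGI before 2030) = {np.mean(s < 2030):.2f}")
```

Both medians land on roughly the same year, yet A assigns almost no probability to the next decade while B assigns about an eighth, which is exactly the kind of difference that drives contradictory strategies.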

Let me sketch a general model of communication.

A Sketch

Step 1: You each have a different model that predicts a different probability for a certain event.

“I see your probability of reaching human-level AGI in the next 25 years is 0.6, whereas mine is 0.3.”

Step 2: You talk until you have understood how they see the world.

“I understand that you think that all the funding and excitement means that the very best researchers of the next generation will be working on this problem in 10 years or so, and you think there’s a big difference between having a lot of average researchers versus having a few peak researchers.”

Step 3: You do some cognitive work to integrate their evidence and ontology with your own, and this implies a new probability.

“I have some models from neuroscience that suggest the problem is very hard. I’d thought you thought the problem was easy. But I agree that the greatest researchers (Feynmans, von Neumanns, etc.) can make significantly bigger jumps than the median researcher.
If we were simply increasing the absolute number of average researchers in the field, then I’d still expect AGI much slower than you, but if we now factor in the very peak researchers having big jumps of insight (for the rest of the field to capitalise on), then I think I actually have shorter timelines than you.”
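
A minimal, purely illustrative sketch of what this kind of integration is doing, with made-up numbers: rather than averaging the two bottom-line probabilities, you take the other person’s variable (peak researchers make outsized jumps) and re-run your own model with it.

```python
# Toy illustration of integrating models rather than averaging answers.
# All quantities are invented for the example.

progress_needed = 1.0        # total progress to AGI, normalised to 1
median_pool_progress = 0.02  # assumed: annual progress from the median-researcher pool
peak_jump_multiplier = 4.0   # assumed: boost you now accept from peak researchers' insights

old_years = progress_needed / median_pool_progress
new_years = progress_needed / (median_pool_progress * peak_jump_multiplier)

print(f"My old model: ~{old_years:.0f} years")
print(f"My integrated model: ~{new_years:.0f} years")
```

Notice the integrated answer is not a compromise between the two original estimates; incorporating the other person’s variable into your own model can move you past their estimate, as in the dialogue above.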

One of the common issues I see with disagreements in general is people jumping prematurely to Step 3 before spending time on Step 2. It’s as though, if you both agree on the decision node, you must surely agree on all the other nodes.

I prefer to spend an hour or two sharing models before trying to change either of our minds. Doing otherwise creates false consensus rather than successful communication. Going directly to Step 3 can be the right call when you’re on a logistics team and need to make a decision quickly, but it is quite inappropriate for research, and in my experience the most important communication challenges are around deep intuitions.

Don’t practice coming to agreement; practice exchanging models.

Something other than Good Reasoning

Here’s an alternative thing you might do after Step 1. This is where you haven’t changed your model, but decide to agree with the other person anyway.

This doesn’t make any sense, but people try it anyway, especially when they’re talking to high-status people and/or experts. “Oh, okay, I’ll try hard to believe what the expert said, so I look like I know what I’m talking about.”

This is the worst, because it means you can’t notice your confusion any more. It represents “Ah, I notice that p = 0.6 is inconsistent with my model, therefore I will throw out my model.” Equivalently, “Oh, I don’t understand something, so I’ll stop trying.”


This is the first post in a series of short thoughts on epistemic rationality, integrity, and curiosity. My thanks to Jacob Lagerros and Alex Zhu for comments on drafts.

Descriptions of your experiences of successful communication about subtle intuitions (in any domain) are welcomed.