Inferential credit history

Here’s an interview with Seth Baum. Seth is an expert in risk analysis and a founder of the Global Catastrophic Risk Institute. As expected, Bill O’Reilly caricatured Seth as extreme, and intercut his interview with dramatic scenes from alien-invasion films. As a professional provocateur, it is O’Reilly’s job to throw down the gauntlet to his guests. Also as expected, Seth put on a calm and confident performance. Was the interview net-positive or negative? It’s hard to say, even in retrospect. Getting any publicity for catastrophic risk reduction is good, and difficult. Still, I’m not sure just how bad publicity has to be before it really is bad publicity…

Explaining catastrophic risks to the audience of Fox News is perhaps as difficult as explaining the risk of artificial intelligence to anyone. This is a task that frustrated Eliezer Yudkowsky so deeply that he was driven to write the epic LessWrong sequences. In his view, the inferential distance was too large to be bridged in a single conversation. There were too many things that he knew that were prerequisites to understanding his current plan. So he wrote a sequence of online posts setting out everything he knew about cognitive science and probability theory, applied to help readers think more clearly and live out their scientific values. He had to write a thousand words per day for about two years before talking about AI explicitly. Perhaps surprisingly, and as an enormous credit to Eliezer’s brain, these sequences formed the founding manifesto of the quickly growing rationality movement, many of whose members now share his concerns about AI. Since he wrote them, his Machine Intelligence Research Institute (formerly the Singularity Institute) has grown rapidly and spun off the Center for Applied Rationality, a teaching facility dedicated to the promotion of public rationality.

Why have Seth and Eliezer had such a hard time? Inferential distance explains a lot, but I have a second explanation: Seth and Eliezer had to build an inferential credit history. By the time you get to the end of the sequences, you have seen Eliezer bridge many an inferential distance, and you trust him to span another! If, each time I have loaned Eliezer some attention and suspended my disbelief, he has paid me back (in the currency of interesting and useful insight), then I will listen to him say things that I don’t yet believe for a long time.

When I watch Seth on The Factor, his interview is coloured by his triple-A credit rating. We have talked before, and I have read his papers. For the rest of the audience, he had no time to build intellectual rapport. It’s not just that the inferential distance was large; it’s more that he didn’t have a credit rating of sufficient quality to take out a loan of that magnitude!

I contend that if you want to explain something abstract and unfamiliar, first you have to give a bunch of small and challenging chunks of insight, some of which must be practically applicable. Ideally, you will lead your audience on a trek across a series of inferential distances, each slightly bigger than the last. It helps if these insights fill in some of the steps toward understanding the bigger picture, but that isn’t necessary.

This proposal could explain why historical explanations are often effective. Explanations that go like:

Initially I wanted to help people. And then I read The Life You Can Save. And then I realised I had been neglecting to think about large numbers of people. And then I read about scope insensitivity, which made me think this, and then I read Bostrom’s Fable of the Dragon-Tyrant, which made me think that, and so on…

This kind of explanation is often disorganised, with frequent detours and false turns – steps in your ideological history that turned out to be wrong or unhelpful. The good thing about historical explanations is that they are stories, and that they have a main character – you – which makes them more compelling. I would argue that a further advantage is that they give you the opportunity to borrow lots of small amounts of your audience’s attention, and so accrue the good credit rating that you will need to make your boldest claims.

Lastly, let me present an alternative philosophy for overcoming inferential distances. It will seem to contradict what I have said so far, although I also find it useful.

If you say that idea X is crazy, then this can often become a self-fulfilling prophecy.

On this view, those who publicise AI risk should never complain about, and rarely talk about, the large inferential distance before them, least of all publicly. They should normalise their proposal by treating it as normal. I still think it’s important for them to acknowledge any intuitive reluctance on the part of their audience to entertain the idea. It’s like how, if you don’t appear embarrassed after committing a faux pas, you’re seen as untrustworthy. But after acknowledging this challenge, they had best get back to their subject material, as any normal person would!

So if you believe in inferential distance, inferential credit history (building trust), and acting normal, then explain hard things by beginning with lots of easy things, building larger and larger bridges, and acknowledging, but not overemphasising, any difficulties.

[also posted on my blog]