Inferential credit history

Here’s an interview with Seth Baum. Seth is an expert in risk analysis and a founder of the Global Catastrophic Risk Institute. As expected, Bill O’Reilly caricatured Seth as an extremist, cutting up his interview with dramatic clips from alien-invasion films. As a professional provocateur, O’Reilly sees it as his job to throw down the gauntlet to his guests. Also as expected, Seth put on a calm and confident performance. Was the interview net-positive or net-negative? It’s hard to say, even in retrospect. Getting any publicity for catastrophic risk reduction is both valuable and difficult. Still, I’m not sure just how bad publicity has to be before it really is bad publicity…

Explaining catastrophic risks to the audience of Fox News is perhaps as difficult as explaining the risk of artificial intelligence to anyone. That task frustrated Eliezer Yudkowsky so deeply that he was driven to write the epic LessWrong sequences. In his view, the inferential distance was too large to be bridged in a single conversation: too many of the things he knew were prerequisites for understanding his plan. So he wrote a long sequence of online posts setting out everything he knew about cognitive science and probability theory, applied to help readers think more clearly and live out their scientific values. He had to write a thousand words per day for about two years before talking about AI explicitly. Perhaps surprisingly, and as an enormous credit to Eliezer’s brain, these sequences became the founding manifesto of the quickly growing rationality movement, many of whose members now share his concerns about AI. Since he wrote them, his Machine Intelligence Research Institute (formerly the Singularity Institute) has grown rapidly and spun off the Center for Applied Rationality, a teaching facility and monument to the promotion of public rationality.

Why have Seth and Eliezer had such a hard time? Inferential distance explains a lot, but I have a second explanation: Seth and Eliezer had to build an inferential credit history. By the time you get to the end of the sequences, you have seen Eliezer bridge many an inferential distance, and you trust him to span another! If, each time I lend Eliezer some attention and suspend my disbelief, he pays me back (in the currency of interesting and useful insight), then I will keep listening to him say things I don’t yet believe for a long time.

When I watch Seth on The Factor, his interview is coloured by his triple-A credit rating: we have talked before, and I have read his papers. With the rest of the audience, he had no time to build intellectual rapport. It’s not just that the inferential distance was large; it’s that he didn’t have a credit rating of sufficient quality to take out a loan of that magnitude!

I contend that if you want to explain something abstract and unfamiliar, you first have to offer a bunch of small, challenging chunks of insight, some of which must be practically applicable, and ideally you will lead your audience on a trek across a series of inferential distances, each slightly bigger than the last. It helps if these steps fill in some of the path toward the bigger picture, but that isn’t strictly necessary.

This proposal could explain why historical explanations are often effective. Explanations that go something like this:

Initially I wanted to help people. And then I read The Life You Can Save. And then I realised I had been neglecting to think about large numbers of people. And then I read about scope insensitivity, which made me think this, and then I read Bostrom’s Fable of the Dragon Tyrant, which made me think that, and so on…

This kind of explanation is often disorganised, with frequent detours and false turns – steps in your ideological history that turned out to be wrong or unhelpful. The good thing about historical explanations is that they are stories, and that they have a main character – you – all of which makes them more compelling. A further advantage, I’d argue, is that they give you the opportunity to borrow lots of small amounts of your audience’s attention and accrue the good credit rating you will need to make your boldest claims.

Lastly, let me present an alternative philosophy for overcoming inferential distances. It will seem to contradict what I have said so far, though I find it useful too.

If you say that idea X is crazy, then this can often become a self-fulfilling prophecy.

On this view, those who publicise AI risk should never complain about, and should rarely even mention, the large inferential distance before them, least of all publicly. They should normalise their proposal by treating it as normal. I still think it’s important for them to acknowledge any intuitive reluctance on the part of their audience to entertain the idea. It’s a bit like how someone who doesn’t appear embarrassed after committing a faux pas is seen as untrustworthy. But having acknowledged this challenge, they had best get back to their subject material, as any normal person would!

So if you believe in inferential distance, inferential credit history (building trust), and acting normal, then explain hard things by beginning with lots of easy things, building larger and larger bridges, and acknowledging, without overemphasising, any difficulties.

[also posted on my blog]