Inferential Distance

Inferential distance between two people, with respect to an item of knowledge, is the number of steps or concepts they need to share before one can successfully communicate the object-level point to the other. It can be thought of as the missing foundational or building-block concepts needed to think clearly about a specific thing.

In Expecting Short Inferential Distances, Eliezer Yudkowsky posits that humans systematically underestimate inferential distances.

And if you think you can explain the concept of “systematically underestimated inferential distances” briefly, in just a few words, I’ve got some sad news for you . . . – Expecting Short Inferential Distances

Example: Evidence for Evolution

Explaining the evidence for the theory of evolution to a physicist would be easy; even if the physicist didn’t already know about evolution, they would understand the concepts of evidence, Occam’s razor, naturalistic explanations, and the general orderly nature of the universe. Explaining the evidence for the theory of evolution to someone without a science background would be much harder. Before even mentioning the specific evidence for evolution, you would have to explain the concept of evidence, why some kinds of evidence are more valuable than others, what does and doesn’t count as evidence, and so on. This would be unlikely to work during a short conversation.

There is a short inferential distance between you and the physicist; there is a very long inferential distance between you and the person without a science background. Many members of Less Wrong consider expecting short inferential distances to be a classic error. It is also a difficult problem to solve, since most people will feel offended if you explicitly tell them that the inferential distance between you and them is too great to explain a theory properly. Some people have attempted to explain this tendency through evolutionary psychology: in the ancestral environment, there was minimal difference in knowledge between people, and therefore no need to worry about inferential distances.

See Also

Expecting Short Inferential Distances
Eliezer Yudkowsky · 22 Oct 2007 23:42 UTC · 338 points · 106 comments · 3 min read · LW link

Illusion of Transparency: Why No One Understands You
Eliezer Yudkowsky · 20 Oct 2007 23:49 UTC · 158 points · 52 comments · 3 min read · LW link

Double Illusion of Transparency
Eliezer Yudkowsky · 24 Oct 2007 23:06 UTC · 113 points · 33 comments · 3 min read · LW link

Great minds might not think alike
Eric Neyman · 26 Dec 2020 19:51 UTC · 301 points · 45 comments · 11 min read · LW link · (ericneyman.wordpress.com)

Write a Thousand Roads to Rome
Screwtape · 8 Feb 2018 18:09 UTC · 105 points · 17 comments · 4 min read · LW link

Presumptive Listening: sticking to familiar concepts and missing the outer reasoning paths
Remmelt · 27 Dec 2022 15:40 UTC · −14 points · 8 comments · 2 min read · LW link · (mflb.com)

Explainers Shoot High. Aim Low!
Eliezer Yudkowsky · 24 Oct 2007 1:13 UTC · 97 points · 35 comments · 1 min read · LW link

Expansive translations: considerations and possibilities
ozziegooen · 18 Sep 2020 15:39 UTC · 43 points · 15 comments · 6 min read · LW link

Inferential silence
Kaj_Sotala · 25 Sep 2013 12:45 UTC · 80 points · 58 comments · 1 min read · LW link

Karate Kid and Realistic Expectations for Disagreement Resolution
Raemon · 4 Dec 2019 23:25 UTC · 101 points · 23 comments · 4 min read · LW link

Starting point for calculating inferential distance?
JenniferRM · 3 Dec 2010 20:20 UTC · 22 points · 9 comments · 2 min read · LW link

Complexity: inherent, created, and hidden
Swimmer963 (Miranda Dixon-Luinenburg) · 14 Sep 2011 14:33 UTC · 9 points · 49 comments · 4 min read · LW link

Generalizing From One Example
Scott Alexander · 28 Apr 2009 22:00 UTC · 429 points · 422 comments · 6 min read · LW link

Understanding is translation
cousin_it · 28 May 2018 13:56 UTC · 92 points · 23 comments · 1 min read · LW link

An Intuitive Explanation of Inferential Distance
RichardJActon · 26 Nov 2017 14:13 UTC · 14 points · 6 comments · 3 min read · LW link

Bridging Inferential Gaps
atucker · 8 Dec 2010 4:50 UTC · 13 points · 20 comments · 2 min read · LW link

Zetetic explanation
Benquo · 27 Aug 2018 0:12 UTC · 90 points · 138 comments · 6 min read · LW link · (benjaminrosshoffman.com)

Unteachable Excellence
Eliezer Yudkowsky · 2 Mar 2009 15:33 UTC · 46 points · 41 comments · 2 min read · LW link

Take heed, for it is a trap
Zed · 14 Aug 2011 10:23 UTC · 56 points · 189 comments · 6 min read · LW link

The Sally-Anne fallacy
philh · 11 Apr 2016 13:06 UTC · 64 points · 20 comments · 1 min read · LW link

Inferential credit history
RyanCarey · 24 Jul 2013 14:12 UTC · 58 points · 36 comments · 3 min read · LW link

Mental Metadata
Alicorn · 30 Mar 2011 3:07 UTC · 47 points · 35 comments · 3 min read · LW link

Mapping Another’s Universe
squidious · 17 Nov 2017 2:37 UTC · 11 points · 5 comments · 3 min read · LW link

Why do people ____?
magfrump · 4 May 2012 4:20 UTC · 36 points · 255 comments · 1 min read · LW link

Blind Goaltenders: Unproductive Disagreements
PDV · 28 Sep 2017 16:19 UTC · 19 points · 8 comments · 2 min read · LW link

Taking the Outgroup Seriously
Davis_Kingsley · 16 Feb 2020 13:23 UTC · 21 points · 8 comments · 2 min read · LW link

Mental Crystallography
Alicorn · 27 Feb 2010 1:04 UTC · 30 points · 59 comments · 2 min read · LW link

Debugging the student
Adam Zerner · 16 Dec 2020 7:07 UTC · 46 points · 7 comments · 4 min read · LW link

[Link] Thick and thin
[deleted] · 6 Jun 2012 12:08 UTC · 39 points · 12 comments · 2 min read · LW link

Arguing from a Gap of Perspective
ideenrun · 1 May 2021 22:42 UTC · 6 points · 1 comment · 19 min read · LW link

Monks of Magnitude
[DEACTIVATED] Duncan Sabien · 18 Feb 2022 7:48 UTC · 107 points · 50 comments · 5 min read · LW link · 1 review

On Successful Communication Across a Wide Inferential Distance
Mahdi Complex · 22 Apr 2022 20:08 UTC · 6 points · 5 comments · 2 min read · LW link

[Question] What Do AI Safety Pitches Not Get About Your Field?
Aris · 22 Sep 2022 21:27 UTC · 28 points · 3 comments · 1 min read · LW link

Tim Berners-Lee found it hard to explain the web
Tor Økland Barstad · 10 Apr 2023 13:33 UTC · 25 points · 2 comments · 1 min read · LW link

[SEE NEW EDITS] No, *You* Need to Write Clearer
NicholasKross · 29 Apr 2023 5:04 UTC · 254 points · 64 comments · 5 min read · LW link · (www.thinkingmuchbetter.com)

The Typical Sex Life Fallacy
ozymandias · 7 Oct 2017 21:48 UTC · 38 points · 30 comments · 5 min read · LW link

Reframing misaligned AGI’s: well-intentioned non-neurotypical assistants
zhukeepa · 1 Apr 2018 1:22 UTC · 46 points · 14 comments · 2 min read · LW link

Expected Pain Parameters
Alicorn · 14 Jul 2018 19:30 UTC · 87 points · 12 comments · 2 min read · LW link

The Power to Teach Concepts Better
Liron · 23 Sep 2019 0:21 UTC · 89 points · 22 comments · 8 min read · LW link · 1 review

In My Culture
[DEACTIVATED] Duncan Sabien · 7 Mar 2019 7:22 UTC · 66 points · 59 comments · 1 min read · LW link · 2 reviews · (medium.com)

LW Women- Minimizing the Inferential Distance
daenerys · 25 Nov 2012 23:33 UTC · 97 points · 1,261 comments · 7 min read · LW link

Being a teacher
Swimmer963 (Miranda Dixon-Luinenburg) · 14 Mar 2011 20:03 UTC · 80 points · 155 comments · 3 min read · LW link

Typical Mind and Politics
Scott Alexander · 12 Jun 2009 12:28 UTC · 58 points · 133 comments · 6 min read · LW link

Dominus’ Razor
badger · 26 May 2011 1:05 UTC · 61 points · 27 comments · 1 min read · LW link

Overcoming the Curse of Knowledge
JesseGalef · 18 Oct 2011 17:39 UTC · 59 points · 56 comments · 3 min read · LW link

Personhood is a Religious Belief
jan Sijan · 3 May 2023 16:16 UTC · −42 points · 28 comments · 6 min read · LW link

Teaching the Unteachable
Eliezer Yudkowsky · 3 Mar 2009 23:14 UTC · 56 points · 18 comments · 6 min read · LW link

False Friends and Tone Policing
palladias · 18 Jun 2014 18:20 UTC · 71 points · 49 comments · 3 min read · LW link

A few analogies to illustrate key rationality points
kilobug · 9 Oct 2011 13:00 UTC · 69 points · 52 comments · 5 min read · LW link