Bridging Inferential Gaps and Explaining Rationality to Other People

This post is going in Discussion until I've edited it enough that I feel it's post-worthy, or until it does well.


Core Post:

Rationality has helped me do a lot of things (in the past year: being elected President of my robotics team, getting a girlfriend, writing good college apps (and getting into a bunch of good schools), etc.), and I feel sort of guilty for not helping other people use it.

I had made progress on a lot of those fronts before, but a bunch of things fell into place in a relatively short period of time after I started trying to optimize them. Some of my friends have easy-ish problems, but unsolicited, risky, counterintuitive advice is uncouth and unhelpful.

More pressingly, I want to pass on a lot of rationality knowledge to people I know before I graduate high school. Being in a fairly good Math/Science/Computer Science Magnet Program, I have access to a lot of smart, driven people who have a lot of flexibility in their lives, and I think it would be a shame if there were things I could tell them that would make them do a lot better, and I never did. On top of that, I want to pass on this knowledge within my robotics team so that they continue doing well.

Basically, I want to learn how to explain useful rationality concepts to other people in a non-annoying and effective way. As far as I can tell, many people want to do similar things, and find it difficult to do so.

I suspect that this topic is broad enough that it would be hard for a single person to tackle it in one post. So that people don't need to have enough information for an entire post (which would be awesome, by the way) before they talk about it, here's a thread to respond to.

I’d particularly like to encourage people who have successfully bridged inferential distances to reply with where people started and how the conversation went. Please. An example:

In my Origins of Science class (basically a philosophy class), a conversation like this took place a few days ago (paraphrased). I'm not sure where the other people in the class started, but it got them to the point where they understood that you model reality in your head, but that beliefs are supposed to reflect reality, and you can't just make things up entirely.

W: “I feel like if people want to think God exists, then God exists for them, but if they want to ignore him then he won’t.”

me: “But that’s not how existing works. In our thoughts and opinions, we make a map of how the world exists. But the map is not the territory.”

W: “But it will still seem real to you...”

me: “Like, you can put whatever you want in your map, like dragons or whatever, but that doesn't actually put dragons in the territory. And now it's a failure of your map to reflect the territory, not of the territory to reflect your map.”

I could have said the last part better, but I definitely remember saying the last sentence.

The map-vs.-territory example seems to be really effective; a few people complimented it (and I admitted that I had read it somewhere else). I'm not sure how much it propagates into other beliefs; I'll update later with how much it seems to affect later conversations in the class.

Questions:

What basic rationality ideas are the most helpful to the most people?

Would it be helpful to try to categorize where people are inferentially? Is it possible?

Observations:

  • Inferential distance is a big deal; hence the first part of the title. I was able to explain transhumanism to someone in three minutes and have them totally agree. Other people don't even accept the possibility of AI, let alone that morality can exist without God.

  • It's much easier to convince people who know and like you.

  • There’s a difference between getting someone to ostensibly agree with something, and getting it to propagate through their beliefs.

  • People remember rationality best when they benefit from learning it, and when it applies to what they're specifically trying to do.

  • It’s difficult to give someone specific advice and have them pick up on the thought process that you used to come up with it.

  • Atheists seem to be pretty inferentially close to Singularity-cluster ideas.

  • From an earlier post I got a bunch of helpful feedback, particularly from Nornagest's and TheOtherDave's comments. The short versions:

    • Asking people to do specific things is creepy; teaching someone is much more effective if you just tell them the facts and let them do whatever they want with them.

    • People need specifics to actually do something, and it's hard to make them decide to do something substantially different from what they're already doing.

  • And from a comment by David Gerard: people need to want to learn or do something; it's hard to push them into it.

  • A lot of people are already doing useful things (research, building businesses), so it might be more helpful to make a bunch of them better at what they're doing than to get a few of them to do something entirely different.