Inspiring Rationalists

I am concerned with increasing the number of rational, altruistic, smart people in the world. By my standards there are already a reasonable number of smart, altruistic (in far mode) people, so a good first step is improving their rationality. To keep “rationality” grounded, I should be a little more precise about what I mean.

There are people who care about the needs of others, while they are in far mode at least, and who are good at solving hard problems. I believe many of them fail to apply their problem-solving capabilities effectively to some important questions: “What do I believe about the world, and why?” “What do I value?” “Based on those beliefs and values, what should I spend my time doing?” Worse, they sometimes arrive at answers to these questions which are seriously affected by various biases which (I believe) they would eventually recognize as biases. This behavior is what I mean by irrationality.

Suppose that someone has successfully created a sequence of readings which contains everything you might need to know in order to become reasonably rational. The accessibility of these readings is not in itself adequate to make smart, altruistic people automatically rational. Based on my experience with humans, I strongly suspect that personal engagement is ultimately necessary, at least if the smart people in question are living in the same culture I am (and the best way to change culture is to change people). What would this personal engagement need to accomplish, so that the mere accessibility of written wisdom could do the rest?

Here is my take on the requirements.

1) They need to believe that thinking about their beliefs, values, thought process, and decision-making process is worthwhile. They need to have a strong sense that self-improvement along these axes is valuable; they need to believe that more is possible.

2) They need to be able to act on the basis of abstract reasoning. Most people seem to have a very firm barrier between the part of them that learns things and the part that makes decisions; no matter how much the learning part believes that the decision-making part should listen to it, that belief does no good unless the decision-making part actually listens. This seems to be a rather serious problem in general.

3) They need to admit the possibility that some beliefs they hold strongly are wrong. Beliefs in which a person places complete confidence seem to be able to defeat an arbitrary static argument in the overwhelming majority of cases. As the belief holder gets smarter, the range of truths (or probable truths) such a belief can defeat broadens, and the surety with which they will be defeated increases.

4) They need to be able to understand moderately complex arguments. There is a basic level of understanding which can be used to bootstrap to much higher levels given only written text; getting to that basic level requires something else. Describing Bayes’ theorem, for example, to someone who lacks this basic level of understanding seems to be extremely difficult. In general I think this problem is relatively insignificant compared to (1) or (2).
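For concreteness, the theorem in question is a single line; the difficulty lies in appreciating what the line means and when it applies, not in its length (the numbers in the worked instance below are purely illustrative):

```latex
% Bayes' theorem, with the denominator expanded by the law of total probability:
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

% An illustrative instance: evidence that is 90% reliable in both directions,
% bearing on a hypothesis with a 1% prior, still leaves the hypothesis unlikely:
P(H \mid E) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.083
```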

As for resolving them:

1) The obvious solution is a good exemplar, popularly disseminated and presented in a way that makes a clear case. It’s a little troubling that the best instantiation right now is fan fiction (to my knowledge). Much better would be real role models; for example, being able to point to one or more small groups of rationalists successfully pursuing some extremely visible indicator of success.

Another effective way to instill this belief in a particular person may be to point out a manifestly and seriously negative consequence of some failure of their rationality. I am not aware of any really compelling, widely applicable examples (deconversion may work excellently for people who have gone from being quite religious to being quite areligious).

A slightly underhanded approach which might be more effective is to deliberately and repeatedly create situations in which a failure of rationality has manifestly negative consequences. The best instantiations I know are simple tests which reveal cognitive biases, but versions more plausibly analogous to everyday experience may be much more powerful. For example, I would be very interested to understand how people’s responses to simple tests for confirmation bias etc. change when the way the test is presented changes: I can just tell you about the experience, which appears to be completely worthless; I can play a game against you, to which most people I know respond, after losing, by not caring at all, despite apparently having tried; I don’t know what happens if I play the game with volunteer test subjects who are paid $10 if they succeed. I am slightly more optimistic about this approach than the last one because I think less time has been spent thinking about it.
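As a minimal sketch of the sort of game I have in mind, here is a Wason-style “2-4-6” rule-discovery task in Python, the classic laboratory demonstration of confirmation bias; the hidden rule and the probe budget are illustrative choices, and the framing and stakes variations above would simply wrap this core loop:

```python
# A Wason-style "2-4-6" rule-discovery game: the player probes triples,
# gets yes/no feedback, and then guesses the hidden rule. Confirmation
# bias shows up as probing only triples that fit one's current hypothesis.
# The rule and probe budget here are illustrative, not canonical.

def hidden_rule(a, b, c):
    """The experimenter's secret rule: any strictly increasing triple."""
    return a < b < c

def play(max_probes=10):
    print("The triple 2 4 6 fits my hidden rule.")
    print("Enter triples to test hypotheses, or 'guess' to guess the rule.")
    for _ in range(max_probes):
        raw = input("> ").strip()
        if raw.lower() == "guess":
            break
        try:
            a, b, c = (int(x) for x in raw.split())
        except ValueError:
            print("Please enter three numbers separated by spaces.")
            continue
        print("fits" if hidden_rule(a, b, c) else "does not fit")
    answer = input("What is the rule? ")
    # Players who only probe confirming instances (8 10 12, 20 40 60, ...)
    # typically announce an over-narrow rule such as "ascending by twos".
    print(f"You said: {answer!r}. The rule was: strictly increasing.")

if __name__ == "__main__":
    play()
```

The interesting manipulation is not the game itself but the wrapper around it: describing it, playing it casually, or paying out on success.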

2)