[Y]ou are not, in general, safe if you reflect on yourself and achieve internal coherence. The Anti-Inductors, who compute that the probability of the coin coming up heads on the next occasion decreases each time they see the coin come up heads, may defend their anti-induction by saying: “But it’s never worked before!”
I think many of us have considered these ideas before. Eliezer Yudkowsky certainly has.
The fact of the matter is: either you are so crazy that you will be incapable of developing a rationality that works, or you aren’t. If you are, you will lose. If you aren’t, you can probably judge the rationality you have against the rational arguments you have, and use them to develop a better rationality.
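To make the anti-inductor concrete, here is a toy sketch; the update rule is my own construction for illustration, not anything from Eliezer. Every head it observes makes it more confident the next flip will be tails, so a coin biased toward heads only deepens its error:

```python
import random

def anti_inductive_p_heads(flips):
    """Toy anti-inductor: each observed head shaves down the
    probability it assigns to heads on the next flip."""
    p = 0.5
    for flip in flips:
        if flip == "H":
            p *= 0.9                   # heads observed: heads now seems less likely
        else:
            p = 1.0 - (1.0 - p) * 0.9  # tails observed: heads seems more likely
    return p

random.seed(0)
# A coin biased 80/20 toward heads.
history = ["H" if random.random() < 0.8 else "T" for _ in range(20)]

# The more heads it sees, the more certain of tails it becomes, and it can
# defend that forecast the same way: "But it's never worked before!"
print(history.count("H"), round(anti_inductive_p_heads(history), 4))
```

Reflecting on its own track record only confirms the rule, which is the point of the quote: internal coherence alone doesn’t save you.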
Just had a look at what Eliezer said there. I think it’s not quite the same thing as what I’m talking about here. It’s true that if you have a system of rationality in your mind, it can lead you, in a rational way, to improve what you have over time. I agree this works if you have the required intelligence and don’t start with an entirely pathological system of rationality.
Let me give a slightly more concrete example. I had a conversation some time ago regarding homeopathy, the branch of alternative medicine that uses ingredients diluted by a factor of 10 at each step; in this case, 120 times in succession, giving an overall dilution of 1 in 10^120. Since there are only about 10^80 atoms in the entire observable universe, we can be all but certain that there was not a single molecule of the active ingredient in the homeopathic bottle that this person swore was highly effective.
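To put a number on it, here is a quick back-of-the-envelope sketch; the one-mole starting dose is my assumption for illustration, nothing more:

```python
# Expected molecules of active ingredient left after 120 successive
# 1:10 dilutions (an overall dilution of 1 in 10^120).
AVOGADRO = 6.02e23        # molecules per mole
starting_moles = 1.0      # assumed starting dose of the active substance
dilution = 10.0 ** -120   # 120 tenfold dilutions in succession

expected_molecules = starting_moles * AVOGADRO * dilution
print(f"{expected_molecules:.1e} molecules")  # ~6.0e-97: effectively zero
```

At that rate you would need on the order of 10^96 bottles before expecting to find even one surviving molecule.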
Pointing this out had no effect, as you might expect; in homeopathic doctrine, the power of a treatment is actually said to rise as it becomes more dilute. The person absolutely believed in the power of that remedy, even though they agreed with my argument that there were in fact no molecules of the original substance in the bottle. I don’t suppose talking about placebos and hypnotic suggestion would have made any difference either; in fact, I believe I did mention the placebo effect. No difference at all.
We’ve all come across stuff like this. My point is that the applicability of rationality is exactly what is at issue in arguments like this: I say it applies; they say that in some way it doesn’t. My argument stops me from buying the homeopathic remedy, but it is almost irrelevant to the other person, because for them rationality itself is what is in question.
Wait, are you asking how to convince an irrational human being to be rational?
Sort of. And we all know the answer to that question is that it’s often completely impossible.
Some of the examples in the article are matters where human hardware tends to lead us in the wrong direction. But others, particularly the Albanian case, are to a large extent failures of intent. Good-quality rationality is a long-term investment that many people choose not to make, and the result is vulnerability to believing impossible things. Irrationality is often a choice, and I think that, in the long term, our failure to be rational springs as much from choosing not to be as it does from failures in execution when sincerely trying to be. You can compensate, to a degree, for our hardware-based inclinations to see patterns where none exist, or to stick with what we have. But nothing compensates for choosing the irrational.
We can all see that irrationality is expensive, to varying degrees depending on what you do. But that only convinces those of us who are already convinced and don’t need the lesson. So what was the article intending to do?
So yes—sort of.
Not to sound insufficiently pessimistic, but I don’t think that’s been rigorously established. It doesn’t seem impossible to raise the sanity waterline; it seems more likely that we have inferential distances to cross, and armor built to protect false beliefs that we must pierce.
I like this comment. Given the historical improvements that have already come about, it can’t be unreasonable to look for more.