But this is just unfair. You’re judging rationality according to rational arguments, and so OF COURSE you end up finding that rationality is sooo much better.
I, on the other hand, judge my irrationality on an irrational basis, and find that actually it’s much better to be irrational.
What’s the difference? Of course in response to this question you’re bound to come up with even more rational arguments to be rational, but I don’t see how this gets you any further forward.
I, on the other hand, being irrational, don’t have to argue about this if I don’t want to. What kind of sense would it make to argue rationally about the advantages of irrationality anyway? Surely this is a contradiction in terms? But the nice thing about being irrational is that I can irrationally use rationality from time to time anyway, and then just stop and go back to being irrational again when irrationality is clearly more inspired.
OK—so I’m messing about. But you can’t prove rationality is more rational by rational argument. Well, you can, but it’s circular, as you’re assuming the very thing you’re trying to prove. It’s an example of trying to pick yourself up by your own bootstraps.
A rationality-resister doesn’t mean the same thing by “rationality” as a rationalist does. To them it’s just something nerds invoke to humiliate them in debates, not a set of techniques for constructing a map that fits the territory. They probably don’t even have a concept for that.
Therefore: maybe you can turn rationality-resisters into rationalists if you teach them rationality but don’t call it rationality and don’t attack their beliefs. (Eliezer’s sequences taught rationality without attacking, for example, religion too often, but a lot of people were probably put off from reading them by the word “rationality” alone.)
If a person were completely irrational, sure, this post wouldn’t convince them. (Of course, they also wouldn’t live for long.) But it never tried to convince completely irrational people; it tried to convince people who were already kinda-rational that investing effort in further improving their rationality would be worth it.
I guess in the context for which it’s intended, it works OK. It’s a book introduction, after all. Most irrational people don’t really have much of a map of where and how they are being irrational—and in fact commonly consider themselves to be very rational people, when it comes to it. (I strongly suspect this is like driving, where most people consider themselves to be above average: being incompetent also robs you of awareness of your own incompetence...) The common reaction would probably be to nod along, thinking how terrible it is that all these people are so irrational, and enjoy the read. For an introduction, that’s good enough.
I think many of us have considered these ideas before. Eliezer Yudkowsky certainly has:

[Y]ou are not, in general, safe if you reflect on yourself and achieve internal coherence. The Anti-Inductors who compute that the probability of the coin coming up heads on the next occasion decreases each time they see the coin come up heads, may defend their anti-induction by saying: “But it’s never worked before!”
The fact of the matter is: either you are so crazy that you will be incapable of developing a rationality that works … or you aren’t. If you are, you will lose. If you aren’t, you can probably judge the rationality you have according to the rational arguments you have, and use them to develop a better rationality.
Just had a look at what Eliezer said there. I think it’s not quite the same thing as what I’m talking about here. It’s true that having a system of rationality in your mind leads you, in a rational way, to improve what you have over time. I agree this works if you have the required intelligence and don’t start with an entirely pathological system of rationality.
Let me give a slightly more concrete example. I had a conversation some time ago regarding homeopathy—that branch of alternative medicine that uses ingredients which have been diluted down by a factor of 10, in this case 120 times in succession. This results in an overall dilution of 1 in 10^120. Since there are only about 10^80 atoms in the entire observable universe, this provides a very high degree of certainty that there is none of the active ingredient in the homeopathic bottle that this person swore was highly effective.
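A quick back-of-the-envelope check of that arithmetic in Python, assuming (hypothetically, and generously) that the preparation started from a full mole of the active ingredient:

```python
# Expected molecules of the active ingredient left after 120 successive
# tenfold dilutions. Assumed starting sample: one mole of the substance.

AVOGADRO = 6.022e23                        # molecules in one mole
DILUTION_STEPS = 120                       # tenfold dilutions, as in the example
OVERALL_DILUTION = 10.0 ** DILUTION_STEPS  # overall 1 in 10^120

expected_molecules = AVOGADRO / OVERALL_DILUTION
print(f"Expected molecules remaining: {expected_molecules:.3e}")
# -> about 6e-97: the chance that even one molecule of the original
#    substance survives is vanishingly small.
```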
Pointing this out had no effect, as you might expect. In fact, the power of the treatment is said to rise as it becomes more dilute. The person absolutely believed in the power of that remedy, even though they agreed with my argument that in fact there were no molecules of the original substance in the bottle. I don’t suppose talking about placebos and hypnotic suggestion would have made any difference either—in fact I believe I did mention the placebo effect. No difference at all.
We’ve all come across stuff like this. My point is that the applicability of rationality is what is at issue in arguments like this. I say it is—they say that in some way it isn’t. My argument stops me from buying the homeopathic remedy, but it is almost irrelevant to the other person because rationality itself is what is at issue.
Wait, are you asking how to convince an irrational human being to be rational?
Sort of. And we all know the answer to that question is that it’s often completely impossible.
Some of the examples in the article are matters where human hardware tends to lead us in the wrong direction. But others—particularly the Albanian case—are to a large extent failures of intent. Good-quality rationality is a long-term investment that many people choose not to make. The result is vulnerability to believing impossible things. Irrationality is often a choice, and I think that, long term, our failure to be rational springs as much from choosing not to be as from failures in execution when sincerely trying to be. You can compensate, to a degree, for our hardware-based inclinations to see patterns where none exist, or to stick with what we have. But nothing compensates for choosing the irrational.
We can all see that irrationality is expensive, to varying degrees depending on what you do. But this is only convincing to those of us who are already convinced and don’t need to be told. So what was the article intending to do?
So yes—sort of.
Not to sound insufficiently pessimistic, but I don’t think that’s been rigorously established. It doesn’t seem impossible to raise the sanity waterline—it seems more likely that we have inferential distances to cross and armors built to protect false beliefs we must pierce.
I like this comment. Given the historical improvements that have already come about, it can’t be unreasonable to look for more.
I removed the upvote that someone had placed on this because the above reasoning is intuitive but wrong. We can conclude that it is wrong for the same reason and with the same confidence that we can conclude “all X are Y; B is Y; therefore, B is X” is wrong.
To evaluate claims we must use the best tools of reasoning that are available to us. This applies whether or not we are evaluating claims about reasoning itself.
Splendid. Here is a perfect example of a rational reason to be rational. I did say someone would be bound to come up with that.
I don’t understand your “all X are Y” comment—I understand the logical fallacy, but not why you think it’s relevant.
You said, “To evaluate claims, we must use the best tools of reasoning that are available to us.” You might, but not everybody does. Why should we always use reasoning? Not everybody does. And some of us mean not to...
I am still messing about—I do believe that rationality is the right thing to do. But some are deadly serious when they say that your use of reason blinds you to the important stuff. My point is that you cannot prove them wrong, for the very applicability of proof to such matters is what is at issue. Obviously they won’t accept that your proof is a good answer. They are right—it is only relevant insofar as argument itself is relevant, which is exactly what you are disagreeing about!
Irrationally, I decided I’d upvote you because your reasoning is just as wrong as mine.
Duncan: see the links in this comment.
To other readers: Can anyone think of the Eliezer post that is on the tip of my tongue? I can’t find the link without recalling the keywords!
Mainly
http://lesswrong.com/lw/k1/no_one_can_exempt_you_from_rationalitys_laws/
but these also seem relevant:
http://lesswrong.com/lw/gr/the_modesty_argument/
http://lesswrong.com/lw/h9/tsuyoku_vs_the_egalitarian_instinct/
It is not easy to escape this problem.
Let me start by trying to summarise Eliezer’s argument—perhaps using slightly different terminology. If I have a given stream of sensory experience, what is the correct interpretation of it? I would say it is the one that allows you to compress the stream (together with the theory by which you explain it) down to the smallest possible size. You can then use this as a means of predicting what the next bit of the sensory stream might be.
This has quite a few nice features—not least of which is that if someone else comes up with a different interpretation of the stream, you can simply weigh it against yours; if theirs weighs more, that makes it statistically more unlikely, and statistically more likely to give incorrect predictions as well. And weighing compressed data sets is mathematics, not a matter of opinion. You can reasonably say that their heavier ‘interpretation’ is adding information that you know—from your compression—is not in the stream. Where did this extra information come from? It’s just wrong.
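A minimal sketch of this weighing in Python. The predictive models, the theory-size figure, and the data stream are all hypothetical stand-ins (true minimum description length is uncomputable), so this only illustrates the shape of the comparison:

```python
import math

def codelength_bits(data: str, predict) -> float:
    """Bits to encode the data given a theory: -log2 of the probability
    the theory assigns to each successive symbol."""
    return sum(-math.log2(predict(data[:i], data[i])) for i in range(len(data)))

data = "HT" * 20  # an observed stream: strictly alternating coin flips

def alternating(prefix, symbol):
    """Theory A: 'the stream alternates H, T' (nearly certain each time)."""
    expected = "H" if len(prefix) % 2 == 0 else "T"
    return 0.99 if symbol == expected else 0.01

def fair_coin(prefix, symbol):
    """Theory B: 'each flip is an independent fair coin' (explains nothing)."""
    return 0.5

THEORY_BITS = 160  # assumed cost of stating either theory (~20 bytes)

print(THEORY_BITS + codelength_bits(data, alternating))  # ~160.6 bits total
print(THEORY_BITS + codelength_bits(data, fair_coin))    # ~200.0 bits total
```

The lighter interpretation wins: the alternation theory pays a fraction of a bit per flip, while the “it’s all random” theory pays a full bit for every one.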
The next question is obvious—how could someone possibly consider themselves to be logically right to do something other than this? And here Eliezer is home and dry—this is the only logically right thing to do. Anyone doing something different is assuming the laws of statistics and reason do not apply to them. In all of this I’m with Eliezer all the way. It’s inductive reasoning, which means we only have expectations about what we have not yet seen, and not certainty. But at least we know that we can’t do better.
All of this is beyond question, and not my point. There is another major choice, which is to disbelieve rationality altogether, or regard it as of limited applicability. Throw it out—baby, bathwater, everything. And trust something else instead. And explicitly believe that this something else is NOT a purely rational means of truth, but something else. This gives you absolute license to impose any number of interpretations on the data. Of course the rationalists are blind—they tell me this data only tells me X, but I can see so much more in it than that! Two and two may be four, rationally, but in fact the whole is more than the sum of its parts. If someone proves that the extra stuff isn’t actually in the data, well fine—I knew that. These things aren’t knowable by the rational mind; one needs divine revelation, or natural intuition, or spiritual sensitivity... One comes to believe the world is fundamentally not rational, not rationally explainable, not rationally reducible, and certainly not statistically analysable. Forget all that stuff, and just trust your animal instincts.
And here you end at an impasse. Eliezer, in his article, states that he expects nature to give such irrational persons a few lessons in the school of hard knocks. They are living in a mindset full of confabulations and perceptual artifacts. The irrationalists would see him as living his life with his head under a bucket, restricted to what logic can tell him, and missing out on every other part of his humanity.
Who is right? Rationally, Eliezer. Irrationally, I have no idea—is there even such a thing as ‘right’ in this case? Would I even care? If one denies rationality, one can believe in anything, if ‘believe’ is the right word for it.
Just to be clear, I do not believe in extra-rational means of knowledge, and I believe rationality to be universally applicable. But I regard this as a belief, as any attempt at proof begs the question on one side or the other.
It’s not a “rationalist” thing, it’s a human thing. What are you evaluating the adequacy of rituals of cognition with? You’re already what you are, and that is what you use. There are no universally convincing arguments, and one accepts, say, Occam’s razor not because it’s “rational” but because we are the kind of agents that are compelled by this principle. Don’t distinguish between “rational” and “magical”; ask what moves you on reflection, what you believe will get you the results, and whether you believe the argument for why it does.
Links:
http://lesswrong.com/lw/rn/no_universally_compelling_arguments/
http://lesswrong.com/lw/hk/priors_as_mathematical_objects/
http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/
http://wiki.lesswrong.com/wiki/Futility_of_chaos
Believe it or not, Vladimir, Eliezer and I all understand the limitations of thought and the dependence on initial priors. Searching for “anti-inductive” will get you some hits. That we still claim that you need to use every resource you have at your disposal to evaluate your resources is significant.
Who is right? Rationally, Eliezer. Irrationally, I have no idea—is there even such a thing as ‘right’ in this case? Would I even care?

Eliezer, no, no.
If one denies rationality, one can believe in anything, if ‘believe’ is the right word for it.

There is one line of reasoning that I find is actually more effective on irrational people than rational ones. Argumentum ad baculum.
Where Recursive Justification Hits Bottom
You’re starting one premise back from where the post jumps off.
The post assumes as a premise that we have some goals, and there’s an empirical question about what cognitive strategies will best achieve those goals.
One strategy is to use our intuitions. We have some built-in software for doing quasi-Bayesian analysis, and our heuristics perform to an acceptable standard in many contexts.
The other strategy is to use more formal analysis. The post argues for this second strategy, pointing out predictable failure points for our intuition heuristics.
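For one concrete instance of such a failure point, here is a short Python sketch of the classic base-rate problem; the test numbers are hypothetical, chosen only for illustration:

```python
# P(condition | positive test) via Bayes' theorem, for a screening test
# with a low base rate. All figures are hypothetical.

def posterior(prior: float, true_pos: float, false_pos: float) -> float:
    """Posterior probability of the condition given a positive result."""
    evidence = true_pos * prior + false_pos * (1.0 - prior)
    return true_pos * prior / evidence

# 1% base rate, 80% sensitivity, 9.6% false-positive rate.
print(posterior(0.01, 0.80, 0.096))  # ~0.078
```

Intuition, anchored on the 80% sensitivity figure, typically guesses somewhere around 0.7 to 0.8; the formal analysis gives under 0.08, because the heuristic quietly drops the base rate.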
Which one maximises your chances of achieving your goals is an empirical question. It’s possible that as bad as our heuristics are, we’re just incapable of the metacognition to do it formally. Maybe all we end up doing is giving ourselves tools for rationalisation. Most of the people in this community don’t believe that, but it’s not a philosophical question.
You go one premise back, to the point where we’re choosing a strategy. Sure, you can reject rationality altogether. Then it doesn’t make sense to talk about deciding on a cognitive strategy. But if you accept as axiomatic that you have some goals, and you want to figure out how to further them, then we arrive at this interesting empirical question: what’s the most effective methodology for human decision making? It’s not a contradiction to say “I’m going to rationally decide that the best strategy is not to engage in this kind of metacognition, as all it does is lead me astray”.
I agree with nearly all of what you’re saying above, about heuristics, metacognition, and whether our rational mind is actually powerful enough to beat our instinctive one in practical situations.
I think the original poster was assuming we have some goals, and then pointing out the many disadvantages of choosing an irrational strategy to get to them.
Why would one choose an irrational strategy? Is it because we’re too stupid to know it was irrational? Sometimes. Perhaps we chose it knowing it was irrational? Sometimes that happens too.
In neither case is it that useful to hear that an irrational strategy isn’t as rational as a rational strategy, and can be rationally expected to have a worse outcome. Either they picked that strategy thinking it was rational, in which case that point is irrelevant, or they picked it thinking it was irrational, in which case they clearly don’t think that rationality is right when it says that rationality is always better.