I understand the idea of the “bottom line” post a little differently. In my understanding it doesn’t address the process of arguing (i.e. constructing verbal expressions capable of persuading others). Building an effective argument obviously requires knowing the goal in advance. But the situation is different in private, when deciding what to believe. Quite commonly one selects one’s belief on a totally inadequate basis (the affect heuristic, political sympathies) and then reinforces this belief with arguments constructed with the belief in mind. This is what the post was warning against.
In the analogy with mathematical proofs: if a mathematician is reasonably certain that a theorem holds, he can go and try to find a proof. The proof is an argument presented to the public (here, other mathematicians) and should be clear, elegant and polished. But before that the mathematician must decide which theorem he should try to prove, and it would be a mistake to skip this phase, formulate a “random” theorem, and jump directly to constructing a proof. In mathematics it would be hard to succeed this way, since deciding whether a proof is correct is relatively easy and straightforward; outside mathematics, where such checking is hard, the bottom-line approach is usually both feasible and costly.
Your division of reasoning into three steps (guessing the conclusion, justifying it, checking the justification) may be inevitable for small irreducible ideas where you can go through the whole process in a few minutes. But most arguments are about complex hypotheses whose justification could be (and usually is) reduced to a chain of elementary inductive steps. For such hypotheses it is certainly feasible (psychologically or otherwise) to arrive at them gradually: guessing and rationalising the irreducible bits, which can be easily checked, but not the hypothesis as a whole.
Quite commonly one selects one’s belief on a totally inadequate basis (the affect heuristic, political sympathies) and then reinforces this belief with arguments constructed with the belief in mind. This is what the post was warning against.
That’s an application of the post’s argument, true. But as gRR notes, the literal meaning of the post discusses how we judge information presented to us by other people, which we receive complete with arguments and conclusions.
Once an argument is given in favor of a belief, and that argument has no logical faults, we must update our beliefs accordingly. If we are Bayesians, we do not have the option of ignoring a valid argument, even one deliberately built by someone trying to convince us, someone prone to biases, and so on.
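For concreteness, the update in question is just the textbook rule (my notation, not anything specific to the post: H is the belief, A is the observation that a sound argument for H has been produced):

$$P(H \mid A) = \frac{P(A \mid H)\,P(H)}{P(A \mid H)\,P(H) + P(A \mid \neg H)\,P(\neg H)}$$

Whenever such an argument is more likely to be produced if H is true than if it is false, i.e. P(A∣H) > P(A∣¬H), the posterior exceeds the prior; a Bayesian cannot observe A and leave P(H) unchanged. The catch, as below, is that a clever arguer produces arguments almost regardless of H, pushing that likelihood ratio toward 1.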
Yes, filtered evidence can in the extreme convince us of anything. Someone who controls all our incoming (true) information, and can filter but not modify it, can sometimes influence us to believe anything they want. But the answer is not to discard information selected by non-objective partisans of beliefs. That would make us discard almost all information we receive at second hand. Instead, the answer is to try to collect information from partisans of different conflicting ideas, and to do confirmations ourselves or via trusted associates. Eliezer’s followup post discusses this.
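To make the extreme case concrete, here is a minimal simulation (my own sketch, not from the thread): the coin is actually fair, a partisan forwards only the heads, and a naive Bayesian treats each forwarded head as an unfiltered observation.

```python
import random

random.seed(0)

P_HEADS_FAIR = 0.5    # P(heads | fair coin), the true process
P_HEADS_BIASED = 0.9  # P(heads | biased coin), the partisan's pet theory

# 200 flips of a genuinely fair coin; the partisan forwards heads only.
flips = [random.random() < P_HEADS_FAIR for _ in range(200)]
reported = [f for f in flips if f]

posterior = 0.5  # prior P(biased)
for _ in reported:
    # Naive update: likelihood ratio 0.9 / 0.5 for each reported head.
    num = posterior * P_HEADS_BIASED
    posterior = num / (num + (1 - posterior) * P_HEADS_FAIR)

print(f"{len(reported)} heads forwarded; naive P(biased) = {posterior:.6f}")
# -> driven arbitrarily close to 1, on true but filtered reports
```

An updater who models the reporting process notes that a forwarded head occurs with probability 1 under either hypothesis, so each report by itself is zero evidence; only the ratio of forwarded reports to total flips would discriminate, and the total is exactly what the partisan withholds. That is the sense in which the fix is collecting from conflicting partisans (who withhold different things) rather than discarding partisan reports altogether.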
But as gRR notes, the literal meaning of the post discusses how we judge information presented to us by other people, which we receive complete with arguments and conclusions.
The bottom line of EY’s post says:
This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don’t like. For it is indeed a clever argument to say “My opponent is a clever arguer”, if you are paying yourself to retain whatever beliefs you had at the start.
So I don’t think the post literally means what you think it means.
That part was apparently added a bit later, when he posted What Evidence Filtered Evidence.
It cautions people against interpreting the entire preceding post in this literal way. Presumably it was added because people did interpret it so, and gRR’s reading is not novel or unique.
Of course this reading is wrong, both as it applies to reality and as a description of Eliezer’s beliefs. But it is right as it applies to the post: it is a plausible literal meaning. It wasn’t the intention of the writer, but if some people understand it this way, then it’s the text’s fault (so to speak), not the readers’. There is no “true” literal meaning of a text other than what people understand from it.
I understand the idea of the “bottom line” post a little differently. In my understanding it doesn’t address the process of arguing (i.e. constructing verbal expressions capable of persuading others).
I have a general objection against this interpretation: it throws away the literal meaning of EY’s post.
But there is also a pragmatic difference, about where to direct the focus of attention when one tries to de-bias one’s reasoning. With the three steps as I stated them, I know that I cannot really fix step 1, beyond trying to catch myself before I commit, as lincolnquirk suggested. Step 2 is comparatively harmless, so it is at step 3 that I must mount the real defense.
But most arguments are about complex hypotheses whose justification could be (and usually is) reduced to a chain of elementary inductive steps. For such hypotheses it is certainly feasible (psychologically or otherwise) to arrive at them gradually: guessing and rationalising the irreducible bits, which can be easily checked, but not the hypothesis as a whole.
Could you mention specific examples of such complex hypotheses? I mean, where it would make sense to know the conclusion in advance, and yet the conclusion would not be reachable in a single intuitive leap. It seems contradictory.
I have a general objection against this interpretation: it throws away the literal meaning of EY’s post.
The literal meaning of the post, if any, is: no amount of carefully crafted post-hoc justification is going to make your conclusion correct. I don’t think your interpretation is closer to it than mine.
Could you mention specific examples of such complex hypotheses? I mean, where it would make sense to know the conclusion in advance, and yet the conclusion would not be reachable in a single intuitive leap.
I am not sure what you mean by “making sense to know the conclusion in advance” and “reachable in a single intuitive leap”. I am thinking of questions whose valid justification is not irreducible; either it is a chain of reasoning or it consists of independent pieces of evidence. For example:
Does God exist? Does global warming happen? Why did the non-avian dinosaurs become extinct? Is the millionth decimal digit of pi 8? Who is the best candidate for the upcoming presidential elections in Nicaragua?
Most questions I can think of now are like that, so there is probably some misunderstanding.
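Of those examples, the pi question is the one whose justification is a purely mechanical chain of checkable steps; here is a sketch of the check (my illustration, assuming the mpmath library is available, and counting digits after the decimal point):

```python
from mpmath import mp

# Compute pi to a bit over a million decimal places; the extra digits
# guard the millionth place against rounding. Takes a few seconds.
mp.dps = 1_000_010
pi_str = mp.nstr(mp.pi, 1_000_005, strip_zeros=False)  # "3.1415926535..."

# Index 0 is '3' and index 1 is '.', so the n-th decimal digit is at n + 1.
millionth = pi_str[1_000_000 + 1]
print("millionth decimal digit of pi:", millionth)
print("equal to 8?", millionth == "8")
```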