Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X. So this should be impossible, except by deliberately including arguments that are, to the best of your knowledge, flawed. I might be able to imitate a homeopath, but I can’t imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath.
Yes, a lot of people extol the virtues of doing this. But a lot of people aren’t rational, and don’t believe X on the basis of arguments in the first place. In that case, producing good arguments against X is logically possible, and may even be helpful.
(There’s another possibility: where you are weighing things and the other side weighs them differently from you. But that’s technically just a subcase—you still think the other side’s weights are incorrect—and I still couldn’t use it to imitate a creationist or flat-earther.)
Being a believer in X inherently means, for a rationalist, that you think there are no good arguments against X.
Huh? You are proposing a very stark, black-and-white, all-or-nothing position. Recall that for a rationalist a belief has a probability associated with it. It doesn’t have to be anywhere near 1. Moreover, a rationalist can “believe” (say, with probability > 90%) something against which good arguments exist. It just so happens that the arguments pro are better and more numerous than the arguments con. That does not mean that the arguments con are not good or do not exist.
And, of course, you should not think yourself omniscient. One of the benefits of steelmanning is that it acquaints you with the counterarguments. Would you know what they are if you didn’t look?
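The probabilistic picture above can be made concrete. A minimal sketch (all function names and evidence values are hypothetical, chosen only for illustration): if we score each argument as an independent log-odds update, a belief can end up above 90% probability even though real, nonzero counterarguments were counted against it.

```python
import math

def posterior(prior, log_odds_updates):
    """Combine a prior probability with independent pieces of evidence,
    each expressed as a log-odds update (positive = pro, negative = con)."""
    logit = math.log(prior / (1 - prior)) + sum(log_odds_updates)
    return 1 / (1 + math.exp(-logit))

# Hypothetical weights: three arguments for X, two genuine arguments against.
evidence = [2.0, 1.5, 1.0, -0.5, -0.8]
p = posterior(0.5, evidence)  # well above 0.9 despite the con arguments
```

The con arguments are real (they pull the probability down), yet the believer still ends up confident, which is the distinction being drawn: "the arguments pro outweigh the arguments con" is not the same claim as "no good arguments con exist."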
I might be able to imitate a homeopath, but I can’t imitate a rational, educated, homeopath, because if I thought there was such a thing I would be a homeopath.
Great point!
I guess the point of the ideological Turing test (ITT) is that even when you disagree with your opponents, you have the ability to see their (wrong) model of the world exactly as they hold it, as opposed to a strawman.
For example, if your opponent believes that 2+2=5, you pass the ITT by saying “2+2=5”, but you fail it by saying “2+2=7”. From your perspective, both results are “equally wrong”, but from their perspective, the former is correct, while the latter is plainly wrong.
In other words, the goal of the ITT isn’t to develop a “different, but equally correct” map of the territory (because if you believed the opponent’s map was correct, it would also become your map), but to develop a correct map of your opponent’s map (as opposed to an incorrect map of your opponent’s map).
So, on some level, while passing the ITT, you know you are saying something false or misleading, even if only by taking correct arguments and assigning them incorrect weights. But the goal isn’t to derive a correct “alternative truth”; it is to have a good model of your opponent’s mind.
No good arguments, or the weight of the arguments for X is greater than the weight of the arguments against X?
You know, I did mention weighing arguments in my post.
No, http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
In high-level debating at the debating world championships, the participants are generally able to give good arguments for both sides of every issue.