The problem is not with ‘rationalization’. Many mathematical proofs started from an unfounded belief that some conjecture is true, yet are perfectly valid, because the belief was ‘rationalized’ using solid logic.
The problem is faulty logic; if your logic is even a little bit off on every inference step, you can steer the chain of reasoning towards any outcome. When you are using faulty logic and rationalizing, you are steering towards an outcome you want. When you are using faulty logic but genuinely trying to work out what is true, the errors are unbiased and accumulate like a random walk, which gives a much smaller total error over time.
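The random-walk point can be made concrete with a small simulation (a sketch of my own, not from the discussion; the noise and bias values are illustrative assumptions): unbiased per-step errors partially cancel, so total drift grows roughly like the square root of the number of steps, while a small systematic bias grows linearly with the number of steps.

```python
import random

def reasoning_chain(steps, bias, noise=0.1, seed=0):
    """Accumulate per-step inference error over a chain of reasoning.

    bias  - systematic push toward a desired conclusion (rationalizing)
    noise - magnitude of random error on each step (honest but fallible)
    """
    rng = random.Random(seed)
    error = 0.0
    for _ in range(steps):
        error += bias + rng.uniform(-noise, noise)
    return error

# Honest reasoning: errors are unbiased, so they partially cancel
# (total drift grows roughly like sqrt(steps)).
honest = [abs(reasoning_chain(100, bias=0.0, seed=s)) for s in range(200)]

# Rationalizing: the same noise plus a small systematic push
# (total drift grows roughly linearly in steps, near 0.05 * 100 = 5).
motivated = [abs(reasoning_chain(100, bias=0.05, seed=s)) for s in range(200)]

print(sum(honest) / len(honest))
print(sum(motivated) / len(motivated))
```

Running this, the average drift of the biased chains comes out roughly an order of magnitude larger than the unbiased ones, which is the sense in which the honest reasoner's error is "much smaller over time" even with equally faulty individual steps.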
The other issue: people who are rationalizing are typically not the slightest bit interested in catching themselves rationalizing.
edit: to clarify. You may have the goal of winning a debate, not care what the true answer is, and still come up with an entirely valid proof, albeit for bad reasons. You may also have the goal of winning a debate, be wrong, and make up some fallacious argument, neglect to update your belief, et cetera. That happens when you are not restricting yourself to arguments that are correct. Or you may have the goal of winning a debate, be wrong, and fail to make an argument at all, because you are successfully restricting yourself to correct arguments and won’t use fallacies to argue for what you believe.
In mathematics, the reasoning is fairly reliable, and it doesn’t make the slightest bit of difference whether you arrived at a proof because you wanted to know if the conjecture is really true, because you wanted to humiliate some colleague you hate, or because you didn’t want to lose a debate and admit you were wrong. With unreliable reasoning, on the other hand, you produce mistakes whether you are rationalizing or not, though when rationalizing you tend to make larger mistakes, or become a mistake factory. Still, you may start off with the good intention of finding out whether some conjecture is true and end up making a faulty proof, or you may start off with a very strong, very ill-founded belief about the conjecture, get lucky enough to be right, and find a valid proof. You can’t always trust the arguments you arrived at without rationalizing more than the ones you arrived at while rationalizing.
Why is this getting down voted?
I’ve noticed my ability to predict my comment karma has gone down in the last few months; the weirdest example was some of my comments fluctuating by ten to fifteen points within a single day, prompting accusations that I had a bunch of sockpuppets and was manipulating the votes. It’s weird, and I don’t have any good explanation for why LW would suddenly start voting in strange ways. As the site gets bigger, the median intelligence tends to drop; does anyone know if there’s been a surge in registrations lately? It’s true I’ve seen an abnormal amount of activity on the “Welcome to Less Wrong” post. I feel like we might be attracting more people who aren’t high-IQ, autistic, schizotypal, or OCPD, which would be sad.
I generally treat the votes as a mixed indication of clarity and of how much people like or dislike the idea.
I didn’t expect any negatives on this post, though; when they appeared I assumed something wasn’t clear and added the edit. I still think my point is entirely valid: good logic does not let you make a fallacious argument even if you want to, and bad logic often leads you to wrong conclusions even when you are genuinely interested in knowing the answer; furthermore, the rationalization process doesn’t restrict itself to assertions that are false.
Perhaps the concern is that the reduction to a logical question, i.e. to the set of premises and axioms, was the faulty part? After all, a valid argument from false premises doesn’t help anyone, but I’m sure you know that.
Because someone’s rationalizing something maybe ;)