Yes, I agree with you here. It looks to me like one of the core values of the community revolves around first evaluating each individual belief for its rationality, as opposed to evaluating the individual. And this seems very sensible to me—given how compartmentalized brains can be, and how rationality in one individual can vary over time.
Also, I am amused by the parallels between this core value and one of the core principles of computer security in the context of banking transactions. As Schneier describes it: evaluate the transaction, not the end user.
“first evaluating each individual belief for its rationality”

Again, no, I’m afraid you’re still making the same mistake. When you talk about evaluating a belief for its rationality, it still sounds like the mindset where you’re trying to work out if the necessary duty has been done to the rationality dance, so that a belief may be allowed in polite society. But our first concern should be: is this true? Does this map match the territory? And rationality is whatever systematically tends to improve the accuracy of your map. If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.
Now I am really confused. How can a belief be rational, and not true?
“Rational” describes a systematic process for arriving at true beliefs (or high-scoring probability distributions), so if you want true beliefs, you’ll think in the ways you believe are “rational”. But even in the very best case, the beliefs to which you assign 10% probability will turn out true one time out of ten.
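A minimal sketch of that calibration point: even a perfectly rational reasoner whose 10% estimates are exactly right will see those events come true roughly one time in ten. (The simulation below is purely illustrative; the reasoner and the 10% figure are just the example from the comment above.)

```python
import random

random.seed(0)

# Simulate a perfectly calibrated reasoner: events to which they
# assign 10% probability actually occur 10% of the time.
trials = 100_000
hits = sum(random.random() < 0.10 for _ in range(trials))

# The observed frequency converges on the stated probability,
# yet each individual "10% belief" is still usually false.
print(hits / trials)
```

So "rational" and "true" come apart at the level of single beliefs, even while the process is working exactly as intended.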
I didn’t see anything wrong with your original comment, though; it’s possible that Ciphergoth is trying to correct a mistake that isn’t there.
Well, if you got a very improbable result from a body of data, I could see this happening. For example, if most of a group given a medication improved significantly over the control group, but the sample size wasn’t large enough and the improvement was actually coincidence, then it would be rational to believe that it’s an effective medication… but it wouldn’t be true.
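The small-sample scenario above can be sketched with a quick simulation. Everything here is hypothetical: the drug does nothing (both arms improve identically on average), the arm size of 10 and the 0.5-point decision threshold are made-up numbers chosen only to show how often chance alone makes an ineffective drug look effective.

```python
import random

random.seed(1)

def trial(n=10):
    """One small study: both arms drawn from the same distribution,
    i.e. the drug has no real effect, with n patients per arm."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    # Naive rule: declare the drug effective if the treated arm's
    # mean improvement beats the control arm's by 0.5 points.
    return sum(treated) / n - sum(control) / n > 0.5

# Fraction of small studies where pure coincidence "shows" an effect.
false_positive_rate = sum(trial() for _ in range(10_000)) / 10_000
print(false_positive_rate)
```

With arms this small, a nontrivial fraction of studies crosses the threshold by luck alone, so believing the result can be rational on the evidence while the underlying claim is false.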
Then again, we should only have as much confidence in our proposition as there is evidence for it, so we’d include a whatever-percent possibility of coincidence. I didn’t see anything wrong with your original comment, either.
I’ve since learned that some people use the word “rationality” to mean “skills we use to win arguments and convince people to take our point of view to be true”, as opposed to the definition I’ve come to expect on this site (currently, on an overly poetic whim, I’d summarize it as “a meta-recursively applied, optimized, truth-finding and decision-making process”—actual definition here).