What you write is true. But I have seen people go the other way—hear about some problem (such as the Conjunction Fallacy), then start over-compensating for it (for example, by always rating conjunctions as lower probability). Since the post as written wasn’t entirely clear about the limits, I was just pointing out that automatically down-rating conjunctions is not always advisable.
I never had any problems remembering to multiply the probabilities once it was pointed out, partly because I had already had experience at calculating complicated reliability problems, which are structurally almost identical.
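The structural similarity mentioned above can be made concrete: the reliability of a series system and the probability of a conjunction are computed the same way, by multiplying the individual probabilities (assuming independence). A minimal sketch, with illustrative numbers not taken from the comment:

```python
# Both a series system and a conjunctive claim succeed only if EVERY
# part succeeds, so both multiply component probabilities together.
# (Independence is assumed; numbers below are made up for illustration.)

def conjunction_probability(probs):
    """P(A and B and ...) = product of P(each), assuming independence."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Reliability framing: a series system works only if every component works.
component_reliabilities = [0.95, 0.90, 0.99]
system_reliability = conjunction_probability(component_reliabilities)

# Conjunction framing: a compound claim is true only if every sub-claim is true.
claim_probs = [0.8, 0.7]
joint = conjunction_probability(claim_probs)

# Every factor is below 1, so each added component or sub-claim can only
# lower the total -- the conjunction can never beat its weakest part.
print(system_reliability)
print(joint <= min(claim_probs))
```

The same habit that keeps a reliability engineer honest (each added component drags the product down) is exactly the habit that guards against rating a conjunction as more probable than its parts.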
That is a good grounding for avoiding the conjunction fallacy! Even half a second spent deciding whether your argument is ‘reliable’ according to the methods you have for estimating reliability might stop you from engaging in motivated cognition in the direction of “my argument is right”. Makes me wonder what other real-life problems have a structure similar enough to common biases to help with instrumental rationality.