Well, yes. But they are less likely than their conjuncts in a specific and mathematical way, and we have good evidence that people don’t multiply their uncertainties the way they should—it appears that they simply take the average (!!!).
Charitably, I count eight conjuncts in the presented argument. If he had on average 80% confidence in each premise (raising awareness of free-market virtues will overcome status quo bias, an increase in the free market in the first world will translate to an increase in the free market in the third world; these don’t feel like four-in-five-timers), then his plan, as stated, has at most a 17% chance of success (0.8⁸ ≈ 0.168). But Jim feels like he has an 80% chance.
Your response is true in a trivial way, because 17% is far higher than the chance Zeus returns, and far higher again than Zeus and Jesus returning to give each other a cosmic high-five. But we can spot those very unlikely premises—and it’s only the very unlikely premises that are less likely than a long list of conjunctions. We don’t think like that—we don’t see our true chances.
So, if you restrict the space of premises and arguments to what humans mostly deal with in their practical lives, “conjunctions are inherently unlikely” is an excellent rule of thumb until you can sit down and do the math.
What you write is true. But I have seen people go the other way—hear about some problem (such as the Conjunction Fallacy), then start over-compensating for it (for example, by always rating conjunctions as lower probability). Since the post as written wasn’t entirely clear about the limits, I was just pointing out that automatically down-rating conjunctions is not always advisable.
I never had any problems remembering to multiply the probabilities once it was pointed out, partly because I had already had experience at calculating complicated reliability problems, which are structurally almost identical.
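The structural identity is worth spelling out: in a series reliability problem the system works only if every component works, so component reliabilities multiply, exactly as premise probabilities do in a conjunctive argument. An illustrative sketch with hypothetical component values (not from the comment):

```python
# Series system: it works only if EVERY component works,
# so reliabilities multiply, just like conjunctive premises.
component_reliabilities = [0.99, 0.95, 0.90, 0.98]  # hypothetical values

system_reliability = 1.0
for r in component_reliabilities:
    system_reliability *= r

# Same form as P(argument) = product of P(premise_i):
# four individually decent parts already drop the system to about 83%.
print(f"system reliability: {system_reliability:.3f}")
```

Anyone who has done these calculations has already internalized that a chain of individually reliable parts can still be an unreliable whole, which is the conjunction lesson in different clothing.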
That is a good grounding for avoiding the conjunction fallacy! Even half a second spent deciding whether your argument is ‘reliable’, according to methods you already have for estimating reliability, might stop you from motivated cognition in the direction of “my argument is right”. Makes me wonder what other real-life problems have a similar enough structure to common biases to help with instrumental rationality.