There are special cases. Maybe the links are all equally strong to high precision. But that’s unusual. Variance (statistical fluctuations) is usual. Perhaps there is a bell curve of link strengths. Having two links approximately tied for weakest is more realistic, though still uncommon.
There is another case which your argument neglects, which can make weakest-link reasoning highly inaccurate, and which is less of a special case than a tie in link-strength.
The way you are reasoning about systems of interconnected ideas is conjunctive: every individual thing needs to be true. But some things are disjunctive: some one thing needs to be true. (Of course there are even more exotic logical connectives, such as implication or XOR, which are also used in everyday reasoning. But for now it will do to consider only conjunction and disjunction.)
A conjunction of a number of statements is—at most—as strong as its weakest element, as you suggest. However, a disjunction of a number of statements is—at worst—as strong as its strongest element.
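These two bounds can be checked numerically. Here is a minimal sketch (the probabilities are illustrative, and the statements are assumed independent for simplicity):

```python
# Probabilities of three statements (illustrative values; independence assumed).
p = [0.9, 0.7, 0.8]

# A conjunction is at most as strong as its weakest element...
p_conjunction = 1.0
for p_i in p:
    p_conjunction *= p_i             # 0.9 * 0.7 * 0.8 = 0.504
assert p_conjunction <= min(p)

# ...while a disjunction is at least as strong as its strongest element.
p_all_false = 1.0
for p_i in p:
    p_all_false *= (1.0 - p_i)       # probability that every statement is false
p_disjunction = 1.0 - p_all_false    # 1 - 0.1 * 0.3 * 0.2 = 0.994
assert p_disjunction >= max(p)
```

Note how far the chain (0.504) falls below its weakest link, and how far the disjunction (0.994) rises above its strongest one.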
Following the venerated method of multiple working hypotheses, then, we are well-advised to come up with as many hypotheses as we can to explain the data. Doing this has many advantages, but one worth mentioning is that it increases the probability that one of our ideas is right.
Since we don’t want to just believe everything, we then have to somehow weigh the different hypotheses against each other. Our method of weighing hypotheses should ideally have the property that, if the correct hypothesis is in our collection, we would eventually converge to it. (Having established that, we might ask to converge as quickly as possible; and we could also name many other desirable properties of such a weighting scheme.)
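One weighting scheme with exactly this convergence property is Bayesian updating: multiply each hypothesis's weight by the likelihood it assigns to each observation, then renormalize. A toy sketch, with made-up hypotheses about a coin's bias and deterministic toy data:

```python
# Three working hypotheses about a coin's bias toward heads
# (the names and numbers here are illustrative, not from the original post).
hypotheses = {"fair": 0.5, "biased": 0.8, "very_biased": 0.95}
weights = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

# Toy data: 160 heads and 40 tails, as if the true bias were 0.8.
flips = [True] * 160 + [False] * 40

for heads in flips:
    # Multiply each hypothesis's weight by the likelihood it assigns
    # to this observation...
    for h, bias in hypotheses.items():
        weights[h] *= bias if heads else (1.0 - bias)
    # ...then renormalize so the weights remain a probability distribution.
    total = sum(weights.values())
    weights = {h: w / total for h, w in weights.items()}

# After enough evidence, nearly all the weight sits on the correct hypothesis.
best = max(weights, key=weights.get)
```

With this data, `best` comes out as `"biased"` with weight very close to 1. The weights act like a disjunctive pool: the collection as a whole is as strong as its best member, and the scheme finds that member.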
Concerning such weighting schemes, you comment:
Don’t update your beliefs using evidence, increasing your confidence in some ideas, when the evidence deals with non-bottlenecks. In other words, don’t add more plausibility to an idea when you improve a sub-component that already had excess capacity. Don’t evaluate the quality of all the components of an idea and combine them into a weighted average which comes out higher when there’s more excess capacity for non-bottlenecks.
I can agree that a weighted average is not what is called for in a conjunctive case. It is closer to correct in a disjunctive case. But let us focus on your claim that we should ignore non-bottlenecks. You already praised ideas for having excess capacity, i.e., being more accurate than needed for a given application. But now you are criticizing the practice of weighing evidence for being too accurate.
I previously agreed that a conjunction has strength at most that of its weakest element. But a probabilistic account gives us more accuracy than that, allowing us to compute a more precise strength (if we have need of it). And similarly, a disjunction has strength at least that of its strongest element—but a probabilistic account gives us more accuracy than that, if we need it. Quoting you:
Excess capacity means the idea is more than adequate to accomplish its purpose. The idea is more powerful than necessary to do its job. This lets it deal with variance, and may help with using the idea for other jobs.
Perhaps the excess accuracy in probability theory makes it more powerful than necessary to do its job? Perhaps this helps it deal with variance? Perhaps it helps the idea apply for other jobs than the one it was meant for?
I replied at https://www.lesswrong.com/posts/qEkX5Ffxw5pb3JKzD/comment-replies-for-chains-bottlenecks-and-optimization