An incorrect inference may, under rare circumstances, lead to a useful and coincidentally correct conclusion that would be hard to reach without it. (Of course, this requires being very lucky as well: people make incorrect inferences all the time, but there are very few well-known examples of one working out.)
For example, viewing heat as a liquid might result in designing a stove that transfers heat to a pot very well, despite heat not being a liquid.
An incorrect inference can be considered basically random. I think this can be imitated more efficiently and more reliably in many cases by taking an evolutionary-algorithm approach to design: in short, making random design choices and testing them extremely quickly. This also has the advantage that instead of needing to be lucky enough to hit the right inference first, it can keep going until a suitable design is achieved.
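A minimal sketch of that evolutionary-algorithm idea, to make it concrete. Everything here is hypothetical: the fitness function stands in for "test a design extremely quickly" (it scores a toy stove design by how close its parameters land to some ideal values the algorithm never sees), and the loop just mutates the best random guesses so far instead of needing a lucky inference up front.

```python
import random

def fitness(design):
    # Hypothetical stand-in for rapidly testing a stove design:
    # score how close (contact_area, thickness) are to ideal values
    # that the algorithm itself never gets to see. Higher is better.
    contact_area, thickness = design
    return -((contact_area - 0.8) ** 2 + (thickness - 0.2) ** 2)

def evolve(generations=200, pop_size=20, mutation=0.1, seed=0):
    rng = random.Random(seed)
    # Start from purely random design choices.
    population = [(rng.random(), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half of the designs unchanged...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ...and refill the population with mutated copies of survivors.
        # The loop can keep going until a suitable design is reached,
        # rather than relying on hitting the right answer first try.
        children = [
            tuple(g + rng.gauss(0, mutation) for g in rng.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Because the best designs survive each generation unchanged, the top fitness never gets worse, so the search converges toward the ideal even though every individual step is random.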
I don’t really know if this counts as an intrinsic weakness...