Someone picks a questionable ontology for modeling biological organisms/neural nets—for concreteness, let’s say they try to represent some system as a decision tree.
Lo and behold, this poor choice of ontology doesn’t work very well: the modeler needs a huge amount of complexity to represent the real-world system to decent precision. For instance, maybe they need a ridiculously large decision tree or random forest to approximate a neural net.
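As a rough illustration of the blow-up (a minimal sketch assuming scikit-learn; the particular net, target, and depths are made up for illustration): distill a small neural net into decision trees of increasing depth, and watch how many leaves it takes to chase down the approximation error.

```python
# Sketch: approximating a small neural net with axis-aligned decision trees.
# The smooth, rotated structure the net learns is a poor fit for tree splits,
# so leaf counts blow up much faster than the error comes down.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20_000, 10))

# The "real system": a small neural net fit to a smooth target.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X, np.sin(X @ rng.normal(size=10)))
y = net.predict(X)

# Distill the net into trees of increasing depth.
for depth in (4, 8, 12, 16):
    tree = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X, y)
    mse = np.mean((tree.predict(X) - y) ** 2)
    print(f"depth={depth:2d}  leaves={tree.get_n_leaves():6d}  mse={mse:.4f}")
```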
This drove me crazy in cognitive science. There was a huge wave of Bayesian models of cognition in the late 2000s and 2010s, partially motivated by their simplicity and generality (you can formulate any learning task this way). But then they swept all of the complexity into the priors and likelihood! “Look at our simple model of word learning” (except that the prior was quietly produced by a complicated, handcrafted tree structure specific to this one problem). This got a bit better over time, but there was still a significant amount of complex, hardcoded structure behind the scenes that was never really justified.
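A caricature of the pattern, assuming a size-principle word-learning setup of the kind common in that literature (the hypothesis names, extensions, and prior weights below are invented for illustration): the Bayesian update itself is two lines, while all the explanatory work hides in the handcrafted hypothesis space and prior.

```python
# The "model" is just posterior ∝ prior × likelihood. The real modeling
# work lives in these hand-built, task-specific structures:
hypotheses = {
    "dalmatian": {"dalmatian"},
    "dog":       {"dalmatian", "poodle", "lab"},
    "animal":    {"dalmatian", "poodle", "lab", "cat", "bird"},
}
prior = {"dalmatian": 0.2, "dog": 0.5, "animal": 0.3}  # tuned by hand

def posterior(examples):
    """The 'simple' part: size-principle likelihood plus Bayes' rule."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in examples):
            # Strong sampling: each example drawn uniformly from the extension.
            scores[h] = prior[h] * (1 / len(extension)) ** len(examples)
        else:
            scores[h] = 0.0
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

print(posterior(["dalmatian", "dalmatian", "dalmatian"]))
# Three dalmatian examples concentrate mass on the narrow hypothesis, but
# only because the taxonomy and prior weights were baked in beforehand.
```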