Maybe an information-theoretic viewpoint would be useful.
The problem with such qualitative models is that they don’t communicate a well-defined algorithm but instead define a distribution over a set of algorithms, with the exact distribution dependent on the reader’s model of the world (as you have been seeing). It’s not a limitation of you or of the model per se; it’s a limitation of language. Language lets us communicate complex concepts compactly, on the assumption that the party we are communicating with has a model of the world similar to ours. This comes at a price: the more complicated the concept, the vaguer it has to be made during transmission, and the more sensitive it becomes to small differences between world models.
I’d guess the most common mistake people make when trying to internalize qualitative models is that they don’t ask about them! Language is interactive, and when something is vague we seek clarification. That’s what you should do, and do far more often than you probably are currently. Ultimately, you only have a finite amount of information to go on, and a linear increase in information yields an exponential decrease in your probability space: each additional bit can at best halve the set of interpretations still consistent with what you know. Every qualitative model comes with hidden ‘baggage’, which is the creator’s world model. Without knowing what that model is, there’s little you can do. You can either guess it or ask about it, and asking gives you far more information.
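To make the linear-vs-exponential point concrete, here is a toy sketch (my own illustration, not from the comment above; the starting count of 1024 interpretations is a hypothetical). Each clarifying yes/no question is worth at most one bit, and an ideally chosen question halves the set of readings still consistent with the answers so far:

```python
# Hypothetical: 1024 equally plausible readings of a vague qualitative model.
interpretations = 1024

# Each yes/no answer carries at most one bit of information;
# an ideal question halves the consistent set, so the space
# shrinks exponentially in the number of answers received.
for answers in range(11):
    remaining = interpretations // 2 ** answers
    print(f"{answers:2d} answers -> {remaining:4d} readings remain")
```

After ten one-bit answers only a single reading survives (log2(1024) = 10): the information you gather grows linearly with the questions asked, while the ambiguity collapses exponentially, which is why asking beats guessing.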
It’s difficult when the creators are dead or otherwise inaccessible (like busy hedge fundies). The next best thing is students who were mentored by the creator of the paradigm and are considered experts, but then the same check has to be applied to them: can the ideas actually be discussed with them? Overall I like the approach. Failing all that, it might still be possible to find journals, biographies, or interviews with the originator of the viewpoint, as these are likely to contain some form of inquiry.