Mental Models

Related: Fake Explanations, Guessing the Teacher’s Password, Understanding your understanding, many more

The mental model concept gets used so frequently and seems so intuitively obvious that I debated whether to bother writing this. But beyond the basic value that comes from unpacking our intuitions, it turns out that the concept allows a pretty impressive integration and streamlining of a wide range of mental phenomena.

The basics: a mental model falls under the heading of mental representations, ways that the brain stores information. It’s a specific sort of mental representation—one whose conceptual structure matches some corresponding structure in reality. In short, mental models are how we think something works.
A mental model begins life as something like an explanatory black box—a mere correlation between items, without any understanding of the mechanism at work. “Flick switch → lamp turns on”, for example. But a mere correlation doesn’t give you much of a clue as to what’s actually happening. If something stops working—if you hit the switch and the light doesn’t go on—you don’t have many clues as to why. This pre-model stage lacks the most important and useful portion: moving parts.
The real power of mental models comes from putting something inside this black box: moving parts that you can fiddle with to give you an idea of how something actually works. My basic lamp model will be improved quite a bit if I add the concept of a circuit to it, for instance. Once I’ve done that, the model becomes “Flick switch → switch completes circuit → electricity flows through lightbulb → lamp turns on”. Now if the light doesn’t go on, I can play with my model to see what might cause that, finding that either the circuit is broken or no electricity is being provided. We learn from models the same way we learn from reality: by moving the parts around and seeing the results.
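To make the contrast concrete, here’s a minimal sketch in Python (the names and structure are my own hypothetical choices, not anything from the original lamp example): the bare correlation can only report that the light is off, while the circuit model’s moving parts support diagnosis.

```python
from dataclasses import dataclass

@dataclass
class Lamp:
    switch_on: bool = False
    circuit_intact: bool = True   # moving part #1
    has_power: bool = True        # moving part #2

    def light_is_on(self) -> bool:
        # The bare correlation only knows "flick switch -> light".
        # The circuit model says the light is on iff every part cooperates.
        return self.switch_on and self.circuit_intact and self.has_power

    def diagnose(self) -> str:
        # Fiddle with the moving parts to explain a dark lamp.
        if not self.switch_on:
            return "switch is off"
        if not self.circuit_intact:
            return "circuit is broken"
        if not self.has_power:
            return "no electricity is being provided"
        return "the light should be on"

lamp = Lamp(switch_on=True, circuit_intact=False)
print(lamp.light_is_on())  # False
print(lamp.diagnose())     # "circuit is broken"
```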
It usually doesn’t take much detail, or many moving parts, for something to “click” and make sense. For instance, I had a difficult time grasping the essence of imaginary numbers until I saw them modeled as a rotation, which instantly made all the bits and pieces I had gleaned about them fall into place. A great deal of understanding rests in getting a few small details right. And once the basics are right, additional knowledge often changes very little. After you have the basic understanding of a circuit, learning about resistance and capacitors and alternating vs. direct current won’t change much about your lamp model. Because of this, the key to understanding something is often getting the basic model right—I suspect bursts of insight, a-ha moments, and magical “clicks” are often new mental models suddenly taking shape.
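For the curious, the rotation model amounts to this (a quick sketch using Python’s built-in complex numbers, not anything from the original post): multiplying by i rotates a point a quarter turn counterclockwise, so i² = −1 just says that two quarter turns leave you facing backwards.

```python
import cmath

# Multiplying by i rotates a point 90 degrees counterclockwise:
# i * (a + bi) = -b + ai, i.e. the point (a, b) moves to (-b, a).
z = 1 + 0j                 # start at the point (1, 0)
for _ in range(4):
    print(z)               # (1, 0) -> (0, 1) -> (-1, 0) -> (0, -1)
    z *= 1j                # each multiplication by i is a quarter turn
# Four quarter turns return to the start: i**4 == 1, and i**2 == -1
# is just "two quarter turns face you backwards".

# More generally, multiplying by a unit complex number e^(i*theta)
# is a pure rotation by theta; magnitudes are unchanged:
w = cmath.rect(1, cmath.pi / 2)        # unit number at 90 degrees
print(abs(w * (3 + 4j)), abs(3 + 4j))  # both ~5.0: rotation, no scaling
```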
Now let’s really open this concept up, and see what it can do. For starters, the reason analogies and metaphors are so damn useful (and can be so damn misleading) is that they’re little more than pre-assembled mental models for something. Diagrams provide their explanatory mechanism through essentially the same principle. Philip Johnson-Laird has formulated the processes of induction and deduction in terms of adjustments made to mental models. And building from the scenario concept used by Kahneman and Tversky, he’s formulated a method of probabilistic thinking with them as well. Much of the work on heuristics and biases, in fact, either dovetails very nicely with mental models or can be explained by them directly.
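To give a flavor of the probabilistic side (a rough sketch of the general idea, not Johnson-Laird’s actual formalism): enumerate the explicit models consistent with the premises, and read the probability of a conclusion off the proportion of those models in which it holds.

```python
from itertools import product

def models(premise, variables):
    # Enumerate the explicit truth assignments ("mental models")
    # consistent with a premise.
    consistent = []
    for values in product([True, False], repeat=len(variables)):
        model = dict(zip(variables, values))
        if premise(model):
            consistent.append(model)
    return consistent

# Premise: "there is a circle or a triangle (or both)"
consistent = models(lambda m: m["circle"] or m["triangle"],
                    ["circle", "triangle"])

# The probability of "there is a circle" is read off as the proportion
# of equipossible models of the premise in which a circle appears:
p = sum(m["circle"] for m in consistent) / len(consistent)
print(len(consistent), p)  # 3 models, p = 2/3
```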
For example, the brain seems to have a strong bias towards modifying an existing model vs. replacing it with a new one. Often in real life “updating” means “changing your underlying model”, and the fact that we prefer not to causes us to make systematic errors. You see this writ large all the time with (among other things) people endlessly tweaking a theory that fails to explain the data, rather than throwing it out. Ptolemy’s epicycles would be the prototypical example. Confirmation bias, various attribution biases, and various data-neglect biases can all be interpreted as favoring the models we already have.
The brain’s favorite method for building models is to take parts from something else it already understands. Our best and most extensive experience is with objects moving in the physical world, so our models are often expressed in terms of physical objects moving about. These borrowed models are, essentially, acting as intuition pumps. Of course, all models are wrong, but some are useful: as Dennett points out, converting problems into instances of something more familiar often allows us to solve them much more easily.
One of the major design flaws of using mental models (aside from the biases they induce) is that they tend to feel like understanding regardless of how many moving parts they have. So, for example, if the teacher asks “why does fire burn” and I answer “because it’s hot”, it feels like a real explanation, even if there aren’t any moving parts that might explain what ‘burn’ or ‘hot’ actually mean. I suspect a bias towards short causal chains may be involved here. Of course, if the model stops working, or you find yourself needing to explain yourself, it becomes quite obvious that you do not, in fact, have the understanding you thought you did. And unpacking what turns out to be an empty box is a fantastic way to trigger cognitive dissonance, which can have the nasty effect of entrenching your flawed model even deeper.
So how can we maximize our use of mental models? Johnson-Laird tells us that “any factor that makes it easier for individuals to flesh out explicit models of the premises should improve performance.” Making clear what the moving parts are, and couching them in terms of something already understood, will help us build a better model, and a better model is equivalent to a better understanding. Again, this is not particularly groundbreaking; any problem-solving technique will likely arrive at the same insights.
Ultimately, the mental model concept is itself just a model. I’m not familiar enough with the psychological literature to know if mental models are really the correct way to explain mental functions, or if it’s merely another in a long list of similar concepts—belief, schema, framework, cognitive map, etc. But the fact that it’s intuitively obvious and that it explains a large swath of brain function (without being too general) suggests that it’s a useful concept to carry around, so I’ll continue to do so until I have evidence that it’s wrong.
-
Sources:
Mental models and probabilistic thinking—Johnson-Laird (1994)
Mental models concepts for system dynamics research—Doyle and Ford (1998)
The Design of Everyday Things—Norman
Using concept maps to reveal conceptual typologies—Hay and Kinchin (2006)