Of course, the other side of the coin is the Dunning–Kruger effect, which causes us to overestimate our knowledge in areas where we're ignorant.
The illusion of explanatory depth (Rozenblit & Keil, 2002) seems like a particularly relevant example of that other side of the coin. If you ask people if they understand how something works, like a bicycle, a flush toilet, or a zipper, they’ll generally say that, yes, they understand it and could explain it. But if you ask them to draw a diagram and actually explain it, they’ll often get it wrong, and realize in the process that they don’t understand it as well as they thought they did. The main problem seems to be that people have a higher-level understanding of the object, and experience using it correctly, which they confuse with deeper knowledge of the mechanisms that make it work.
That doesn’t necessarily contradict AnnaSalamon’s point about stopping because of learned blankness. Seeing something stop working, and not immediately knowing why or how to fix it, might be enough to trigger the same lack of confidence that shows up after people try and fail to explain how something works. And fixing it often doesn’t require much depth of knowledge: even if you can’t fully explain the mechanism that makes the thing work, you might still know enough to identify and fix this particular problem, especially since you have the thing right there to look at, think about, and play around with.
Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521–562.
Wow. That article is pure gold: the kinds of mistaken explanations they talk about are exactly what I hear from people who give unhelpful explanations—they don’t see the limits of their own understanding of the phenomenon, and obviously can’t convey what they lack. So any explanation they give is extremely brittle: they can’t do much more than swap in other terms for the mysterious concepts they invoke.
(This is not to say such explanations are completely unhelpful—a partial explanation is better than none at all. But in that case, it’s preferable to clarify that your understanding is indeed limited and that you can’t connect it to a broader understanding of the world.)
This is why study groups work (if you use them properly). Explaining something to someone else forces you to think about it much more clearly, and being asked a question you can’t answer reveals holes in your knowledge.
I think that being able to explain something clearly is the mark of truly understanding it.
Edit—please disregard this post
And here’s an OB post on evidence limiting the scope and magnitude of that effect.
I would add that it seems common for the distribution of task difficulty to be skewed in various idiosyncratic ways—sufficiently common and sufficiently skewed that any uninformed generic intuition about the “noise” distribution is likely to be seriously wrong. E.g., in some fields there’s important low-hanging fruit: the first few hours of training and practice might get you 10–30% of the practical benefit of the hundreds of hours of training and practice that would be required to have a comprehensive understanding. In other fields there are large clusters of skills that become easy to learn once you learn some skill that is a shared prerequisite for the entire cluster.
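As a toy illustration of the low-hanging-fruit pattern (my own sketch, not from the comment or the paper), a power-law learning curve with a sublinear exponent front-loads the gains. The exponent value and hour counts below are assumptions chosen to match the 10–30% figure, not empirical estimates:

```python
# Toy model: practice under diminishing returns.
# benefit(t) proportional to t**alpha with alpha < 1 means early hours
# contribute a disproportionate share of the total benefit.

def benefit_fraction(hours, total_hours, alpha=0.35):
    """Fraction of total benefit earned after `hours` out of `total_hours`."""
    return (hours / total_hours) ** alpha

# Hypothetical numbers: 5 hours of a 500-hour path to comprehensive skill.
frac = benefit_fraction(5, 500)
print(f"{frac:.0%}")  # roughly 20% of the benefit from 1% of the hours
```

With these made-up parameters, 1% of the practice time yields about a fifth of the benefit, which is the kind of skew the comment describes; a different field could have alpha near 1 (no low-hanging fruit) or a thresholded curve (a shared prerequisite unlocking a cluster).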
Anna’s proposal for reducing blankness seems to be useful only if the noise is systematically biased toward underestimating our ability in unfamiliar tasks.
I think how likely someone is to underestimate their ability in an unfamiliar task (say, plumbing or fixing a computer) depends primarily on:
the competence of specialists
the difficulty of the task
the intelligence of the individual
It’s optimal for all of us to wall off some parts of our lives as magic. What we would gain by expending energy to explore and optimize would not outweigh the cost. The trick is realizing that most of our lives are walled off by our non-rational subsystems or just happenstance, and systematically checking these habits to see what can be improved.
At this point I’m not sure what the best way to approach this is.