A consequence of availability bias: the less you understand what other people do, the easier “in principle” it seems.
By “in principle” I mean that you wouldn’t openly call it easy, because the work obviously requires specialized knowledge you don’t have, and cannot quickly acquire. But it seems like for people who already have the specialized knowledge, it should be relatively straightforward.
“It’s all just a big black box for me, but come on, it’s only one black box, don’t act like it’s hundreds of boxes.”
as opposed to:
“It’s a transparent box with hundreds of tiny gadgets. Of course it takes a lot of time to get it right!”
Isn’t this very closely related to the Dunning-Kruger effect?
Seems quite different to me. The D-K effect is “you overestimate how good you are at something”, while what I describe does not even involve a belief that you are good at the specific thing, only that (despite knowing nothing about it on the object level) you still have the meta-level ability to estimate how difficult it is “in principle”.
An example of what I meant would be a manager in an IT company, who has absolutely no idea what “fooing the bar” means, but feels quite certain that it shouldn’t take more than three days, including the analysis and testing.
While an example of D-K would be someone who writes horrible code, but believes himself to be the best programmer ever. (And after looking at other people’s code, keeps the original conviction, because the parts of the code he understood he obviously could have written too, and the parts he didn’t understand are obviously written wrong.)
I may be misunderstanding the connection with the availability heuristic, but it seems to me that you’re correct, and this is more closely related to the Dunning-Kruger effect.
What Dunning and Kruger observed was that someone who is sufficiently incompetent at a task is unable to distinguish competent work from incompetent work, and is more likely to overestimate the quality of their own work compared to others, even after being presented with the work of others who are more competent. What Viliam is describing is the inability to see what makes a task difficult, due to unfamiliarity with what is necessary to complete that task competently. I can see how this might relate to the availability heuristic; if I ask myself “how hard is it to be a nurse?”, I can readily think of encounters I’ve had with nurses where they did some (seemingly) simple task and moved on. This might give the illusion that the typical day at work for a nurse is a bunch of (seemingly) simple tasks with patients like me.
When we are talking about science, social science, history, or other similar disciplines, the disparity may arise from the fact that most introductory texts present the main ideas, which are already well understood and well articulated, whereas the actual researchers spend the vast majority of their time on poorly understood edge cases of those ideas. (It is almost tautological to say that the harder, less understood part of your work takes up more time, since ideas are usually called well understood precisely because they no longer require a lot of time and effort.)
See also: The apprenticeship of observation.
That looks like a useful way of decreasing this failure mode, which I suspect we LWers are especially susceptible to.
Does anyone know any useful measures (or better yet, heuristics) for how many gears are inside various black boxes? Kolmogorov complexity (as used in Solomonoff induction) is useless here, but I have this vague idea that
chaotic systems > weather forecasting > average physics simulation > simple math problems I can solve exactly by hand
However, that’s not really useful if I want to know how long it would take to do something novel. For example, I’m currently curious how long it would take to design a system for doing useful computation using more abstract functions instead of simple Boolean logic. Is this a weekend of tinkering for someone who knows what they are doing? Or a thousand people working for a thousand years?
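(To make “computation using functions instead of Boolean logic” concrete, here is a minimal sketch of the trivial end of that spectrum, Church-encoded booleans in Python; the names are only illustrative, and everything past this toy is exactly the black box I’m asking about.)

```python
# Church-encoded booleans: truth values are functions that choose
# between two alternatives, so "logic" happens with no bits at all.
TRUE = lambda a: lambda b: a    # select the first alternative
FALSE = lambda a: lambda b: b   # select the second alternative

NOT = lambda p: p(FALSE)(TRUE)
AND = lambda p: lambda q: p(q)(FALSE)
OR = lambda p: lambda q: p(TRUE)(q)

def as_bool(p):
    """Collapse a Church boolean to a native bool, for inspection only."""
    return p(True)(False)

assert as_bool(AND(TRUE)(FALSE)) is False
assert as_bool(OR(FALSE)(TRUE)) is True
assert as_bool(NOT(NOT(TRUE))) is True
```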
I could look at how long it took to design some of the first binary or ternary computers, and then nudge it up by an order of magnitude or two. However, I could also look at how long it takes to write a simple lambda-calculus compiler, and nudge up from that. So, that doesn’t narrow it down much.
How should I even go about making a Fermi approximation here? And, by extension, what generalized principles can we apply to estimate the size of such black boxes, without knowing about any specific gears inside?
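(The crudest mechanical move I can think of, sketched below with made-up placeholder numbers rather than real data: treat those two reference classes as a lower and an upper anchor and take their geometric mean, which averages the orders of magnitude instead of the raw day counts.)

```python
from math import log10, sqrt

# Placeholder anchors, not real data: one cheap and one expensive
# reference class for "build a computer on non-Boolean primitives".
low_anchor_days = 2          # toy lambda-calculus compiler: a weekend
high_anchor_days = 10 * 365  # early binary machines: a decade-scale project

# When anchors differ by orders of magnitude, the geometric mean is the
# usual Fermi compromise: it averages the exponents, not the raw values.
estimate_days = sqrt(low_anchor_days * high_anchor_days)

print(f"anchors span ~{log10(high_anchor_days / low_anchor_days):.1f} orders of magnitude")
print(f"geometric-mean guess: ~{estimate_days:.0f} days")
```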
Could you provide a simple linkage as to why the effect (the less I know, the easier it seems for the other, specialized person) is a consequence of the availability bias?
One connection I could draw between the effect and the availability bias is how easily the less specialized person recalls the specialized person’s successful resolutions. For example, a manager has numerous recollections of being presented with a problem and assigning it to a subordinate for a fix. The manager only sees the problem and the eventual fix, and none of the difficult roadblocks encountered by the workers; therefore the manager tends to underestimate the difficulty. I’m not sure if this is a connection you would agree with.
Indeed, “manager” was the example I had in mind while writing this.
I am not aware of any research; this is from personal experience. It seems to help when, instead of one big black box, you describe the work to the management as multiple black boxes. For example, instead of “building an artificial intelligence” you split it into “making a user interface for the AI”, “designing a database structure for the AI”, “testing the AI”, etc. Then, if the managers have an intuitive idea of how long an unknown piece of work takes (e.g. three days per black box), they agree that the more black boxes there are, the more days it will take.
(On the other hand, this can also go horribly wrong if the managers, by virtue of “knowing” what the original black box consists of, become overconfident in their understanding of the problem and start giving you specific suggestions, such as leaving out some of the proposed smaller black boxes because their labels don’t feel important. Or they invite an external expert to solve one of the smaller black boxes as a thing separate from the rest of the problem, based on the manager’s superficial understanding; so the expert produces something irrelevant to your project in exchange for half of your budget, which you now have to incorporate somehow and pretend to be grateful for.)