Tags like “stupid,” “bad at __”, “sloppy,” and so on, are ways of saying “You’re performing badly and I don’t know why.” Once you move it to “you’re performing badly because you have the wrong fingerings,” or “you’re performing badly because you don’t understand what a limit is,” it’s no longer a vague personal failing but a causal necessity. Anyone who never understood limits will flunk calculus. It’s not you, it’s the bug.
-celandine13 (Hat-tip to Frank Adamek. In addition, the linked article is so good that I had trouble picking something to put in rationality quotes; in other words, I recommend it.)
Another quote from the same piece, just before that para:
Once you start to think of mistakes as deterministic rather than random, as caused by “bugs” (incorrect understanding or incorrect procedures) rather than random inaccuracy, a curious thing happens.
You stop thinking of people as “stupid.”
I really, really like this. Thanks for posting it!
To elucidate the “bug model” a bit, consider “bugs” not in a single piece of software, but in a system. The following is drawn from my professional experience as a sysadmin for large-scale web applications, but I’ve tried to make it clear:
Suppose that you have a web server; or better yet, a cluster of servers. It’s providing some application to users — maybe a wiki, a forum, or a game. Most of the time when a query comes in from a user’s browser, the server gives a good response. However, sometimes it gives a bad response — maybe it’s unusually slow, or it times out, or it gives an error or an incomplete page instead of what the user was looking for.
It turns out that if you want to fix these sorts of problems, considering them merely to be “flakiness” and stopping there is not enough. You have to actually find out where the errors are coming from. “Flaky web server” is an aggregate property, not a simple one; specifically, it is the sum of all the different sources of error, slowness, and other badness — the disk contention; the database queries against un-indexed tables; the slowly failing NIC; the excess load from the web spider that’s copying the main page ten times a second looking for updates; the design choice of retrying failed transactions repeatedly, thus causing overload to make itself worse.
There is some fact of the matter about which error sources are causing more failures than others, too. If 1% of failed queries are caused by a failing NIC, but 90% are caused by transactions timing out due to slow database queries to an overloaded MySQL instance, then swapping the NIC out is not going to help much. And two flaky websites may be flaky for completely unrelated reasons.
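That diagnostic step can be sketched in a few lines. This is a minimal illustration, not real monitoring code: the log format and the cause labels ("db_query_timeout", "failing_nic", and so on) are invented for the example, echoing the 90%/1% split above.

```python
from collections import Counter

# Hypothetical failure log: (query_id, root_cause) pairs already
# extracted from server logs. The cause labels are made up here.
failures = (
    [("q%d" % i, "db_query_timeout") for i in range(90)]
    + [("q%d" % i, "failing_nic") for i in range(90, 91)]
    + [("q%d" % i, "disk_contention") for i in range(91, 100)]
)

# "Flaky" is the aggregate; the per-cause breakdown is what tells
# you where fixing effort will actually pay off.
by_cause = Counter(cause for _, cause in failures)
total = sum(by_cause.values())
for cause, n in by_cause.most_common():
    print(f"{cause}: {n}/{total} ({100 * n / total:.0f}%)")
```

With a breakdown like this in hand, it is obvious that swapping the NIC attacks 1% of the problem while the database queries account for 90% of it.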
Talking about how flaky or reliable a web server is lets you compare two web servers side-by-side and decide which one is preferable. But by itself it doesn’t let you fix anything. You can’t just point at the better web server and tell the worse one, “Why can’t you be more like your sister?” — or rather, you can, but it doesn’t work. The differences between the two do matter, but you have to know which differences matter in order to actually change things.
To bring the analogy back to human cognitive behavior: yes, you can probably measure which of two people is “more rational” than the other, or even “more intelligent”. But if someone wants to become more rational, they can’t do it by just trying to imitate an exemplary rational person — they have to actually diagnose what kinds of not-rational they are being, and find ways to correct them. There is no royal road to rationality; you have to actually struggle with (or work around) the specific bugs you have.
I agree with the general thrust of the essay (that broad, fuzzy labels like “bad at” are more useful when reduced to specific bug descriptions), but I’ll note that being aware of the specific bugs behind the mistakes people make does not stop me from thinking of them as stupid. If a person’s bugs are numerous, obtrusive, and difficult to correct, I’m going to end up thinking of them as stupid even if I can describe every bug.
I’ve been trying to change my impulse from “this person is an idiot!” to “this person is a noob,” because that term still carries the slightly useful predictive meaning of incompetence, but it also contains the idea that they have the potential to get better, rather than being inherently incompetent.
If the attacker, whenever he pulls a red ball out of the urn, puts it back and keeps pulling until he gets a blue ball, the Bayesian “rational mind” will conclude that the urn is entirely full of blue balls. … [But it’s] not approaching the most important job of teachers, which is to figure out why you’re getting things wrong—what conceptual misunderstanding, or what bad study habit, is behind your problems.
I don’t know much about Knewton, but it seems like it could address this—at least in some cases—and possibly better than teachers. Knewton and programs like it can keep track of success rates at the individual problem level, rather than the test or semester level. Such data could be used to identify the ‘bugs’ the author speaks of. All Knewton needs is knowledge of common ‘bugs’ and what problems they make students get wrong.
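A rough sketch of how per-problem data could feed such a diagnosis (the bug names, problem IDs, and the bug-to-problem map are all hypothetical, since I don’t know what Knewton actually does internally): given a catalogue of known conceptual bugs and the problems each one tends to break, score candidate bugs by how well they explain a student’s wrong answers.

```python
# Hypothetical catalogue mapping known conceptual bugs to the
# problem IDs that each bug tends to cause errors on.
BUG_SIGNATURES = {
    "misunderstands_limits": {"p3", "p7", "p9"},
    "sign_errors": {"p1", "p4"},
    "wrong_order_of_operations": {"p2", "p7"},
}

def diagnose(wrong_problems):
    """Rank candidate bugs by the fraction of their signature
    problems the student actually got wrong (a crude overlap score)."""
    scores = {}
    for bug, signature in BUG_SIGNATURES.items():
        scores[bug] = len(signature & wrong_problems) / len(signature)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# A student who missed exactly p3, p7, and p9 looks like a
# "limits" bug, not random inaccuracy.
ranked = diagnose({"p3", "p7", "p9"})
print(ranked)
```

Even this crude overlap score distinguishes “deterministic bug” from “random noise”: the limits bug explains all three misses, while the alternatives explain at most half of their own signatures.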
I read the article because of your post; thank you.
(obviously the grandparent deserves credit too).
Author used to post here as __, but I think her account’s been deleted.
ETA: removed username as I realized this comment kind of frustrates the presumable point of the account deletion in the first place.
I already upvoted this but want to emphasize that the article is really good.
My favorite sentence in it: “Are there no stupid people left?”
Excellent article, thank you for the link!
This article also recalls to mind http://lesswrong.com/lw/6ww/when_programs_have_to_work_lessons_from_nasa/, specifically the part where problems are considered to be the fault of the system, not of the people involved, and are treated by changing the system rather than by criticizing the people.