Regarding the “status quo bias” example with the utility company, I think it’s fallacious, or at least misleading. For realistic, typical humans with all their intellectual limitations, it is rational to favor the status quo when someone offers to change a deal that has so far worked tolerably well in ways that, for all you know, could have all sorts of unintended consequences. (Not to mention the swindles that might be hiding in the fine print.)
Moreover, if the utility company had actually started selling different deals rather than just conducting a survey about hypotheticals, it’s not like typical folks would have stubbornly held to unfavorable deals for years. What happens in such situations is that a clever minority figures out that the new deal is indeed more favorable and switches—and word about their good experience quickly spreads and soon becomes conventional wisdom, which everyone else then follows.
This is how human society works normally—what you call “status quo bias” is a highly beneficial heuristic that prevents people from ruining their lives. It makes them stick to what’s worked well so far instead of embarking on attractive-looking but potentially dangerous innovations. When this mechanism breaks down, all kinds of collective madness can follow (speculative bubbles and Ponzi schemes being the prime examples). Generally, it is completely rational to favor a tolerably good status quo even if some calculation tells you that an unconventional change might be beneficial, unless you’re very confident in your competence to do that calculation, or you know of other people’s experiences that have confirmed it.
Replying to old post...
I would suggest something even stronger: the people exhibiting the “status quo bias” in the utility example are correct. The fact that a deal has worked out tolerably well in the real world is information, and it indicates that the deal has none of the hidden gotchas that the alternative might have. Bayesianism demands considering this information.
Where this gets confusing is the comparison between the two groups of customers, each starting out with the opposite plan. However, the customers don’t have the same information—one group knows that one plan is tolerable, and the other group knows that the other plan is tolerable. Given this difference in information, it is rational for each group to stick with the plan they have. It is true, of course, that the two groups cannot both be better off than each other, but that only means that a decision which is probabilistically best for you can still turn out unlucky—each customer rationally concluded that the unfamiliar plan had a higher chance of hiding a gotcha than the plan they knew, and that conclusion does not become irrational just because the other plan turns out to have no gotcha after all.
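As a toy illustration of this Bayesian point (all the probabilities below are invented for the sketch, not taken from the original survey): a customer who has used a plan without trouble for a year should assign it a much lower probability of hiding a gotcha than an untried plan that started from the same prior.

```python
# Hedged sketch with made-up numbers: why "my plan has worked so far"
# is Bayesian evidence, not mere bias.
prior_gotcha = 0.30       # assumed prior: any given plan hides a gotcha
p_clean_if_gotcha = 0.20  # a gotcha can stay hidden through a quiet year
p_clean_if_fine = 1.00    # a gotcha-free plan never shows one

# Posterior probability that *your* plan has a gotcha,
# given a trouble-free year of using it (Bayes' rule):
posterior = (prior_gotcha * p_clean_if_gotcha) / (
    prior_gotcha * p_clean_if_gotcha + (1 - prior_gotcha) * p_clean_if_fine
)
print(round(posterior, 3))  # ~0.079, versus the untried plan's 0.30
```

Under these assumed numbers each group rationally sees roughly a brand-new plan’s 30% gotcha risk on the other side of the fence and only about 8% on its own, so both groups sticking with what they have is exactly what Bayes recommends.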
I think the utility company example is fine. Lots of biases can be described as resulting from the use of a pretty good heuristic which leads people astray in that particular case, but that’s still a cost of imperfect thinking. And this was a case where the alternative to the status quo was relatively simple—it was defined precisely and differed on only a small number of easily understandable dimensions—so concerns about swindles, unintended consequences, or limited understanding of complex changes shouldn’t play a big role here.
In real life, social processes might eventually overcome the status quo bias, but there’s still a lot of waste in the interim which the clever (aka more rational) minority would be able to avoid. Actually, in this case the change to utility coverage would probably have to be made for a whole neighborhood at once, so I don’t think that your model of social change would work.
I’d say the utility company example is, in an important sense, the mirror image of the Albanian example. In both cases, we have someone approaching the common folk with a certain air of authority and offering some sort of deal that’s supposed to sound great. In the first case, people reject a favorable deal (though only in the hypothetical) due to the status quo bias, and in the second case, people enthusiastically embrace what turns out to be a pernicious scam. At least superficially, this seems like the same kind of bias, only pointed in opposite directions.
Now, while I can think of situations where the status quo bias has been disastrous for some people, and even situations where this bias might lead to great disasters and existential risks, I’d say that in the huge majority of situations, the reluctance to embrace changes that are supposed to improve what already works tolerably well is an important force that prevents people from falling for various sorts of potentially disastrous scams like those that happened in Albania. This is probably even more true when it comes to the mass appeal of radical politics. Yes, it would be great if people’s intellects were powerful and unbiased enough to analyze every idea with pristine objectivity and crystal analytical clarity, but since humans are what they are, I’m much happier if they’re harder to convince to change things that are already functioning adequately.
Therefore, I’m inclined to believe that a considerable dose of status quo bias is optimal from a purely consequentialist perspective. Situations where the status quo bias is gravely dangerous are far from nonexistent, but still exceptional, whereas when it comes to the opposite sort of danger, every human society is sitting on a powder keg all of the time.
That is also a very good point against the utility company example.
I think I’ll remove it, unless somebody persuasively argues in its favor in a few hours or so.
Why don’t you keep it, but add a note?
I ended up adding a brief note linking to these comments.
Please keep it. It’s a great example. Of course the status quo bias is a useful heuristic sometimes, but that doesn’t mean consumers shouldn’t be aware of it and think things through.
Some things that look like biases are not, when examined from a situational perspective. Taleb cites the example of hyperbolic discounting (HD).
In HD, people apply a much higher discount rate between, e.g., today and tomorrow than between one year from now and one year and one day from now. Taleb argues that this can be rational if there is a chance the person may not pay up at all, i.e., credit risk. A person is much more likely to pay up now than tomorrow, because they are here today, but tomorrow they could be spending the money in Rio. In contrast, the difference in credit risk between 365 and 366 days out is negligible.
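This argument can be made concrete with a toy model (all numbers here are made up for illustration): suppose there is some chance q that a counterparty who defers payment at all simply never pays, plus a tiny daily default hazard thereafter. The implied one-day discount is then steep between today and tomorrow but negligible a year out, which is exactly the hyperbolic-looking pattern.

```python
# Hedged sketch with made-up numbers: credit risk producing
# hyperbolic-looking discounting. Assume a one-shot risk q that a
# counterparty who defers payment at all never pays, plus a small
# constant daily default hazard h after that.
q = 0.20    # chance the promiser vanishes the moment payment is deferred
h = 0.0002  # small additional risk of default per day of further delay

def expected_value(days):
    """Expected fraction of the promised amount actually received."""
    if days == 0:
        return 1.0                    # paid on the spot: no credit risk
    return (1 - q) * (1 - h) ** days  # one-shot risk, then daily hazard

def one_day_discount(t):
    """Implied discount factor for waiting one extra day, from day t to t+1."""
    return expected_value(t + 1) / expected_value(t)

print(one_day_discount(0))    # today -> tomorrow: steep (about 0.80)
print(one_day_discount(365))  # day 365 -> 366: nearly 1 (about 0.9998)
```

The agent here is a perfectly rational expected-value maximizer with a single consistent model of default risk, yet their behavior reproduces the today/tomorrow vs. year/year-plus-a-day asymmetry usually attributed to hyperbolic discounting.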
There’s another reason that it is reasonable to say no if a utility company offers to improve your service for more money.
Namely, we already know that they do not care about your service, only about their profits, and about the service only insofar as it helps those profits. So it is quite likely that there will be no real improvement: your service will remain approximately the same, and the company rightly expects that you will not keep careful track and will assume it has improved. Or, if it does improve, it will not improve as much as they said, because they know you will not be keeping close track, and that even if you do, you will not have much recourse.
This is even more the case if some other company offers to replace your service, saying that you will get better service at a lower price. In Italy, utility companies send people door to door offering exactly this to everyone. If you accept, you will get worse service at a higher price, and you will have no legal recourse, because what they said about the price was technically true of the base price, but false once all the calculations are done.
someone offers to change a deal that has so far worked tolerably well in ways that, for all you know, could have all sorts of unintended consequences
This exact thing happened to me last year. I signed up for a great new deal, and now it has blown up in my face. The cost of safely switching from a fairly satisfactory status quo—the R&D cost, so to speak—can be high, especially when you are dealing with crooks and charlatans.
I agree.
Hayek, the knowledge-problem man himself, makes the argument* that it is most often best to follow the norm: the norm is the product of a great deal of calculation that would be expensive for you to redo.
I think it was Thoreau who wrote a story about a man who, each day on waking, would remember nothing from the day before, and who would then have to rediscover the use of a chair and a pencil. This man could only get so far in life.
The rational man knows that he can only get so far in life if he is always re-calculating instead of working off of what others have done.
One of the most important skills to develop is the skill of knowing when you need to re-calculate.
*One reference would be the first part of Law, Legislation and Liberty (vol. 1).
Anyone know what story? It sounds interesting. Also see the film Memento.