Knightian Uncertainty and Ambiguity Aversion: Motivation
Recently, I found myself in a conversation with someone advocating the use of Knightian uncertainty. I admitted that I’ve never found the concept compelling. We went back and forth for a little while. His points were crisp and well-supported, my objections were vague. We didn’t have enough time to reach consensus, but it became clear that I needed to research his viewpoint and flesh out my objections before being justified in my rejection.
So I did. This is the first in a short series of posts during which I explore what it means for an agent to reason using Knightian uncertainty.
In this first post, I’ll present a number of arguments claiming that Bayesian reasoning fails to capture certain desirable behavior. I’ll discuss a proposed solution, maximization of minimum expected utility, which is advocated by my friend and others.
In the second post, I’ll discuss some more general arguments against Bayesian reasoning as an idealization of human reasoning. What role should “unknown unknowns” play in a bounded Bayesian reasoner? Is “Knightian uncertainty” a useful concept that is not captured by the Bayesian framework?
In the third post, I’ll discuss the proposed solution: can rational agents display ambiguity aversion? What does it mean to have a rational agent that does not maximize expected utility, maximizing “minimum expected utility” instead?
In the final post, I’ll apply these insights to humans and articulate my objections to ambiguity aversion in general. I’ll conclude that while it is possible for agents to be ambiguity-averse, ambiguity aversion in humans is a bias. The maximization of minimum expected utility may be a useful concept for explaining how humans actually act, but probably isn’t how you should act.
The following is a stylized conversation that I had at the Stanford workshop on Logic, Rationality, and Intelligent Interaction. I’ll anonymize my friend as ‘Sir Percy’, which seems a fitting pseudonym for someone advocating Knightian uncertainty.
“I think that’s repugnant”, Sir Percy said. “I can’t assign a probability to the simulation hypothesis, because I have Knightian uncertainty about it.”
“I’ve never found Knightian uncertainty compelling” I replied with a shrug. “I don’t see how it helps to claim uncertainty about your credence. I know what it means to feel very uncertain (e.g. place a low probability on many different scenarios), and I even know what it means to expect that I’m wildly incorrect (though I never know the direction of my error). But eventually I have to act, and this involves cashing out my uncertainty into an actual credence and weighing the odds. Even if I’m uncomfortable producing a sufficiently precise credence, even if I feel like I don’t have enough information, even though I’m probably misusing the information that I do have, I have to pick the most accurate credence I can anyway when it comes time to act.”
“Sure”, Sir Percy answered. “If you’re maximizing expected utility, then you should strive to be a perfect Bayesian, and you should always act like you assign a single credence to any given event. But I’m not maximizing expected utility.”
Woah. I blinked. I hadn’t even considered that someone could object to the concept of expected utility maximization. Expected utility maximization seemed fundamental: I understand risk aversion, and I understand caution, but at the end of the day, if I honestly expect more utility in the left branch than the right branch, then I’m taking the left branch. No further questions.
“Uh”, I said, deploying all wits to articulate my grave confusion, “wat?”
“I maximize the minimum expected utility, given my Knightian uncertainty.”
My brain struggled to catch up. Is it even possible for a rational agent to refuse to maximize expected utility? Under the assumption that people are risk-neutral with respect to utils, what does it mean for an agent to rationally refuse an outcome where they expect to get more utils? Doesn’t that merely indicate that they picked the wrong thing to call “utility”?
“Look”, Sir Percy continued. “Consider the following ‘coin toss game’. There was a coin flip, and the coin came up either heads (H) or tails (T). You don’t know whether or not the coin was weighted, and if it was, you don’t know which way it was weighted. In fact, all you know is that your credence of event H is somewhere in the interval [0.4, 0.6].”
“That sounds like a failure of introspection”, I replied. “I agree that you might not be able to generate credences with arbitrary precision, but if you have no reason to believe that your interval is skewed towards one end or the other, then you should just act like your credence of H is in the middle of your interval (or the mean of your distribution), e.g. 50%.”
“Not so fast. Consider the following two bets:”
1. Pay 50¢ to be paid $1.10 if the coin came up heads.
2. Pay 50¢ to be paid $1.10 if the coin came up tails.
“If you’re a Bayesian, then for any assignment of credence to H, you’ll want to take at least one of these bets. For example, if your credence of H is 50%, then each bet has an expected payoff of 5¢. And whichever credence you pick out of your credence interval, at least one of these bets will have positive expected value.
“On the other hand, I’m maximizing the minimum expected utility. Given bet (1), I notice that perhaps the probability of H is only 40%, in which case the expected utility of bet (1) is −6¢, so I reject it. Given bet (2), I notice that perhaps the probability of H is 60%, in which case the expected utility of bet (2) is −6¢, so I reject that too.”
“Uh”, I replied, “you do understand that I’ll be richer than you, right? Why ain’t you rich?”
“Don’t be so sure”, he answered. “I reject each bet individually, but I gladly accept the pair together, and walk away with 10¢. You’re only richer if bets can be retracted, and that’s somewhat unreasonable. Besides, I do better than you in the worst case.”
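Sir Percy's arithmetic is easy to check. Here is a minimal sketch of his reasoning in Python, using exact rationals, with the interval endpoints taken from the 40% and 60% worst cases he describes (since expected value is linear in the credence, checking the endpoints suffices):

```python
from fractions import Fraction

def expected_value(p_heads, bet_on_heads):
    """Expected payoff, in cents, of paying 50¢ for a 110¢ payout on one side."""
    p_win = p_heads if bet_on_heads else 1 - p_heads
    return p_win * 110 - 50

# The credence interval for heads, represented by its endpoints.
interval = [Fraction(2, 5), Fraction(3, 5)]  # i.e. 40% and 60%

# A Bayesian with credence 50% accepts either bet: each is worth +5¢.
assert expected_value(Fraction(1, 2), True) == 5

# The MMEU agent judges each bet by its worst case, and rejects both at −6¢...
worst_heads = min(expected_value(p, True) for p in interval)
worst_tails = min(expected_value(p, False) for p in interval)
assert worst_heads == worst_tails == -6

# ...yet accepts the pair, which pays 110¢ for 100¢ however the coin landed.
worst_pair = min(expected_value(p, True) + expected_value(p, False)
                 for p in interval)
assert worst_pair == 10
```

Note that the pair's payoff does not depend on the credence at all, which is exactly why no single credence assignment reproduces Sir Percy's behavior.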
Something about this felt fishy to me, and I objected halfheartedly. It’s all well and good to say you don’t maximize utility for one reason or another, but when somebody tells me that they actually maximize “minimum expected utility”, my first inclination is to tell them that they’ve misplaced their “utility” label.
Furthermore, every choice in life can be viewed as a bet about which available action will lead to the best outcome, and on this view, it is quite reasonable to expect that many bets will be “retracted” (e.g., the opportunity will pass).
Still, these complaints are rather weak, and my friend had presented a consistent alternative viewpoint that came from completely outside of my hypothesis space (and which he backed up with a number of references). The least I could do was grant it my honest consideration.
And as it turns out, there are several consistent arguments for maximizing minimum expected utility.
The Ellsberg Paradox
Consider the Ellsberg “Paradox”. There is an urn containing 90 balls. 30 of the balls are red, and the other 60 are either black or yellow. You don’t know how many of the 60 balls are black: it may be zero, it may be 60, it may be anywhere in between.
I am about to draw balls out of the urn and pay you according to their color. You get to choose how I pay out, but you have to pick between two payoff structures:
1a) I pay you $100 if I draw a red ball.
1b) I pay you $100 if I draw a black ball.
How do you choose? (I’ll give you a moment to pick.)
Afterwards, we play again with a second urn (which also has 30 red balls and 60 either-black-or-yellow balls), but this time, you have to choose between the following two payoff structures:
2a) I pay you $100 if I draw a red or yellow ball.
2b) I pay you $100 if I draw a black or yellow ball.
How do you choose? (I’ll give you a moment to pick.)
A perfect Bayesian (with no reason to believe that the 60 balls are more likely to be black than yellow) is indifferent within each pair. However, most people prefer 1a to 1b, but also prefer 2b to 2a.
These preferences seem strange through a Bayesian lens, given that each b bet is just the corresponding a bet altered to also pay out on yellow balls. Why do people’s preferences flip when you add a payout on yellow balls to the mix?
One possible answer is that people have ambiguity aversion. People prefer 1a to 1b because 1a guarantees 30:60 odds (while selecting 1b when faced with an urn containing only yellow balls means that you have no chance of being paid at all). People prefer 2b to 2a because 2b guarantees 60:30 odds, while 2a may be as bad as 30:60 odds when facing the urn with no yellow balls.
If you reason in this way (and I, for one, feel the allure) then you are ambiguity averse.
And if you’re ambiguity averse, then you have preferences where a perfect Bayesian reasoner does not, and it looks a little bit like you’re maximizing minimum expected utility.
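This reading of the Ellsberg preferences can be made concrete with a short sketch, treating the number of black balls b as the Knightian unknown and minimizing over every composition the urn might have:

```python
def expected_payout(b, pays_red, pays_black, pays_yellow):
    """Expected dollars from a $100 bet, given b black (and 60 - b yellow) balls."""
    winning_balls = 30 * pays_red + b * pays_black + (60 - b) * pays_yellow
    return 100 * winning_balls / 90

def min_expected(*colors):
    """Worst-case expected payout over every possible composition of the urn."""
    return min(expected_payout(b, *colors) for b in range(61))

# MMEU prefers 1a (red) to 1b (black): 1a guarantees $33.33 in every world,
# while 1b pays nothing against an urn with no black balls at all.
assert min_expected(1, 0, 0) > min_expected(0, 1, 0) == 0

# And MMEU prefers 2b (black or yellow, a guaranteed 60/90 chance) to
# 2a (red or yellow, which can be as bad as 30/90 against an all-black urn).
assert min_expected(0, 1, 1) > min_expected(1, 0, 1)

# A Bayesian with a uniform prior over b is indifferent within each pair,
# because the average expected payouts coincide.
average = lambda *c: sum(expected_payout(b, *c) for b in range(61)) / 61
assert abs(average(1, 0, 0) - average(0, 1, 0)) < 1e-9
assert abs(average(0, 1, 1) - average(1, 0, 1)) < 1e-9
```

The bets are encoded as 0/1 indicators per color; only the worst-case operator distinguishes the MMEU agent from the uniform-prior Bayesian here.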
Three games of tennis
Gärdenfors and Sahlin discuss this problem in their paper Unreliable Probabilities, Risk Taking, and Decision Making:
It seems to us […] that it is possible to find decision situations which are identical in all the respects relevant to the strict Bayesian, but which nevertheless motivate different decisions.
These are the people who coined the decision rule of maximizing minimum expected utility (“the MMEU rule”), and it’s worth understanding the example that motivates their argument.
Consider three tennis games each about to be played: the balanced game, the mysterious game, and the unbalanced game.
The balanced game will be played between two players, Loren and Lauren, who are very evenly matched. You happen to know that both players are well-rested, that they are in good health, and that they are each at the top of their mental game. Neither you nor anyone else has information that makes one of them seem more likely to win than the other, and your credence on the event “Loren wins” is 50%.
The mysterious game will be played between John and Michael, about whom you know nothing. On priors, it’s likely to be a normal tennis game where the players are about as evenly matched as average. One player might be a bit better than the other, but you don’t know which. Your credence on the event “John wins” is 50%.
The unbalanced game will be played between Anabel and Zara. You don’t know who is better at tennis, but you have heard that one of them is far better than the other, and know that everybody considers the game to be a sure thing, with the outcome practically already decided. However, you’re not sure whether Anabel or Zara is the superior player, so your credence on the event “Anabel wins” is 50%.
A perfect Bayesian would be indifferent between a bet with 1:1 odds on Loren, a bet with 1:1 odds on John, and a bet with 1:1 odds on Anabel. Yet people are likely to prefer 1:1 bets on the balanced game. This is not necessarily a bias: people may rationally prefer the bet on the balanced game. This seems to imply that Bayesian expected utility maximization is not an idealization of the human reasoning process.
As these tennis games illustrate, humans treat different types of uncertainty differently. This motivates the distinction between “normal” uncertainty and “Knightian” uncertainty: we treat them differently, specifically by being averse to the latter.
The tennis games show humans displaying preferences where a Bayesian would be indifferent. On the view of Gärdenfors and Sahlin, this means that Bayesian expected utility maximization can’t capture actual human preferences; humans actually want to have preferences where Bayesians cannot. How, then, should we act? If Bayesian expected utility maximization does not capture an idealization of our intended behavior, what decision rule should we be approximating?
Gärdenfors and Sahlin propose acting such that in the worst case you still do pretty well. Specifically, they suggest maximizing the minimum expected utility given our Knightian uncertainty: this is the MMEU rule that their paper motivates.
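To see how the MMEU rule separates the three games where a Bayesian cannot, here is an illustrative sketch. The credence intervals below are my own stand-in numbers, not taken from Gärdenfors and Sahlin; what matters is only that the intervals widen as the relevant evidence gets thinner:

```python
# Illustrative credence intervals for 'player 1 wins' in each game.
intervals = {
    "balanced":   (0.5, 0.5),  # plenty of evidence pins the credence down
    "mysterious": (0.4, 0.6),  # ordinary ignorance about two unknown players
    "unbalanced": (0.1, 0.9),  # one player dominates, but you don't know who
}

def min_ev(lo, hi):
    """Worst-case expected dollars of a $100 even-money bet on player 1."""
    return min(200 * p - 100 for p in (lo, hi))

# A Bayesian with credence 50% values all three bets identically, at $0.
# The MMEU rule instead ranks them: balanced > mysterious > unbalanced.
assert min_ev(*intervals["balanced"]) == 0
assert min_ev(*intervals["unbalanced"]) < min_ev(*intervals["mysterious"]) < 0
```

Under these assumptions the MMEU agent happily bets on the balanced game, hesitates on the mysterious game, and refuses 1:1 odds on the unbalanced game, matching the intuitions the example is meant to pump.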
We have now seen three scenarios (the Ellsberg urn, the tennis games, and Sir Percy’s coin toss) where the Bayesian decision rule of ‘maximize expected utility’ seems insufficient.
In the Ellsberg paradox, most people display an aversion to ambiguity, even though a Bayesian agent (with a neutral prior) is indifferent.
In the three tennis games, people act as if they’re trying to maximize their utility in the least convenient world, and thus they allow different types of uncertainty (whether Anabel is the stronger player vs whether Loren will win the balanced game) to affect their actions in different ways.
Most alarmingly, in the coin toss game, we see Sir Percy rejecting both bets (1) and (2) but accepting their conjunction. Sir Percy knows that his expected utility is lower, but seems to have decided that this is acceptable given his preferences about ambiguity (using reasoning that is not obviously flawed). Sir Percy acts like he has a credence interval, and there is simply no credence that a Bayesian agent can assign to H such that the agent acts as Sir Percy prefers.
All these arguments suggest that there are rational preferences that the strict Bayesian framework cannot capture, and so perhaps expected utility maximization is not always rational.
Reasons for skepticism
Let’s not throw expected utility maximization out the window at the first sign of trouble. While it surely seems like humans have a gut-level aversion to ambiguity, there are a number of factors that explain the phenomenon without sacrificing expected utility maximization.
There are some arguments in favor of using the MMEU rule, but the real arguments are easily obscured by a number of fake arguments. For example, some people might prefer a bet on the balanced tennis game over the unbalanced tennis game for reasons completely unrelated to ambiguity aversion: when considering the arguments in favor of ambiguity aversion, it is important to separate out the preferences that Bayesian reasoning can capture from the preferences it cannot.
Below are four cases where it may look like humans are acting ambiguity averse, but where Bayesian expected utility maximizers can (and do) display the same preferences.
Caution. If you enjoy bets for their own sake, and someone comes up to you offering 1:1 odds on Lauren in the balanced tennis game, then you are encouraged to take the bet.
If, however, a cheerful bookie comes up to you offering 1:1 odds on Zara in the unbalanced game, then the first thing you should do is laugh at them, and the second thing you should do is update your credence that Zara will lose.
Why? Because in the unbalanced game, one of the players is much better than the other, and the bookie might know which. If the bookie, hearing that you have no idea whether Anabel is better or worse than Zara, offers you a bet with 1:1 odds in favor of Zara, then this is pretty good evidence that Zara is the worse player.
In fact, if you’re operating under the assumption that anyone offering you a bet thinks that they are going to make money, then even as a Bayesian expected utility maximizer you should be leery of people offering bets about the mysterious game or the unbalanced game. Actual bets are usually offered to people by other people, and people tend to only offer bets that they expect to win. It’s perfectly natural to assume that the bookie is adversarial, and given this assumption, a strict Bayesian will also refuse bets on the unbalanced game.
Similarly, in the Ellsberg game, if a Bayesian agent believes that the person offering the bet is adversarial and gets to choose how many black balls there are, then the Bayesian will pick bets 1a and 2b.
Humans are naturally inclined to be suspicious of bets. Bayesian reasoners with those same suspicions are averse to many bets in a way that looks a lot like ambiguity aversion. It’s easy to look at a bet on the unbalanced game and feel a lot of suspicion and then, upon hearing that a Bayesian has no preferences in the matter, decide that you don’t want to be a Bayesian. But a Bayesian with your suspicions will also avoid bets on the unbalanced game, and it’s important to separate suspicion from ambiguity aversion.
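The effect of suspicion on a strict Bayesian can be made explicit with a toy Bayes update. All the numbers here are illustrative assumptions about the bookie's behavior, not anything from the original example:

```python
# Prior: no idea whether Zara is the stronger player.
prior_zara_better = 0.5

# A bookie mostly offers bets he expects to win (illustrative likelihoods):
p_offer_if_zara_better = 0.1  # he rarely offers 1:1 on Zara if Zara will win
p_offer_if_zara_worse = 0.9   # ...and usually offers it if Zara will lose

# Bayes' rule: P(Zara is better | bookie offers you 1:1 odds on Zara).
posterior = (p_offer_if_zara_better * prior_zara_better) / (
    p_offer_if_zara_better * prior_zara_better
    + p_offer_if_zara_worse * (1 - prior_zara_better)
)
assert abs(posterior - 0.1) < 1e-12  # the offer is strong evidence Zara loses

# So the strict Bayesian also refuses: an even-money $100 bet on Zara now has
# sharply negative expected value, with no ambiguity aversion involved.
ev = posterior * 100 - (1 - posterior) * 100
assert ev < 0
```

The same update applied to the balanced game does nothing, because there the bookie has no private information to exploit; suspicion alone reproduces the asymmetry between the two games.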
Risk aversion. Most people would prefer a certainty of $1 billion to a 50% chance of $10 billion. This is not usually due to ambiguity aversion, though: dollars are not utils, and preferences are not generally linear in dollars. You can prefer $1 billion with certainty to a chance of $10 billion on grounds of risk aversion, without ever bringing ambiguity aversion into the picture.
The Ellsberg urn and the tennis games are examples that target ambiguity aversion explicitly, but be careful not to take these examples to heart and run around claiming that you prefer a certainty of $1 billion to a chance of $10 billion because you’re ambiguity averse. Humans are naturally very risk-averse, so we should expect that most cases of apparent ambiguity aversion are actually risk aversion. Remember that a failure to maximize expected dollars does not imply a failure to maximize expected utility.
Loss aversion. When you consider a bet on the balanced game, you might visualize a tight and thrilling match where you won’t know whether you won the bet until the bitter end. When you consider a bet on the unbalanced game, you might visualize a match where you immediately figure out whether you won or lost, and then you have to sit through a whole boring tennis game either bored and waiting to collect your money (if you chose correctly) or with that slow sinking feeling of loss as you realize that you don’t have a chance (if you chose incorrectly).
Because humans are strongly loss averse, sitting through a game where you know you’ve lost is worse than sitting through a game where you know you’ve won is good. In other words, ambiguity may be experienced as disutility. The expected utility of a bet on the unbalanced game may be less than that of a similar bet on the balanced game: the former bet carries more expected negative feelings, and thus less expected utility.
This is a form of ambiguity aversion, but this portion of ambiguity aversion is a known bias that should be dealt with, not a sufficient reason to abandon expected utility maximization.
Possibility compression. The three tennis games actually are different, and the ‘strict Bayesian’ does treat them differently. Three Bayesians sitting in the stands before each of the three tennis games all expect different experiences. The Bayesian at the balanced game expects to see a close match. The Bayesian at the mysterious game expects the game to be fairly average. The Bayesian at the unbalanced game expects to see a wash.
When we think about these games, it doesn’t feel like they all yield the same probability distributions over futures, and that’s because they don’t, even for a Bayesian.
When you’re forced to make a bet only about whether the 1st player will win, you’ve got to project your distribution over all futures (which includes information about how exciting the game will be and so on) onto a much smaller binary space (player 1 either wins or loses). This feels lossy because it is lossy. It should come as no surprise that many highly different distributions over futures project onto the same distribution over the much smaller binary space of whether player 1 wins or loses.
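The lossiness of that projection is easy to exhibit. In this sketch, the futures are summarized by player 1's margin of victory, and the margins and probabilities are made up purely for illustration:

```python
from fractions import Fraction as F

# Two distributions over futures, summarized by player 1's margin of victory.
balanced_game = {+1: F(1, 2), -1: F(1, 2)}    # always a nail-biter
unbalanced_game = {+6: F(1, 2), -6: F(1, 2)}  # always a blowout, winner unknown

def p_player1_wins(distribution):
    """Project a distribution over margins onto the binary 'player 1 wins'."""
    return sum(p for margin, p in distribution.items() if margin > 0)

# The two distributions are plainly different, yet they project onto the very
# same answer to the only question the bet asks: does player 1 win?
assert balanced_game != unbalanced_game
assert p_player1_wins(balanced_game) == p_player1_wins(unbalanced_game) == F(1, 2)
```

A Bayesian holding the full distributions treats the games very differently; only the projection onto the bet collapses them.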
There is some temptation to accept the MMEU rule because, well, the games feel different, and Bayesians treat the bets identically, so maybe we should switch to a decision rule that treats the bets differently. Be wary of this temptation: Bayesians do treat the games differently. You don’t need “Knightian uncertainty” to capture this.
I am not trying to argue that we don’t have ambiguity aversion. Humans do in fact seem averse to ambiguity. However, much of the apparent aversion is probably a combination of suspicion, risk aversion, and loss aversion. The first is available to Bayesian reasoners, and the other two are known biases. Insofar as your ambiguity aversion is caused by a bias, you should be trying to reduce it, not endorse it.
But for all those disclaimers, humans still exhibit ambiguity aversion.
Now, you could say that whatever aversion remains (after controlling for risk aversion, loss aversion, and suspicion) is irrational. We know that humans suffer from confirmation bias, hindsight bias, and many other biases, but we don’t try to throw expected utility maximization out the window to account for those strange preferences.
Perhaps ambiguity aversion is merely a good heuristic. In a world where people only offer you bets when the odds are stacked against you but you don’t know it yet, ambiguity aversion is a fine heuristic. Or perhaps ambiguity aversion is a useful countermeasure against the planning fallacy: if we tend to be overconfident in our predictions, then attempting to maximize utility in the least convenient world may counterbalance our overconfidence. Maybe. (Be leery of evolutionary just-so stories.)
But this doesn’t have to be the case. Even if my own ambiguity aversion is a bias, isn’t it still possible that there could exist an ambiguity-averse rational agent?
An ideal rational agent had better not have confirmation bias or hindsight bias, but it seems like you should be able to build a rational agent that disprefers ambiguity. Ambiguity aversion is about preferences, not epistemics. Even if human ambiguity aversion is a bias, shouldn’t it be possible to design a rational agent with preferences about ambiguity? This seems like a preference that a rational agent should be able to have, at least in principle.
But if a rational agent disprefers ambiguity, then it rejects bets (1) and (2) in the coin toss game, but accepts their agglomeration. And if this is so, then there is no credence it can assign to H that makes its actions consistent, so how could it possibly be a Bayesian?
What gives? Is the Bayesian framework unable to express agents with preferences about ambiguity?
And if so, do we need a different framework that can capture a broader class of “rational” agents, including maximizers of minimum expected utility?