Placing Yourself as an Instance of a Class

There’s an intuition I have which I think informs my opinions on subjective probability, game theory, and many related matters: part of what separates a foolish decision from a wise one is whether you treat it as an isolated instance or as one of a class of similar decisions.

A simple case: someone doing something for the first time (first date, first job interview, etc.) versus someone who has done it many times and “knows how these things go”. Events that surprise the greenhorn are tired stereotypes to the old hand. But sometimes we can short-circuit this process and react wisely to a situation without long experience and hard knocks.

For example, if a person is trying to save money but sees a doodad they’d like to buy, the fool reasons as follows: “It’s just this one purchase. The amount of money isn’t very consequential to my overall budget. I can just save a little more in other ways and I’ll meet my target.” The wise person reasons as follows: “If I make this purchase now, I will similarly allow myself to make exceptions to my money-saving rule later, until the exception becomes the rule and I spend all my money. So, even though the amount of money here isn’t so large, I prefer to follow a general policy of saving, which implies saving in this particular case.” A very wise person may reason a bit more cleverly: “I can make impulse purchases if they pass a high bar, such that I actually only let a few dollars of unplanned spending past the bar every week on average. How rare is it that a purchase opportunity costing this much is at least this appealing?” **does a quick check and usually doesn’t buy the thing, but sometimes does, when it is worth it**
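To make that quick check concrete, here is a minimal sketch of one way such a bar could work. The weekly cap, the opportunity rate, the subjective `appeal` score, and the `clears_the_bar` helper are all hypothetical devices invented for illustration, not part of the original argument.

```python
# A minimal sketch of the "high bar" policy, with made-up numbers.

WEEKLY_CAP = 5.0              # dollars of unplanned spending tolerated per week
OPPORTUNITIES_PER_WEEK = 20   # rough guess at how often tempting doodads show up

def clears_the_bar(price: float, appeal: float,
                   history: list[tuple[float, float]]) -> bool:
    """Buy only if opportunities this cheap and this appealing are rare enough
    that buying all of them would still keep average weekly spending under
    WEEKLY_CAP.

    `appeal` is a subjective score in [0, 1]; `history` is a list of
    (price, appeal) pairs for past purchase opportunities, bought or not.
    """
    if not history:
        return False  # no reference class yet, so default to not buying

    # How rare is a deal at least this good? (At least as appealing,
    # at most as expensive.)
    as_good = [p for p, a in history if a >= appeal and p <= price]
    rate = len(as_good) / len(history)

    # Expected weekly spend if I bought every opportunity at least this good.
    expected_weekly_spend = rate * OPPORTUNITIES_PER_WEEK * price

    return expected_weekly_spend <= WEEKLY_CAP
```

The particular numbers and scoring are not the point; the point is that the purchase gets evaluated as one draw from a class of similar opportunities, rather than on its own merits in isolation.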

One way to get this kind of wisdom is by spending money for a long time, and calibrating willingness-to-spend based on what seems to be happening with your bank account. This doesn’t work very well for a lot of people, because the reinforcement happens too slowly to be habit-forming. A different way is to notice the logic of the situation, and think as though you had lived it.

Similarly, people are heavily biased to give more weight to vivid examples from the news (airplane crashes involving celebrity deaths), to personal anecdotes from friends and family (an uncle who fell ill when a high-voltage power line was installed near his home), or, worse, to anecdotes from strangers on the internet (the guy who got “popcorn lung” from smoking an e-cig) than to statistics, when in fact statistics are far more powerful. (It’s true that medical anecdotes from family might be more relevant than population statistics because of shared genetic factors, but even so, the “statistical view” still wins: look at the statistics available, including information about genetic conditions and heritability where it exists, and then make reasonable guesses about yourself based on your family. That beats viewing your situation in isolation.)
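As a toy illustration of that statistical view, here is a sketch that shrinks a raw family anecdote toward a population base rate rather than trusting either source alone. The beta-style prior weighting, the 2% base rate, and every other number are hypothetical choices made purely for illustration.

```python
# A toy version of the "statistical view": estimate your own risk of some
# condition by combining a population base rate with a small family history,
# instead of trusting either the statistic or the anecdote in isolation.
# Every number here is hypothetical.

def personal_risk_estimate(base_rate: float,
                           family_cases: int,
                           family_size: int,
                           prior_strength: float = 20.0) -> float:
    """Simple beta-style shrinkage: treat the population base rate as a prior
    worth `prior_strength` pseudo-observations, then update on the observed
    family history."""
    pseudo_cases = base_rate * prior_strength
    return (pseudo_cases + family_cases) / (prior_strength + family_size)

# One affected relative out of ten, against a 2% population base rate:
print(personal_risk_estimate(base_rate=0.02, family_cases=1, family_size=10))
# ~0.047 -- above the base rate, but far below the 10% the raw anecdote suggests.
```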

I won’t lecture much on the implications of that; most readers will be familiar with the availability heuristic, the base-rate fallacy, and scope insensitivity. Here, I just want to point out that putting more credence in numbers than in vivid examples is an instance of the pattern I’m describing: placing your decision as an instance of a class, rather than seeing it in isolation.

In an abstract sense, the statistical view of any given decision (the view of the decision as part of a class of relevantly similar decisions) is “more real” than an attempt to view it in isolation as a unique moment. Not because it is literally more real, but because it is closer to the position you actually decide from: you never get to pick out the single case, only the policy. Humans may exist in a single, deterministic universe, but we decide in statistical space, where outcomes come in percentages.

When the weatherman says there’s a “70% chance of rain” in your area, but it is already raining outside your window, you know you’re one of the 70%: he’s speaking to a whole area, and giving the percentage of viewers in that area who will experience rain. He can’t say 100% to the viewers who will experience rain and 0% to those who won’t. Similarly, when you make a decision, you’re deciding over a general area: you can’t just decide to drive when you won’t crash and avoid driving when you will (though you can split your decision up by some relevant factors and avoid driving when the risk is high).
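To make the driving example concrete, here is a sketch comparing driving policies under entirely made-up numbers for icy-road frequency, crash probability, crash cost, and trip value. The key constraint is structural: a policy can condition on the weather, which is observable before the decision, but not on whether a crash will actually happen.

```python
# "Splitting the decision by relevant factors", with made-up numbers.
# "Drive exactly when you won't crash" is not an available policy, because
# the crash outcome is only determined after the decision is made.

P_ICY = 0.1                                   # fraction of days with icy roads
P_CRASH = {"clear": 0.0001, "icy": 0.01}      # crash probability per trip
P_WEATHER = {"clear": 1 - P_ICY, "icy": P_ICY}
CRASH_COST = 50_000.0                         # cost of a crash
TRIP_VALUE = 30.0                             # value of making the trip

def expected_value_per_day(drive_when: set[str]) -> float:
    """Expected daily value of a policy that drives only under the listed
    (observable) conditions."""
    return sum(
        P_WEATHER[w] * (TRIP_VALUE - P_CRASH[w] * CRASH_COST)
        for w in drive_when
    )

print(expected_value_per_day({"clear", "icy"}))  # always drive:   -24.5
print(expected_value_per_day({"clear"}))         # skip icy days:   22.5
print(expected_value_per_day(set()))             # never drive:      0.0
```

With these invented numbers, the policy that conditions on the observable risk factor beats both blanket policies, while the impossible policy of driving only on the days you wouldn’t crash simply can’t be written down.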

This exact same intuition, of course, supports timeless/updateless decision theory even more directly than it supports probabilism. Perhaps some future generation will regard the idea of timeless/updateless decision-making as more fundamental and obvious than the doctrine of probabilism, and wonder why subjective probability was invented first.