Harmful Options

Previously in series: Living By Your Own Strength

Barry Schwartz’s The Paradox of Choice—which I haven’t read, though I’ve read some of the research behind it—talks about how offering people more choices can make them less happy.

A simple intuition says this shouldn’t happen to rational agents: If your current choice is X, and you’re offered an alternative Y that’s worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn’t do worse by having more options. The more available actions you have, the more powerful you become—that’s how it should work.

For example, if an ideal rational agent is initially forced to take only box B in Newcomb’s Problem, and is then offered the additional choice of taking both boxes A and B, the rational agent shouldn’t regret having more options. Such regret indicates that you’re “fighting your own ritual of cognition” which helplessly selects the worse choice once it’s offered you.

But this intuition only governs extremely idealized rationalists, or rationalists in extremely idealized situations. Bounded rationalists can easily do worse with strictly more options, because they burn computing operations to evaluate them. You could write an invincible chess program in one line of Python if its only legal move were the winning one.
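As a minimal sketch of that computing cost (my own toy code, not anything from the post): even options an agent will never take must be paid for in evaluation time. The `choose` and `evaluate` functions below are hypothetical stand-ins for a real search and a real position evaluator.

```python
import time

def choose(options, evaluate):
    """A bounded agent pays computation for every option it weighs."""
    start = time.perf_counter()
    best = max(options, key=evaluate)  # cost grows with the number of options
    return best, time.perf_counter() - start

def evaluate(move):
    """Stand-in for an expensive position evaluation."""
    return sum(hash((move, i)) % 97 for i in range(10_000))

_, t_one = choose(["the one winning move"], evaluate)
_, t_many = choose([f"move {i}" for i in range(500)], evaluate)
print(f"1 option: {t_one:.4f}s   500 options: {t_many:.4f}s")
```

With a single legal move the "search" returns almost instantly; with five hundred, the same agent pays five hundred evaluations before it can act at all.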

Of course Schwartz and co. are not talking about anything so pure and innocent as the computing cost of having more choices.

If you’re dealing, not with an ideal rationalist, not with a bounded rationalist, but with a human being—

Say, would you like to finish reading this post, or watch this surprising video instead?

Schwartz, I believe, talks primarily about the decrease in happiness and satisfaction that results from having more mutually exclusive options. Before this research was done, it was already known that people are more sensitive to losses than to gains, generally by a factor of between 2 and 2.5 (across various experimental scenarios). That is, the pain of losing something is between 2 and 2.5 times as great as the joy of gaining it. (This is an interesting constant in its own right, and may have something to do with compensating for our systematic overconfidence.)

So—if you can only choose one dessert, you’re likely to be happier choosing from a menu of two than from a menu of fourteen. In the first case, you eat one dessert and pass up one dessert; in the second case, you eat one dessert and pass up thirteen desserts. And we are more sensitive to loss than to gain.

(If I order dessert from a menu at all, I will order quickly and then close the menu and put it away, so as not to look at the other items.)
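To make that bookkeeping concrete, here is a back-of-envelope sketch with toy numbers of my own (Schwartz offers no such formula): the dessert you eat is worth a fixed gain, each dessert you pass up registers as a small felt loss, and losses carry the roughly double weight mentioned above.

```python
LOSS_WEIGHT = 2.0   # pain of a loss relative to an equal gain (the 2-2.5 factor)
ENJOYMENT = 10.0    # toy hedonic value of the dessert you eat
REGRET_EACH = 1.0   # toy felt loss per dessert you pass up

def net_satisfaction(menu_size):
    # One dessert gained; (menu_size - 1) desserts foregone, double-weighted.
    return ENJOYMENT - LOSS_WEIGHT * REGRET_EACH * (menu_size - 1)

print(net_satisfaction(2))    # 8.0: eat one, pass up one
print(net_satisfaction(14))   # -16.0: eat one, pass up thirteen
```

The exact numbers are invented; the shape of the result is not: every extra item on the menu is another weighted loss subtracted from the same single gain.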

Not only that, but if the options have incommensurable attributes, then whatever option we select is likely to look worse because of the comparison. A luxury car that would have looked great by comparison to a Crown Victoria instead becomes slower than the Ferrari, more expensive than the 9-5, worse on mileage than the Prius, and not quite as good-looking as the Mustang. So we lose satisfaction with the road we did take.

And then there are more direct forms of harm done by painful choices. IIRC, an experiment showed that people who refused to eat a cookie—who were offered the cookie, and chose not to take it—did worse on subsequent tests of mental performance than either those who ate the cookie or those who were not offered any cookie. You pay a price in mental energy for resisting temptation.

Or consider the various “trolley problems” of ethical philosophy—a trolley is bearing down on five people, but there’s one person who’s very fat and can be pushed onto the tracks to stop the trolley, that sort of thing. If you’re forced to choose between two unacceptable evils, you’ll pay a price either way. Vide Sophie’s Choice.

An option need not be taken, or even be strongly considered, in order to wreak harm. Recall the point from “High Challenge”, about how offering to do someone’s work for them is not always helping them—how the ultimate computer game is not the one that just says “YOU WIN”, forever.

Suppose your computer games, in addition to the long difficult path to your level’s goal, also had little side-paths that you could use—directly in the game, as corridors—that would bypass all the enemies and take you straight to the goal, offering along the way all the items and experience that you could have gotten the hard way. And this corridor is always visible, out of the corner of your eye.

Even if you resolutely refused to take the easy path through the game, knowing that it would cheat you of the very experience that you paid money in order to buy—wouldn’t that always-visible corridor make the game that much less fun? Knowing, for every alien you shot, and every decision you made, that there was always an easier path?

I don’t know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken—a “deal with the Devil” story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.

And what if the worse option is actually taken? I’m not suggesting that it is always a good idea for human governments to go around Prohibiting temptations. But the literature of heuristics and biases is replete with examples of reproducible stupid choices; and there is also such a thing as akrasia (weakness of will).

If you’re an agent operating from a much higher vantage point—high enough to see humans as flawed algorithms, so that it’s not a matter of second-guessing but second-knowing—then is it benevolence to offer choices that will assuredly be made wrongly? Clearly, removing all choices from someone and reducing their life to Progress Quest is not helping them. But are we wise enough to know when we should choose for others? And in some cases, even offering that much of a choice, even if the choice is made correctly, may already do the harm...

Part of The Fun Theory Sequence

Next post: “Devil’s Offers”

Previous post: “Free to Optimize”