Harmful Options

Previously in series: Living By Your Own Strength

Barry Schwartz’s The Paradox of Choice—which I haven’t read, though I’ve read some of the research behind it—talks about how offering people more choices can make them less happy.

A simple intuition says this shouldn’t happen to rational agents: If your current choice is X, and you’re offered an alternative Y that’s worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn’t do worse by having more options. The more available actions you have, the more powerful you become—that’s how it ought to work.

For example, if an ideal rational agent is initially forced to take only box B in Newcomb’s Problem, and is then offered the additional choice of taking both boxes A and B, the rational agent shouldn’t regret having more options. Such regret indicates that you’re “fighting your own ritual of cognition” which helplessly selects the worse choice once it’s offered you.

But this intuition only governs extremely idealized rationalists, or rationalists in extremely idealized situations. Bounded rationalists can easily do worse with strictly more options, because they burn computing operations to evaluate them. You could write an invincible chess program in one line of Python if its only legal move were the winning one.
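To make the computing-cost point concrete, here is a minimal sketch (my own toy model, not anything from Schwartz or the chess example): a bounded agent with a fixed evaluation budget, spread across however many options it faces. The noise level, budget, and option values are all assumptions chosen for illustration.

```python
import random

random.seed(0)

NOISE = 0.5    # assumed std-dev of one noisy evaluation of an option
BUDGET = 60    # total evaluations the bounded agent can afford

def bounded_choice(options, budget):
    # Split the fixed evaluation budget evenly across the options:
    # more options means fewer samples each, hence noisier estimates.
    per_option = max(1, budget // len(options))
    def estimate(opt):
        samples = [opt + random.gauss(0, NOISE) for _ in range(per_option)]
        return sum(samples) / per_option
    return max(options, key=estimate)

for n in (2, 6, 30):
    # n options with true values evenly spaced in [0, 1]; the best is 1.0.
    options = [i / (n - 1) for i in range(n)]
    picks = [bounded_choice(options, BUDGET) for _ in range(2000)]
    regret = sum(1.0 - p for p in picks) / len(picks)
    print(f"{n:2d} options -> average regret {regret:.3f}")
```

Note that each larger option set contains the smaller one, so the agent really does have strictly more options, and still does worse: the same budget spread over thirty candidates buys only two noisy samples apiece.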

Of course Schwartz and co. are not talking about anything so pure and innocent as the computing cost of having more choices.

If you’re dealing, not with an ideal rationalist, not with a bounded rationalist, but with a human being—

Say, would you like to finish reading this post, or watch this surprising video instead?

Schwartz, I believe, talks primarily about the decrease in happiness and satisfaction that results from having more mutually exclusive options. Before this research was done, it was already known that people are more sensitive to losses than to gains, generally by a factor of between 2 and 2.5 (in various different experimental scenarios). That is, the pain of losing something is between 2 and 2.5 times as great as the joy of gaining it. (This is an interesting constant in its own right, and may have something to do with compensating for our systematic overconfidence.)

So—if you can only choose one dessert, you’re likely to be happier choosing from a menu of two than a menu of fourteen. In the first case, you eat one dessert and pass up one dessert; in the second case, you eat one dessert and pass up thirteen desserts. And we are more sensitive to loss than to gain.
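As a back-of-the-envelope illustration (my own toy arithmetic, not Schwartz’s model): suppose the chosen dessert yields one unit of joy, each passed-up dessert registers as a small pang of loss, and losses are weighted at a mid-range factor of about 2.25. All three numbers are assumptions picked for illustration.

```python
LAMBDA = 2.25  # assumed mid-range loss-aversion factor (losses ~2-2.5x gains)
JOY = 1.0      # joy of the dessert you do eat (arbitrary units)
MISS = 0.05    # assumed small "passed it up" pang per foregone dessert

def net_satisfaction(menu_size):
    # One gain, weighed against loss-weighted pangs for everything foregone.
    return JOY - LAMBDA * MISS * (menu_size - 1)

for n in (2, 14):
    print(f"menu of {n:2d} desserts -> net satisfaction {net_satisfaction(n):+.2f}")
```

On these made-up numbers the two-item menu nets +0.89, while the fourteen-item menu comes out at −0.46: thirteen loss-weighted pangs outweigh the one dessert you actually get.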

(If I order dessert from a menu at all, I will order quickly and then close the menu and put it away, so as not to look at the other items.)

Not only that, but if the options have incommensurable attributes, then whatever option we select is likely to look worse because of the comparison. A luxury car that would have looked great by comparison to a Crown Victoria instead becomes slower than the Ferrari, more expensive than the 9-5, with worse mileage than the Prius, and not looking quite as good as the Mustang. So we lose on satisfaction with the road we did take.

And then there are more direct forms of harm done by painful choices. IIRC, an experiment showed that people who refused to eat a cookie—who were offered the cookie, and chose not to take it—did worse on subsequent tests of mental performance than either those who ate the cookie or those who were not offered any cookie. You pay a price in mental energy for resisting temptation.

Or consider the various “trolley problems” of ethical philosophy—a trolley is bearing down on 5 people, but there’s one person who’s very fat and can be pushed onto the tracks to stop the trolley, that sort of thing. If you’re forced to choose between two unacceptable evils, you’ll pay a price either way. Vide Sophie’s Choice.

An option need not be taken, or even be strongly considered, in order to wreak harm. Recall the point from “High Challenge”, about how offering to do someone’s work for them is not always helping them—how the ultimate computer game is not the one that just says “YOU WIN”, forever.

Suppose your computer games, in addition to the long difficult path to your level’s goal, also had little side-paths that you could use—directly in the game, as corridors—that would bypass all the enemies and take you straight to the goal, offering along the way all the items and experience that you could have gotten the hard way. And this corridor is always visible, out of the corner of your eye.

Even if you resolutely refused to take the easy path through the game, knowing that it would cheat you of the very experience that you paid money in order to buy—wouldn’t that always-visible corridor make the game that much less fun? Knowing, for every alien you shot, and every decision you made, that there was always an easier path?

I don’t know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken—a “deal with the Devil” story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.

And what if the worse option is actually taken? I’m not suggesting that it is always a good idea for human governments to go around Prohibiting temptations. But the literature of heuristics and biases is replete with examples of reproducible stupid choices; and there is also such a thing as akrasia (weakness of will).

If you’re an agent operating from a much higher vantage point—high enough to see humans as flawed algorithms, so that it’s not a matter of second-guessing but second-knowing—then is it benevolence to offer choices that will assuredly be made wrongly? Clearly, removing all choices from someone and reducing their life to Progress Quest is not helping them. But are we wise enough to know when we should choose? And in some cases, even offering that much of a choice, even if the choice is made correctly, may already do the harm...

Part of The Fun Theory Sequence

Next post: “Devil’s Offers”

Previous post: “Free to Optimize”