Lawful Uncertainty

In Rational Choice in an Uncertain World, Robyn Dawes describes an experiment by Tversky:1

Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability—for example, subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.

In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.

What subjects tended to do instead, however, was match probabilities—that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); (.70 × .70) + (.30 × .30) = .58.

In fact, subjects predict the more frequent event with a slightly higher probability than that with which it occurs, but do not come close to predicting its occurrence 100% of the time, even when they are paid for the accuracy of their predictions . . . For example, subjects who were paid a nickel for each correct prediction over a thousand trials . . . predicted [the more common event] 76% of the time.
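The 70%-versus-58% arithmetic in the quoted passage can be checked with a quick Monte Carlo simulation. This is a sketch under the experiment's stated conditions (70% blue cards, independent random draws); the function names are mine:

```python
import random

random.seed(0)
P_BLUE = 0.7  # fraction of blue cards, as in the experiment

def simulate(strategy, trials=100_000):
    """Return the fraction of correct predictions over random card draws."""
    correct = 0
    for _ in range(trials):
        card = "blue" if random.random() < P_BLUE else "red"
        if strategy() == card:
            correct += 1
    return correct / trials

def always_blue():
    """Bet on the more common color every round."""
    return "blue"

def probability_matcher():
    """Bet blue 70% of the time and red 30% of the time."""
    return "blue" if random.random() < P_BLUE else "red"

print(simulate(always_blue))          # close to 0.70
print(simulate(probability_matcher))  # close to 0.58
```

Always betting blue recovers the full 70% base rate, while probability matching lands near the (.70 × .70) + (.30 × .30) = .58 figure from the quote.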

Do not think that this experiment is about a minor flaw in gambling strategies. It compactly illustrates the most important idea in all of rationality.

Subjects just keep guessing red, as if they think they have some way of predicting the random sequence. Of this experiment Dawes goes on to say, “Despite feedback through a thousand trials, subjects cannot bring themselves to believe that the situation is one in which they cannot predict.”

But the error must go deeper than that. Even if subjects think they’ve come up with a hypothesis, they don’t have to actually bet on that prediction in order to test their hypothesis. They can say, “Now if this hypothesis is correct, the next card will be red”—and then just bet on blue. They can pick blue each time, accumulating as many nickels as they can, while mentally noting their private guesses for any patterns they thought they spotted. If their predictions come out right, then they can switch to the newly discovered sequence.

I wouldn’t fault a subject for continuing to invent hypotheses—how could they know the sequence is truly beyond their ability to predict? But I would fault a subject for betting on the guesses, when this wasn’t necessary to gather information, and literally hundreds of earlier guesses had been disconfirmed.

Can even a human be that overconfident?

I would suspect that something simpler is going on—that the all-blue strategy just didn’t occur to the subjects.

People see a mix of mostly blue cards with some red, and suppose that the optimal betting strategy must be a mix of mostly blue cards with some red.

It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.

It is a counterintuitive idea that the optimal strategy is to behave lawfully, even in an environment that has random elements.

It seems like your behavior ought to be unpredictable, just like the environment—but no! A random key does not open a random lock just because they are “both random.”

You don’t fight fire with fire; you fight fire with water. But this thought involves an extra step, a new concept not directly activated by the problem statement, and so it’s not the first idea that comes to mind.

In the dilemma of the blue and red cards, our partial knowledge tells us—on each and every round—that the best bet is blue. This advice of our partial knowledge is the same on every single round. If 30% of the time we go against our partial knowledge and bet on red instead, then we will do worse thereby—because now we’re being outright stupid, betting on what we know is the less probable outcome.

If you bet on red every round, you would do as badly as you could possibly do; you would be 100% stupid. If you bet on red 30% of the time, faced with 30% red cards, then you’re making yourself 30% stupid.
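The "N% stupid" claim is just linearity of expectation: since your bets are independent of the card sequence, betting red a fraction q of the time makes your expected hit rate a weighted average that falls in a straight line from 70% at q = 0 down to 30% at q = 1. A minimal sketch (the function name is mine):

```python
P_BLUE = 0.7  # fraction of blue cards

def expected_accuracy(red_fraction):
    """Expected hit rate when betting red a given fraction of the time.

    Bets are independent of the card sequence, so the expectation is a
    weighted average of the hit rates of the two pure strategies.
    """
    return P_BLUE * (1 - red_fraction) + (1 - P_BLUE) * red_fraction

for q in (0.0, 0.3, 1.0):
    print(q, round(expected_accuracy(q), 2))
# accuracy falls linearly: 0.7 at q=0.0, 0.58 at q=0.3, 0.3 at q=1.0
```

Every point of red-betting trades a 70% chance of being right for a 30% chance, which is why any q above zero is a pure loss.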

When your knowledge is incomplete—meaning that the world will seem to you to have an element of randomness—randomizing your actions doesn’t solve the problem. Randomizing your actions takes you further from the target, not closer. In a world already foggy, throwing away your intelligence just makes things worse.

It is a counterintuitive idea that the optimal strategy can be to think lawfully, even under conditions of uncertainty.

And so there are not many rationalists, for most who perceive a chaotic world will try to fight chaos with chaos. You have to take an extra step, and think of something that doesn’t pop right into your mind, in order to imagine fighting fire with something that is not itself fire.

You have heard the unenlightened ones say, “Rationality works fine for dealing with rational people, but the world isn’t rational.” But faced with an irrational opponent, throwing away your own reason is not going to help you. There are lawful forms of thought that still generate the best response, even when faced with an opponent who breaks those laws. Decision theory does not burst into flames and die when faced with an opponent who disobeys decision theory.

This is no more obvious than the idea of betting all blue, faced with a sequence of both blue and red cards. But each bet that you make on red is an expected loss, and so too with every departure from the Way in your own thinking.

How many Star Trek episodes are thus refuted? How many theories of AI?

1 Amos Tversky and Ward Edwards, “Information versus Reward in Binary Choices,” Journal of Experimental Psychology 71, no. 5 (1966): 680–683. See also Yaacov Schul and Ruth Mayo, “Searching for Certainty in an Uncertain World: The Difficulty of Giving Up the Experiential for the Rational Mode of Thinking,” Journal of Behavioral Decision Making 16, no. 2 (2003): 93–106.