The Bias You Didn’t Expect

There are few places where society values rational, objective decision making as much as it values it in judges. While there is a rather cynical discipline called legal realism that says the law is really based on quirks of individual psychology, “what the judge had for breakfast,” there’s a broad social belief that the decisions of judges are unbiased. And where they aren’t unbiased, they’re biased for Big, Important, Bad reasons, like racism or classism or politics.

It turns out that legal realism is totally wrong. It’s not what the judge had for breakfast. It’s how recently the judge had breakfast. A new study (media coverage) of Israeli judges shows that, when making parole decisions, they grant about 65% of requests just after meal breaks, and almost 0% right before breaks and at the end of the day (i.e., as far from the last break as possible). There’s a relatively linear decline between the two points.

Think about this for a moment. A tremendously important decision, determining whether a person will go free or spend years in jail, appears to be substantially determined by an arbitrary factor. Also, note that we don’t know if it’s the lack of food, the anticipation of a break, or some other factor that is responsible for this. More interestingly, we don’t know where the optimal result occurred. It’s probably not the near 0% at the end of each work period. But is it the post-break high of 65%? Or were judges being too nice? We know there was bias, but we still don’t know when the bias occurred.

There are at least two lessons from this. The little, obvious one is to be aware of one’s own physical limitations. Avoid making big decisions when tired or hungry, though this doesn’t mean you should try to make decisions right after eating. For particularly important decisions, consider contemplating them at different times, if you can. Think about one thing Monday morning, then Wednesday afternoon, then Saturday evening, going only far enough to get an overall feel for an answer, not so far as to reach a solid conclusion. Take notes, and then compare them. This may not work perfectly, but it may help you notice inconsistencies. For big questions, the wisdom of crowds may be helpful, unless it’s been a while since most of the crowd had breakfast.

The bigger lesson is one of humility. This provides rather stark evidence that our decisions are not under our control to the extent we believe. We can be influenced by factors we don’t even suspect. Even knowing we have been biased, we may still be unable to identify what the correct answer was. While using formal rules and logic may be one of the best approaches to minimizing such errors, even formal rules can fail when applied by biased agents. The biggest, most condemnable biases, like racism, are in some ways less dangerous, because we know we need to look out for them. It’s the bias you don’t even suspect that can get you. The authors of the study think they basically got lucky with these results: if the effect had been to make decisions arbitrary rather than to increase rejections, this would not have shown up.

When those charged with making impartial decisions that control people’s lives are subject to arbitrary forces they never suspected, it shows how important it is, and how much more we can do, to be less wrong.