Humans are stunningly rational and stunningly irrational

Are humans mainly rational or mainly irrational? This is an old debate, with some political implications.

But the question is ill-posed, because it elides the important detail: compared with what?

Humans are stunningly irrational

Let $R$ be some reward/utility function that encodes human preferences: a mix of wealth, security, status, happiness, pleasant lives, good friends and experiences, romantic connections, sex, etc… It doesn’t matter too much which of the adequate $R$’s this is.

Then it’s clear that humans are stunningly irrational as compared with a fully rational $R$-maximiser. As I mentioned before, one of the reasons we think we’re rational is that we only contemplate a tiny slice of the action space.

But if I were unboundedly rational, I could do many things. I could re-engineer biology from physics and some research articles, and create cures for multiple diseases, becoming incredibly rich and respected. I could launch a media campaign to make myself very popular. I could create tailored drugs to make myself maximally happy and extremely productive.

Hell, I could just program a superintelligence to optimise the world and maximise $R$. Being unboundedly rational means that I’d be more effective than any such superintelligence, so I’d achieve a lot.

So it seems that, if a random agent gets $0$, and an unboundedly rational $R$-maximiser would expect to get $1$, then a human would get a value of $R$ that is utterly tiny, vanishingly close to $0$. On the scale from randomness to full rationality, humans are almost indistinguishable from the randomness. We’re stunningly irrational.
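To make that scale concrete, here’s one way to formalise it (the notation is mine, a sketch rather than anything canonical): normalise expected utility so that the random policy scores 0 and the unboundedly rational $R$-maximiser scores 1.

```latex
% A normalised rationality score for a policy \pi, relative to reward R.
% By construction: score_R(random policy) = 0, score_R(optimal policy) = 1.
\[
  \mathrm{score}_R(\pi)
    = \frac{\mathbb{E}[R \mid \pi] - \mathbb{E}[R \mid \pi_{\mathrm{rand}}]}
           {\mathbb{E}[R \mid \pi^{*}] - \mathbb{E}[R \mid \pi_{\mathrm{rand}}]},
  \qquad
  \pi^{*} = \operatorname*{arg\,max}_{\pi'} \mathbb{E}[R \mid \pi'] .
\]
% The claim above: score_R(human) is vanishingly close to 0, because the
% denominator (what unbounded rationality could achieve) is astronomically large.
```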

Humans are stunningly rational

To see how rational humans are, let’s zoom in on that difference between humans and randomness. Suppose that it takes twenty actions to walk to a door (it actually takes far more). Then every time a human walks to a door, they add at least twenty bits of information as compared with a random agent. Add in the movements that humans make every day, the planning, the eating, the conversations, and we have thousands of bits of information; we occupy a tiny slice of possibility space, and have extreme optimisation powers.
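To spell out the arithmetic behind those twenty bits (the at-least-two-options-per-step assumption is mine): a random agent produces one specific $n$-action sequence with probability at most $k^{-n}$ when each step offers $k$ alternatives, so reliably executing it demonstrates at least $n \log_2 k$ bits of optimisation.

```latex
% Bits of optimisation demonstrated by reliably executing one specific
% n-step action sequence, assuming k >= 2 alternatives at each step:
\[
  \Pr[\text{random agent produces the sequence}] \le k^{-n}
  \quad\Longrightarrow\quad
  \text{bits demonstrated} \ge \log_2\!\left(k^{\,n}\right) = n \log_2 k .
\]
% With n = 20 and k = 2: at least 20 bits per walk to the door.
```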

And that degree of optimisation is similar to that of animals. On top of it, humans have longer-term planning, rather extreme social skills, and the ability to integrate vast amounts of new information and come up with new ideas and designs. We direct the world to utterly tiny areas of outcome space.

Maybe we’ve directed the world into the top $2^{-1000}$ of outcome space, or an even smaller slice. We’re stunningly rational.
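The step from bits to slices of outcome space uses the standard optimisation-power measure: steering the world into a target set of measure $\mu$ corresponds to $\log_2(1/\mu)$ bits, so thousands of bits mean a slice of measure $2^{-1000}$ or less.

```latex
% Optimisation power in bits (after Yudkowsky's "Measuring Optimization
% Power"): hitting a target set T of measure \mu(T) in outcome space.
\[
  \text{bits}(T) = \log_2 \frac{1}{\mu(T)}
  \quad\Longrightarrow\quad
  \text{bits}(T) \ge 1000 \;\Longleftrightarrow\; \mu(T) \le 2^{-1000} .
\]
```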

In conclusion

So, are humans rational? Well, we deploy a stunningly high amount of optimisation power to achieve our goals. On the other hand, we fall stunningly short of those goals.

So our rationality depends on what we’re being compared with (e.g. a random policy vs unbounded rationality) and what measure is being used (e.g. optimisation power vs expected utility).

There is no simple intrinsic measure of “how rational” humans are. In questions where this is relevant, the comparison and the measure have to be specified.