Bounded rationality abounds in models, not explicitly defined

Last night, I did not register a patent to cure all forms of cancer. Even though it's probably possible to figure such a cure out from basic physics and maybe a download of easily available biology research papers.

Can we then conclude that I don't want cancer to be cured – or, alternatively, that I am pathologically modest and shy, and thus don't want the money and fame that would accrue?

No. The correct and obvious answer is that I am boundedly rational. And though an unboundedly rational agent – and maybe a superintelligence – could figure out a cure for cancer from first principles, poor limited me certainly can't.

Modelling bounded rationality is tricky, and it is often accomplished by artificially limiting the action set. Many economic models feature agents that are assumed to be fully rational, but who are restricted to choosing between a tiny set of possible goods or lotteries. They don't have the options of developing new technologies, rousing the population to rebellion, going online and fishing around for functional substitutes, founding new political movements, begging, befriending people who already have the desired goods, setting up GoFundMe pages, and so on.
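To make the pattern concrete, here is a minimal sketch in Python (the action names and utility numbers are made up for illustration, not taken from any actual economic model): the optimisation step is perfectly rational, and all of the "boundedness" is smuggled in through the action set we hand the agent.

```python
def rational_choice(action_set, utility):
    """A 'fully rational' agent: simply pick the utility-maximising action."""
    return max(action_set, key=utility)

# Hypothetical utilities for the goal "obtain the desired good".
utility = {
    "buy good A": 5,
    "buy good B": 3,
    "set up a GoFundMe page": 8,
    "develop a new technology": 20,
}.get

# The bounded agent is modelled as perfectly rational over a tiny action set...
restricted = ["buy good A", "buy good B"]
print(rational_choice(restricted, utility))    # -> buy good A

# ...while the very same optimiser, handed a wider action set, behaves quite
# differently. The "bound" lives entirely in the restriction, not in the agent.
unrestricted = restricted + ["set up a GoFundMe page", "develop a new technology"]
print(rational_choice(unrestricted, utility))  # -> develop a new technology
```

The point of the sketch is that nothing in `rational_choice` models a cognitive limitation; whoever builds the model decides what the agent is "allowed" to consider.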

There's nothing wrong with modelling bounded rationality via action set restriction, as long as we're aware of what we're doing. In particular, we can't naively conclude that, because such a model fits with observation, humans actually are fully rational agents. Though economists are right that humans are more rational than we might naively suppose, thinking of us as rational, or "mostly rational", is a colossally erroneous way of thinking. In terms of achieving our goals, as compared with a rational agent, we are barely above agents acting randomly.

Another problem with using small action sets is that it may lead us to think that an AI might be similarly restricted. That is unlikely to be the case; an intelligent robot walking around would certainly have access to actions that no human would, and possibly ones we couldn't easily imagine.

Finally, though action set reduction can work well in toy models, it is wrong about the world and about humans. So as we make more and more sophisticated models, there will come a time when we have to discard it, and tackle head-on the difficult issue of defining bounded rationality properly. And it's mainly for this last point that I'm writing this post: we'll never see the necessity of better ways of defining bounded rationality unless we realise that modelling it via action set restriction is a) common, b) useful, and c) wrong.