The Number Choosing Game: Against the existence of perfect theoretical rationality

In order to ensure that this post delivers what it promises, I have added the following content warnings:

Content Notes:

Pure Hypothetical Situation: The claim that perfect theoretical rationality doesn’t exist is restricted to a purely hypothetical situation. No claim is being made that this applies to the real world. If you are only interested in how things apply to the real world, then you may be disappointed to find out that this is an exercise left to the reader.

Technicality Only Post: This post argues that perfect theoretical rationality doesn’t exist due to a technicality. If you were hoping for this post to deliver more, well, you’ll probably be disappointed.

Contentious Definition: This post (roughly) defines perfect rationality as the ability to maximise utility. This is based on Wikipedia, which defines a rational agent as an agent that “always chooses to perform the action with the optimal expected outcome for itself from among all feasible actions”.

We will define the Number Choosing Game as follows. You name any single finite number x. You then gain x utility and the game ends. You can only name a finite number; naming infinity is not allowed.

Clearly, the agent that names x+1 is more rational than the agent that names x (and behaves the same in every other situation). However, there does not exist a completely rational agent, because there does not exist a number that is higher than every other number. Instead, the agent who picks 1 is less rational than the agent who picks 2, who is less rational than the agent who picks 3, and so on without end. There exists an infinite sequence of increasingly rational agents, but no agent who is perfectly rational within this scenario.
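To make the structure of that argument concrete, here is a minimal Python sketch (the names `utility` and `strictly_better_choice` are just illustrative helpers, not part of the game's definition): whatever finite number you pick, a strictly better legal choice exists, so no choice can be the perfectly rational one.

```python
def utility(x):
    """In the Number Choosing Game you simply receive the number you name."""
    return x

def strictly_better_choice(x):
    """For any finite choice x, x + 1 is also a finite, legal choice."""
    return x + 1

x = 10 ** 100  # however large a number you pick...
assert utility(strictly_better_choice(x)) > utility(x)  # ...naming x + 1 always beats it
```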

Furthermore, this hypothetical doesn’t take place in our universe, but in a hypothetical universe where we are all celestial beings with the ability to choose any number, however large, without any additional time or effort, no matter how long it would take a human to say that number. Since this statement doesn’t appear to have been clear enough (judging from the comments), we are explicitly considering a theoretical scenario and no claims are being made about how this might or might not carry over to the real world. In other words, I am claiming that the existence of perfect rationality does not follow purely from the laws of logic. If you are going to be difficult and argue that this isn’t possible and that even hypothetical beings can only communicate a finite amount of information, we can imagine that there is a device that provides you with utility for as long as you speak, exactly equal to the utility you lose by having to go to the effort of speaking, so that overall you are indifferent to the required speaking time.

In the comments, MattG suggested that the issue was that this problem assumed unbounded utility. That’s not quite the problem. Instead, we can imagine that you can name any number less than 100, but not 100 itself. Further, as above, either saying a long number doesn’t cost you utility or you are compensated for it. Regardless of whether you name 99 or 99.9 or 99.9999999, you are still making a suboptimal choice. But if you never stop speaking, you don’t receive any utility at all.
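The same "there is always a better choice" structure survives the bound. Here is a hedged sketch of that bounded variant (the helper `a_better_choice` is mine, not part of the original setup): for any admissible x below 100, the midpoint between x and 100 is also admissible and strictly better, so the supremum of 100 is never attained.

```python
def utility(x):
    """Bounded variant: you receive the number you name, which must be below 100."""
    assert x < 100
    return x

def a_better_choice(x):
    """The midpoint between x and 100 is still below 100 and strictly larger than x."""
    return (x + 100) / 2

x = 99.9999999
print(utility(x), utility(a_better_choice(x)))  # 99.9999999 vs 99.99999995
```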

I’ll admit that in our universe there is a perfectly rational option, one which balances speaking time against the utility you gain, given that we only have a finite lifetime and that you want to avoid dying in the middle of speaking the number, which would result in no utility gained. However, it is still notable that a perfectly rational being cannot exist within this hypothetical universe. How exactly this result applies to our universe isn’t clear, but that’s the challenge I’ll set for the comments. Are there any realistic scenarios where the non-existence of perfect rationality has important practical applications?

Furthermore, there isn’t an objective line between rational and irrational. You or I might consider someone who chose the number 2 to be stupid. Why not at least go for a million or a billion? But even someone who chose a billion could just as easily have gained a billion billion billion utility. No matter how high a number they choose, they could always have gained much, much more without any difference in effort.

I’ll finish by providing some examples of other games. I’ll call the first game the Exploding Exponential Coin Game. We can imagine a game where you can choose to flip a coin any number of times. Initially you have 100 utility. Every time it comes up heads, your utility triples, but if it comes up tails, you lose all your utility. Furthermore, let’s assume that this agent isn’t going to raise the Pascal’s Mugging objection. We can see that the agent’s expected utility increases the more times they flip the coin (each additional flip multiplies it by 3 × 1/2 = 1.5), but if they commit to flipping it an unlimited number of times, they can’t possibly gain any utility. Just as before, they have to pick a finite number of times to flip the coin, but again there is no objective justification for stopping at any particular point.
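A quick sketch of that expected-utility calculation, under the assumption that committing to n flips pays 100 × 3^n if every flip lands heads (probability 0.5^n) and 0 otherwise:

```python
def expected_utility(n_flips, start=100.0):
    """Expected utility of committing in advance to n_flips flips: you keep anything
    only if all n flips land heads (probability 0.5**n), in which case each heads
    has tripled your starting utility."""
    return (0.5 ** n_flips) * (start * 3 ** n_flips)  # = start * 1.5 ** n_flips

for n in (0, 1, 2, 5, 10):
    print(n, expected_utility(n))  # 100, 150, 225, 759.375, 5766.50390625
# Expected utility grows without bound in n, while the chance of keeping anything
# at all (0.5**n) shrinks toward zero.
```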

Another example I’ll call the Unlimited Swap game. At the start, one agent has an item worth 1 utility and another has an item worth 2 utility. At each step, the agent holding the item worth 1 utility can choose to accept the situation and end the game, or can swap items with the other player. If they choose to swap, then the player who now has the 1 utility item has the opportunity to make the same choice. In this game, waiting forever is actually an option. If your opponents all have finite patience, then this is the best option. However, there is a chance that your opponent has infinite patience too; in this case you’ll both miss out on the 1 utility, as you will wait forever. I suspect that an agent could do well by having some chance of waiting forever, but also some chance of stopping after a high finite number of swaps. Increasing this finite number will always make you do better, but again, there is no maximum waiting time.
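To illustrate the "more patience does better" point, here is a minimal simulation sketch. It assumes each agent plays a simple threshold strategy, swapping until they have already swapped a fixed number of times and then accepting; the function name and the `max_rounds` guard are my own additions, not part of the game's definition.

```python
def play_swap_game(patience_a, patience_b, max_rounds=10_000):
    """Simulate the Unlimited Swap game for two threshold ('patience') strategies.
    Agent A starts holding the 1-utility item; B holds the 2-utility item.
    Whoever holds the 1-utility item either accepts (ending the game) or swaps.
    Returns the pair (utility_a, utility_b)."""
    holder = "a"                  # who currently holds the 1-utility item
    swaps = {"a": 0, "b": 0}
    for _ in range(max_rounds):   # guard so two very patient agents don't loop forever
        patience = patience_a if holder == "a" else patience_b
        if swaps[holder] >= patience:
            # The holder accepts the 1-utility item; the other keeps the 2-utility item.
            return (1, 2) if holder == "a" else (2, 1)
        swaps[holder] += 1
        holder = "b" if holder == "a" else "a"
    return (0, 0)                 # neither ever settled within the guard

print(play_swap_game(3, 5))  # (1, 2): the less patient agent ends up with 1
print(play_swap_game(5, 3))  # (2, 1): out-waiting your opponent wins the 2-utility item
```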

(This seems like such an obvious result that I imagine there’s extensive discussion of it within the game theory literature somewhere. If anyone has a good paper, that would be appreciated.)

Link to part 2: Consequences of the Non-Existence of Rationality