What is Rational?

Eliezer defines rationality as follows:

Epistemic rationality: systematically improving the accuracy of your beliefs.

Instrumental rationality: systematically achieving your values.


Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

Extrapolating from the above definition, we can conclude that an act is rational if it causes you to achieve your goals (to “win”). The issue with this definition is that we cannot evaluate the rationality of an act until after observing the consequences of that act; we cannot determine whether an act is rational without first carrying it out. This is not a very useful definition, as one may want to use the rationality of an act as a guide to action.

Another definition of rationality is the one used in AI when talking about rational agents:

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

A percept sequence is basically the sequence of all perceptions the agent has had from inception to the moment of action. The above definition is useful, but I don’t think it is without issue: what is rational for two different agents A and B, with the exact same goals, in the exact same circumstances, can differ. Suppose A intends to cross a road; A checks both sides of the road, ensures it’s clear, and then attempts to cross. However, a meteorite strikes at that exact moment, and A is killed. A is not irrational for attempting to cross the road, given that they did not know of the meteorite (and thus could not have accounted for it). Suppose B has more knowledge than A, and thus knows that there is a substantial delay between meteor strikes in the vicinity; B crosses after A and makes it safely. We cannot reasonably say B is more rational than A.
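The AI-textbook definition above can be sketched as expected-utility maximization: the agent scores each action by the utility of its possible outcomes, weighted by the probabilities the agent assigns given its percept sequence. The following is a minimal, hypothetical sketch; the road-crossing outcomes, probabilities, and utility values are illustrative assumptions, not part of the definition itself.

```python
def select_action(actions, outcomes, utility):
    """Return the action that maximizes expected utility.

    `outcomes(action)` returns (probability, outcome) pairs encoding the
    agent's beliefs given its percept sequence and built-in knowledge.
    """
    def expected_utility(action):
        return sum(p * utility(o) for p, o in outcomes(action))
    return max(actions, key=expected_utility)

# Toy road-crossing example: by everything A has perceived, crossing is safe.
actions = ["cross", "wait"]

def outcomes(action):
    if action == "cross":
        # A's beliefs include no meteorites, so crossing looks near-certain to succeed.
        return [(0.999, "arrive"), (0.001, "accident")]
    return [(1.0, "still waiting")]

utility = {"arrive": 10, "accident": -1000, "still waiting": 0}.get

print(select_action(actions, outcomes, utility))  # prints "cross"
```

On these numbers, crossing has expected utility 0.999 × 10 + 0.001 × (−1000) ≈ 8.99, versus 0 for waiting, so the agent crosses. The point of the sketch is that rationality here is judged against the agent's beliefs at decision time, not against the actual outcome.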

The above scenario doesn’t break our intuitions of what is rational, but what about other scenarios? What about the gambler who knows not of the gambler’s fallacy, and believes that because the die hasn’t rolled an odd number for the past n turns, it will definitely roll odd this time (after all, the probability of not rolling odd n times in a row is (1/2)^n, which is vanishingly small for large n)? Are they then rational for betting the majority of their funds on the die rolling odd? Letting what’s rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call “folk rationality” (doing what you think would lead to success). Barring a few exceptions (extremes of emotion, compromised mental states, etc.), most humans are folk rational. However, this folk rationality isn’t what I refer to when I say “rational”.
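The gambler’s mistake can be checked directly: for a fair die (an assumption for this sketch), the chance that the next roll is odd is 1/2 regardless of how many non-odd rolls preceded it. This small simulation, with hypothetical parameters, estimates the probability of an odd roll immediately after a streak of five non-odd rolls.

```python
import random

# Simulate many rolls of a fair six-sided die (hypothetical setup).
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(1_000_000)]

# After each run of 5 consecutive non-odd (even) rolls, record whether
# the NEXT roll is odd.
next_after_streak = []
run = 0
for i in range(len(rolls) - 1):
    run = run + 1 if rolls[i] % 2 == 0 else 0
    if run >= 5:
        next_after_streak.append(rolls[i + 1] % 2 == 1)

p = sum(next_after_streak) / len(next_after_streak)
print(f"P(odd | 5 non-odd rolls) ~= {p:.3f}")  # stays close to 0.5
```

The estimate stays near 0.5: the streak changes nothing about the next roll, so a gambler betting most of their funds on “definitely odd this time” is reasoning from a false belief, however internally consistent it feels.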

How, then, do we define what is rational so as to avoid the two issues I highlighted above?