# What is Rational?

Eliezer defines rationality as follows:

Instrumental rationality: systematically achieving your values.

…

Instrumental rationality, on the other hand, is about steering reality: sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”

Extrapolating from the above definition, we can conclude that an act is rational if it causes you to achieve your goals/win. The issue with this definition is that we cannot evaluate the rationality of an act until after observing the consequences of that act; we cannot determine whether an act is rational without first carrying it out. This is not a very useful definition, as one may want to use the rationality of an act as a guide.

Another definition of rationality is the one used in AI when talking about rational agents:

For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

A percept sequence is basically the sequence of all perceptions the agent has had from inception to the moment of action. The above definition is useful, but I don’t think it is without issue: what is rational for two different agents A and B, with the exact same goals and in the exact same circumstances, differs. Suppose A intends to cross a road; A checks both sides of the road, ensures it’s clear, and then attempts to cross. However, a meteorite strikes at that exact moment, and A is killed. A is not irrational for attempting to cross the road, given that they did not know of the meteorite (and thus could not have accounted for it). Suppose B has more knowledge than A, and thus knows that there is a substantial delay between meteor strikes in the vicinity; B crosses after A and does so safely. We cannot reasonably say B is more rational than A.
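A minimal sketch of the textbook definition above: the agent picks whichever action maximizes its expected performance measure under the beliefs its percept sequence supports. The beliefs, payoffs, and names here are my own illustrative assumptions, not from the quoted text.

```python
# Sketch: a "rational agent" as an expected-performance maximizer.
# All numbers and names below are illustrative assumptions.

def expected_value(payoff, beliefs):
    """Expected performance of an action whose payoff depends on the outcome.
    `beliefs` maps outcome -> probability; `payoff` maps outcome -> value."""
    return sum(p * payoff(outcome) for outcome, p in beliefs.items())

def rational_choice(actions, beliefs):
    """Select the action name with the highest expected performance."""
    return max(actions, key=lambda name: expected_value(actions[name], beliefs))

# Agent A's percept sequence supports these beliefs; meteorites are unmodeled.
beliefs_A = {"clear": 0.99, "car_coming": 0.01}
actions = {
    "cross": lambda outcome: 10 if outcome == "clear" else -100,
    "wait": lambda outcome: 0,
}
print(rational_choice(actions, beliefs_A))  # "cross": EV 8.9 beats EV 0
```

On this reading, A’s choice to cross is rational relative to A’s beliefs even though the actual outcome is a loss, which is exactly the tension the paragraph above describes.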

The above scenario doesn’t break our intuitions of what is rational, but what about other scenarios? What about the gambler who does not know of the gambler’s fallacy, and believes that because the die hasn’t rolled an odd number for the past $n$ turns, it would definitely roll odd this time (after all, the probability of not rolling odd $n$ times is $2^{-n}$)? Are they then rational for betting the majority of their funds on the die rolling odd? Letting what’s rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call “folk rationality” (doing what you think would lead to success). Barring a few exceptions (extremes of emotion, compromised mental states, etc.), most humans are folk rational. However, this folk rationality isn’t what I refer to when I say “rational”.
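The fallacy itself is easy to demonstrate by simulation (this quick experiment is my own illustration, not part of the original argument): a run of $n$ evens has probability $2^{-n}$ before it happens, but it tells us nothing about the next roll.

```python
import random

# Illustrative simulation: for a fair die, the chance of rolling odd is 1/2
# regardless of how long the preceding streak of even rolls has been.

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]

# P(next roll is odd | previous 3 rolls were all even)
after_even_streak = [
    rolls[i] % 2 == 1
    for i in range(3, len(rolls))
    if all(r % 2 == 0 for r in rolls[i - 3:i])
]
print(sum(after_even_streak) / len(after_even_streak))  # close to 0.5, not 1
```

So the gambler’s belief is simply false; the question the paragraph raises is whether acting on it can still count as rational given what the gambler knows.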

How then do we define what is rational to avoid the two issues I highlighted above?

• There is also one more level of rationality, which is often assumed but not presented. I would call it the inner definition of rationality:

“Rationality is a behavior which could be described by a short set of simple rules.” The rules include math, Bayes’ theorem, utility functions, the virtues of rationality, you name it.

The main problem is the following: does the “inner definition” of rationality correspond to the “outer definition”, that is, rationality as winning? In other words, does knowing the correct short set of rules result in consistent winning?

If we think it does, then all rationality manuals are useful, as by installing the correct set of short rules, we will attain perfect rationality and start to win.

However, if winning requires something extremely complex, like a very large neural net, which can’t be described by a short set of rules, we need to update our inner definition of rationality.

For example, a complex neural net may win at cat recognition, but it doesn’t know any set of rules for recognizing a cat.
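To make the “short set of simple rules” concrete, here is one such rule, Bayes’ theorem, in a few lines. The dice scenario and its numbers are my own illustrative assumptions:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative: prior 0.5 that a die is biased toward odd; we observe an odd
# roll, with P(odd | biased) = 0.8 and P(odd | fair) = 0.5.
posterior = bayes_update(0.5, 0.8, 0.5)
print(round(posterior, 3))  # 0.615
```

The contrast the comment draws is that this rule is short and explicit, whereas whatever a large cat-recognizing network does admits no comparably short statement.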

• It’s been almost four months since I wrote this thread. I’ve started to see the outline of an answer to my question. Over the course of the next year, I will begin documenting it.

• Being lucky is not being rational. However, it is undoubtedly true that winning a lottery is mostly a positive outcome, and that it requires you to have purchased a ticket, which is a decision. Something that looks only at outcomes would applaud the decision to buy the ticket (perhaps unconditionally).

The definition of instrumental rationality is most commonly invoked when criticising those that employ a complex methodology of choosing correctly, when the methodology itself can be criticised or the agent had evidence that could have been construed as a reason to abandon the methodology. Criticism from “before” instrumental rationality would focus on making an error in the application of a methodology, or on not having any methodology at all to make the decision. The common sentiment from these can seem like “have a methodology and apply it correctly”. And it seems clear that there are better and worse methodologies, and that one should try to apply the best available. And it seems “I had a methodology and applied it” doesn’t make one “rational” (more like “dogmatic”).

It seems one could have a reasonable chance of being “rational” even with bad methodologies, if one actively switches to and upgrades one’s “carry-on” methodology whenever one encounters new ones. It seems also that as the argument goes on, the focus on metacognition increases. This can also be seen to frame the previous criticisms in a new light. It’s not that unmethodological decisions are “irrational” per se, but making them likely means that you failed to pick up a good methodology earlier that you could have applied here to great success. So rather than “having” a methodology, it’s more important to “pick up” methodologies, with it being less essential whether you currently have a good methodology or not. With consistent pick-ups you should in the future have a high-quality methodology, but that is the effect rather than the means.

• From Jonathan Baron’s Thinking and Deciding:

The best kind of thinking, which we shall call rational thinking, is whatever kind of thinking best helps people achieve their goals. If it should turn out that following the rules of formal logic leads to eternal happiness, then it is “rational thinking” to follow the laws of logic (assuming that we all want eternal happiness). If it should turn out, on the other hand, that carefully violating the laws of logic at every turn leads to eternal happiness, then it is these violations that we shall call “rational.” When I argue that certain kinds of thinking are “most rational,” I mean that these help people achieve their goals. Such arguments could be wrong. If so, some other sort of thinking is most rational.

• It may lead to what I call “folk rationality” (doing what you think would lead to success). Barring a few exceptions (extremes of emotion, compromised mental states, etc.), most humans are folk rational. However, this folk rationality isn’t what I refer to when I say “rational”.

How about “doing what you can figure out would lead to success”? The gambler could figure out the gambler’s fallacy, but the person crossing the road couldn’t figure out the meteorite.

In harder problems like Newcomb’s Problem or Counterfactual Mugging, there are several layers of “figuring out” leading to different answers, and there’s no substitute for using intelligence to choose between them. So to define what’s rational, we need to define what’s intelligent. People are working on that, but don’t expect an answer soon :-)

• What about the gambler who does not know of the gambler’s fallacy, and believes that because the die hasn’t rolled an odd number for the past $n$ turns, it would definitely roll odd this time (after all, the probability of not rolling odd $n$ times is $2^{-n}$)? Are they then rational for betting the majority of their funds on the die rolling odd? Letting what’s rational depend on the knowledge of the agent involved leads to a very broad (and possibly useless) notion of rationality. It may lead to what I call “folk rationality” (doing what you think would lead to success).

I think it depends on where the knowledge comes from, right?

If he just has an instinct that a 6 should come up again, but can’t explain where that instinct comes from or defend that belief in any kind of rational way other than “it feels right”, then he’s probably not being rational.

If he actually did an experiment and rolled a die a bunch of times, and just by coincidence it actually seemed to come out that whenever a 6 hadn’t come up for a while it would show up, then it might be a rational belief, even though it is incorrect. Granted, if he had better knowledge of statistical methods and such, he probably could have run the experiment in a better way, but I think if someone gathers actual data and uses it to arrive at an incorrect belief and then acts on that belief, he’s still behaving rationally. The same applies if you developed your beliefs through other rational methods, like logical deduction based on other beliefs you had already established through rational means, or probabilistic beliefs based on some combination of other things you believe to be true and observations, etc.

A rational agent cannot actually know everything; all a rational agent can do is act on the best information it has. And you can only spend so much in the way of resources and time trying to perfect that information before acting on it.

So, I would say rationality is defined by:

A- how did you arrive at your beliefs about the state of the world, and

B- did you act in a way that would maximize your chances of “winning”, if your beliefs formed via rational methods are correct
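The two-part test above can be sketched as code. The `is_rational` helper and the road-crossing numbers are my own illustrative assumptions, not the commenter’s:

```python
# Sketch of the proposed two-part test:
#   A: were the beliefs formed by a rational method?
#   B: given those beliefs, does the act maximize the chance of "winning"?

def win_prob(act, beliefs):
    """Probability of winning under the agent's beliefs (outcome -> prob)."""
    return sum(p for outcome, p in beliefs.items() if act(outcome) == "win")

def is_rational(act, candidate_acts, beliefs, beliefs_formed_rationally):
    if not beliefs_formed_rationally:                      # criterion A
        return False
    best = max(win_prob(a, beliefs) for a in candidate_acts)
    return win_prob(act, beliefs) >= best                  # criterion B

# The road-crosser: looking both ways is a rational belief-forming method,
# and crossing maximizes the chance of winning under those beliefs, even
# though an unmodeled meteorite makes the actual outcome a loss.
beliefs = {"road_clear": 0.99, "meteorite": 0.01}
cross = lambda outcome: "win" if outcome == "road_clear" else "lose"
wait = lambda outcome: "lose"  # waiting never gets the agent across
print(is_rational(cross, [cross, wait], beliefs, True))  # True
```

Note that the verdict depends only on the belief-forming process and the act’s expected success, never on the realized outcome, which is what separates this proposal from the purely outcome-based definition criticised earlier in the thread.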

• If he just has an instinct that a 6 should come up again, but can’t explain where that instinct comes from or defend that belief in any kind of rational way other than “it feels right”, then he’s probably not being rational.

Maybe in the specific example of randomness, but I don’t think you can say the general case of “it feels so” is indefensible. This same mechanism is used for the really complicated black-box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in, such as social interactions or nature, this isn’t an absurd belief.

In fact, knowing that these black-box intuitions exist means they have to be included in our information about the world, so “give high credence to the black box when it says something” may be the best strategy if one’s ability for analytic reasoning is insufficient to determine strategies with better results than that.

• Maybe in the specific example of randomness, but I don’t think you can say the general case of “it feels so” is indefensible. This same mechanism is used for the really complicated black-box intuitive reasoning that underpins any trained skill. So in areas one has a lot of experience in, or areas which are evolutionarily keyed in, such as social interactions or nature, this isn’t an absurd belief.

Eh. Maybe, but I think that any idea which seriously underpins your actions and other belief systems in an important way should be something you can justify in a rational way. It doesn’t mean you always need to think about it that way; some things become “second nature” over time, but you should be able to explain the rational underpinnings if asked.

If you’re talking about a trained skill, “I’ve been fixing cars for 20 years, and in my experience when you do x you tend to get better results than when you do y” is a perfectly rational reason to have a belief. So is “That’s what it said in my medical school textbook”, etc.

But, in my ex­pe­rience, peo­ple who put too much faith in their “black boxes” and don’t ever think through the ba­sis of their be­liefs tend to be­have in sys­tem­at­i­cally ir­ra­tional ways that prob­a­bly harm them.

• It’s funny: I think this is probably always true as a guideline (that you should try to justify all your ideas) but might always break down in practice (all your ideas probably can’t ever be fully justified, because of Agrippa’s trilemma: they’re either justified in terms of each other or not justified, and if they are justified in terms of other ideas, they eventually are either circularly justified, continue on into infinite regress, or are justified by things that are unjustified). We might gain some ground by separating out ideas from evidence, and say we accept as axiomatic anything that is evidenced by inference until we gain additional facts that lend context that resituates our model so that it can include previous observations… something like that. Or it might be that we just have to grandfather in some rules to avoid that Gödelian stuff. Thoughts?

• Yes, that is a very good point. My current view is that the reason for this is a confusion between seeing knowledge as based on rationality when it is in reality based on experience. Rationality is the manipulation of basic experiential building blocks, and these “belief” blocks might correspond to reality or not. With the scientific method this correspondence has been clarified to such an extent that it seems as if knowledge is generated purely through rationality, but that is because we don’t tend to follow our assumptions to the limits you are describing in your comment. If we check our assumptions, and then the assumptions behind our assumptions, etc., we will reach our fundamental presuppositions.

• Yeah, that’s a good point; on some level, any purely logical system always has to start with certain axioms that you can’t prove within that system, and in the real world that’s probably even more true.

I guess, ideally, you would want to be able to at least identify which of your ideas are axioms, and keep an eye on them in some sense to make sure that at least they don’t end up conflicting with other axioms?

• All humanimal attempts to define rationality are irrational!