Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally

A perfect rationalist is an ideal thinker. Rationality, however, is not the same as perfection. Perfection guarantees optimal outcomes; rationality only guarantees that the agent will, to the utmost of its abilities, reason optimally. Optimal reasoning cannot, unfortunately, guarantee optimal outcomes, because most agents are not omniscient or omnipotent. They are instead fundamentally and inexorably limited. To be fair to such agents, the definition of rationality that we use should take this into account. Therefore, a rational agent will be defined as: an agent that, given its capabilities and the situation it is in, thinks and acts optimally. Although rationality does not guarantee the best outcome, a rational agent will most of the time achieve better outcomes than an irrational agent.

Rationality is often considered to be split into three parts: normative, descriptive and prescriptive rationality.

Normative rationality describes the laws of thought and action: how a perfectly rational agent with unlimited computing power, omniscience, etc. would reason and act. Normative rationality essentially describes what is meant by the phrase "optimal reasoning". Of course, for limited agents true optimal reasoning is impossible, and they must instead settle for bounded optimal reasoning: the closest approximation to optimal reasoning that is possible given the information available to the agent and the computational abilities of the agent. The laws of thought and action (what we currently believe optimal reasoning involves) are:

  • Logic - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you're using: number theory, geometry, predicate logic, etc.

  • Probability theory - is essentially an extension of logic. Probability is a measure of how likely a proposition is to be true, given everything else that you already believe. Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes' Theorem, which tells you exactly how your probability for a statement should change as you encounter new information. Probability is viewed from one of two perspectives: the Bayesian perspective, which sees probability as a measure of uncertainty about the world, and the Frequentist perspective, which sees probability as the proportion of times the event would occur in a long run of repeated experiments. LessWrong follows the Bayesian perspective.
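The update rule that Bayes' Theorem prescribes can be sketched in a few lines. The scenario and all of the numbers below are invented purely for illustration:

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Toy example with invented numbers: a test for a rare condition.

def bayes_update(prior, likelihood, false_positive_rate):
    """Return the posterior P(H|E) after observing evidence E."""
    # P(E) via the law of total probability.
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Prior belief: 1% of people have the condition.
# The test detects it 90% of the time, with a 5% false-positive rate.
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # roughly 0.154
```

Even with a positive test, the posterior stays low, because the prior was low: exactly the kind of shift in belief that the theorem forces you to compute rather than guess.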

  • Decision theory - is about choosing actions based on the utility function of the possible outcomes. The utility function is a measure of how much you desire a particular outcome. The expected utility of an action is simply the average utility of the action's possible outcomes weighted by the probability that each outcome occurs. Decision theory can be divided into three parts:

    • Normative decision theory studies what an ideal agent (a perfect agent, with infinite computing power, etc.) would choose.

    • Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose.

    • Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
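The expected-utility rule described above reduces to a weighted average. A minimal sketch, with hypothetical outcomes and invented utility numbers:

```python
# Expected utility: average the utility of each possible outcome,
# weighted by the probability that the outcome occurs.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical choice: carry an umbrella or not, with a 30% chance of rain.
take_umbrella = [(0.3, 5), (0.7, -1)]   # dry if it rains, slightly encumbered if not
leave_it_home = [(0.3, -10), (0.7, 2)]  # soaked if it rains, unburdened if not

# The rational (expected-utility-maximising) choice:
best = max([take_umbrella, leave_it_home], key=expected_utility)
print(expected_utility(take_umbrella))  # 0.8
print(expected_utility(leave_it_home))  # -1.6
```

Note that the rule picks the umbrella even though, most of the time, it will not rain: expected utility weighs the severity of outcomes, not just their likelihood.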

Descriptive rationality describes how people normally reason and act. It is about understanding how and why people make decisions. As humans, we have certain limitations and adaptations which quite often make it impossible for us to be perfectly rational in the normative sense of the word. It is because of this that we must satisfice, or approximate the normative rationality model as best we can. We engage in what's called bounded, ecological or grounded rationality. Unless explicitly stated otherwise, 'rationality' in this compendium will refer to rationality in the bounded sense of the word. In this sense, the most rational choice for an agent depends on the agent's capabilities and the information that is available to it. The most rational choice for an agent is not necessarily the most certain, true or right one. It is just the best one given the information and capabilities that the agent has. This means that an agent that satisfices or uses heuristics may actually be reasoning optimally, given its limitations, even though satisficing and heuristics are shortcuts that are potentially error prone.
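As a toy illustration of why satisficing can be the bounded-optimal strategy, compare an exhaustive search with a "first option that is good enough" rule. The option values, threshold, and the idea of counting evaluations as the agent's cost are all invented for this sketch:

```python
# Satisficing: accept the first option whose value clears a threshold,
# rather than paying the cost of evaluating every option.

def satisfice(options, threshold):
    """Return (choice, options_evaluated) for a 'good enough' rule."""
    for evaluated, value in enumerate(options, start=1):
        if value >= threshold:
            return value, evaluated
    return max(options), len(options)  # nothing cleared the bar: take the best seen

def optimise(options):
    """Exhaustive search: always finds the best, always pays the full cost."""
    return max(options), len(options)

options = [3, 7, 9, 10, 4]
print(satisfice(options, threshold=7))  # (7, 2): slightly worse value, much cheaper
print(optimise(options))                # (10, 5): best value, full search cost
```

When each evaluation is expensive (in time, attention or computation), the cheaper heuristic can beat the exhaustive search on net, which is the sense in which a satisficing agent may be reasoning optimally given its limitations.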

Prescriptive or applied rationality is essentially about how to bring the thinking of limited agents closer to what the normative model stipulates. It is described by Baron in Thinking and Deciding, p. 34:

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

The set of behaviours and thoughts that we consider rational for limited agents is much larger than that for perfect, i.e. unlimited, agents. This is because for limited agents we need to take into account not only those thoughts and behaviours which are optimal for the agent, but also those which allow the limited agent to improve its reasoning. It is for this reason that we consider curiosity, for example, to be rational, as it often leads to situations in which agents improve their internal representations or models of the world. We also consider wise resource allocation to be rational, because limited agents only have a limited amount of resources available to them. Therefore, if they can get a greater return on investment on the resources that they do use, then they will be more likely to get closer to thinking optimally in a greater number of domains.

We also consider the rationality of particular choices to be in a state of flux. This is because the rationality of a choice depends on the information that an agent has access to, and this is something which is frequently changing. This highlights an important fact: if an agent is suboptimal in its ability to gather information, then it will often end up with different information than an agent with optimal information-gathering abilities would. In short, this is a problem for the suboptimal (irrational) agent, as it means that its rational choices will diverge further from those of the perfect normative agent. The closer an agent's rational choices are to those of a perfect normative agent, the more rational that agent is.

It can also be said that the rationality of an agent depends in large part on the agent's truth-seeking abilities. The more accurate and up to date the agent's view of the world, the closer its rational choices will be to those of the perfect normative agent. It is because of this that a rational agent is one that is inextricably tied to the world as it is. It does not see the world as it wishes it, fears it or has seen it to be, but instead constantly adapts to and seeks out feedback from interactions with the world. The rational agent is attuned to the current state of affairs. One other very important characteristic of rational agents is that they adapt. If the situation has changed and the previously rational choice is no longer the one with the greatest expected utility, then the rational agent will adapt and change its preferred choice to the one that is now the most rational.

The other important part of rationality, besides truth seeking, is maximising the ability to actually achieve important goals. These two domains of rationality, truth seeking and goal reaching, are referred to as epistemic and instrumental rationality.

  • Epistemic rationality is about the ability to form true beliefs. It is governed by the laws of logic and probability theory.

  • Instrumental rationality is about the ability to actually achieve the things that matter to you. It is governed by the laws of decision theory. In a formal context, it is known as maximizing "expected utility". It is important to note that it is about more than just reaching goals. It is also about discovering how to develop optimal goals.

As you move further and further away from rationality you introduce more and more flaws, inefficiencies and problems into your decision-making and information-gathering algorithms. These flaws and inefficiencies are the cause of irrational or suboptimal behaviors, choices and decisions. Humans are innately irrational in a large number of areas, which is why, in large part, improving our rationality is just about mitigating, as much as possible, the influence of our biases and irrational propensities.

If you wish to truly understand what it means to be rational, then you must also understand what rationality is not. This is important because the concept of rationality is often misconstrued by the media. An epitome of this misconstrual is the character of Spock from Star Trek. This character does not treat rationality as if it were about optimality, but instead as if it means that:

  • You can expect everyone to react in a reasonable, or what Spock would call rational, way. This is irrational because it leads to faulty models and predictions of other people's behaviors and thoughts.

  • You should never make a decision until you have all the information. This is irrational because humans are not omniscient or omnipotent. Their decisions are constrained by many factors, like the amount of information they have, the cognitive limitations of their brains and the time available for them to make decisions. This means that a person, if they are to act rationally, must often make predictions and assumptions.

  • You should never rely on intuition. This is irrational because intuition (System 1 thinking) does have many advantages over conscious and effortful deliberation (System 2 thinking), mainly its speed. Although intuitions can be wrong, to disregard them entirely is to hinder yourself immensely. If your intuitions are based on multiple interactions that are similar to the current situation, and these interactions had short feedback cycles, then it is often irrational not to rely on your intuitions.

  • You should not become emotional. This is irrational because, while it is true that emotions can cause you to use less rational ways of thinking and acting, i.e. ways that are optimised for ancestral or previous environments, it does not mean that we should try to eradicate emotions in ourselves. This is because emotions are essential to rational thinking and normal social behavior. An aspiring rationalist should remember four points in regards to emotions:

    • The rationality of emotions depends on the rationality of the thoughts and actions that they induce. It is rational to feel fear when you are actually in a situation where you are threatened. It is irrational to feel fear in situations where you are not being threatened. If your fear compels you to take suboptimal actions, then and only then is that fear irrational.

    • Emotions are the wellspring of value. A large part of instrumental rationality is about finding the best way to achieve your fundamental human needs. A person who can fulfill these needs through simple methods is more rational than someone who can't. In this particular area people tend to become a lot less rational as they age. As adults we should be jealous of the innocent exuberance that comes so naturally to children. If we are not as exuberant as children, then we should wonder at how we have become so shackled by our own self-restraint.

    • Emotional control is a virtue, but denial is not. Emotions can be considered a type of internal feedback. A rational person does not consciously ignore or avoid feedback, as this would mean limiting or distorting the information that they have access to. It is possible that a rational agent may need to mask or hide their emotions for reasons related to societal norms and status, but they should not repress emotions unless there is some overriding rational reason to do so. If a person volitionally represses their emotions because they wish to perpetually avoid them, then this is both irrational and cowardly.

    • By ignoring, avoiding and repressing emotions you are limiting the information that you exhibit, which means that other people will not know how you are actually feeling. In some situations this may be helpful, but it is important to remember that people are not mind readers. Their ability to model your mind and your emotional state depends on the information that they know about you and the information, e.g. body language, vocal inflections, that you exhibit. If people do not know that you are vulnerable, then they cannot know that you are courageous. If people do not know that you are in pain, then they cannot know that you need help.

  • You should only value quantifiable things like money, productivity, or efficiency. This is irrational because it means that you are reducing the amount of potentially valuable information that you consider. The only reason a rational person ever reduces the amount of information that they consider is because of resource or time limitations.

Related Materials


  • Rationality—the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality, and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.

  • Maths/Logic—Math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you're using: number theory, geometry, predicate logic, etc.

  • Probability theory—a field of mathematics which studies random variables and processes.

  • Bayes' theorem—a law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate.

  • Bayesian—Bayesian probability theory is the math of epistemic rationality; Bayesian decision theory is the math of instrumental rationality.

  • Bayesian probability—represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability, which represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating "With confidence 60%, this event contains the true outcome", whereas a frequentist interpretation would view it as stating "Over 100 trials, we should observe event X approximately 60 times." The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10.

  • Bayesian decision theory—a decision theory which is informed by Bayesian probability.

  • Decision theory—the study of principles and algorithms for making correct decisions, that is, decisions that allow an agent to achieve better outcomes with respect to its goals.

  • Hollywood rationality—what Spock does, not what actual rationalists do.


Suggested posts to write:

  • Bounded/ecological/grounded rationality—I couldn't find a suitable resource for this on LessWrong.

Academic Books:

Popular Books:


Notes on decisions I have made while creating this post

(these notes will not be in the final draft):

  • I agree denotationally, but object connotatively, with 'rationality is systematized winning', so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with 'winning'. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb's problem, but the idea of winning is normally extended into everything. I also believe that I have basically covered the idea with: "Rationality maximizes expected performance, while perfection maximizes actual performance."

  • I left out the 12 virtues of rationality because I don't like perfectionism. If perfectionism were not among the virtues, then I would have included them. My problem with perfectionism is that having it as a goal makes you liable to premature optimization and to developing tendencies towards suboptimal levels of adaptability. Everything I have read in complexity theory, for example, makes me think that perfectionism is not really a good thing to be aiming for, at least in uncertain and complex situations. I think truth seeking should be viewed as an optimization process. If it doesn't allow you to become more optimal, then it is not worth it. I have a post about this here.

  • I couldn't find an appropriate link for bounded/ecological/grounded rationality.