# Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally

A perfect rationalist is an ideal thinker. Rationality, however, is not the same as perfection. Perfection guarantees optimal outcomes; rationality only guarantees that the agent will, to the utmost of its abilities, reason optimally. Optimal reasoning cannot, unfortunately, guarantee optimal outcomes, because most agents are neither omniscient nor omnipotent: they are fundamentally and inexorably limited. To be fair to such agents, the definition of rationality that we use should take this into account. A rational agent will therefore be defined as: an agent that, given its capabilities and the situation it is in, thinks and acts optimally. Although rationality does not guarantee the best outcome, a rational agent will, most of the time, achieve better outcomes than an irrational one.

Rationality is often considered to be split into three parts: normative, descriptive and prescriptive rationality.

Normative rationality describes the laws of thought and action: how a perfectly rational agent with unlimited computing power, omniscience, etc. would reason and act. Normative rationality essentially spells out what is meant by the phrase “optimal reasoning”. Of course, for limited agents true optimal reasoning is impossible; they must instead settle for bounded optimal reasoning, the closest approximation to optimal reasoning that is possible given the information available to the agent and its computational abilities. The laws of thought and action (what we currently believe optimal reasoning involves) are:

• Logic - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.

• Probability theory - is essentially an extension of logic. Probability is a measure of how likely a proposition is to be true, given everything else that you already believe. Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes’ Theorem, which tells you exactly how your probability for a statement should change as you encounter new information. Probability is viewed from one of two perspectives: the Bayesian perspective, which sees probability as a measure of uncertainty about the world, and the frequentist perspective, which sees probability as the proportion of times the event would occur in a long run of repeated experiments. Less Wrong follows the Bayesian perspective.
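Bayes’ Theorem can be sketched as a short computation. The function below is a minimal illustration; the prior and the two likelihoods are invented for the example, not taken from any source.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) via Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of the evidence under both hypotheses.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a 1% prior, evidence that is 90% likely
# if the hypothesis is true and 5% likely if it is false.
posterior = bayes_update(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.05)
print(round(posterior, 3))  # 0.154: the evidence raises the probability from 1% to about 15%
```

Note how the update is driven by the ratio of the two likelihoods, not by either one alone: strong evidence is evidence that is much more probable under the hypothesis than under its negation.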

• Decision theory - is about choosing actions based on the utilities of the possible outcomes. A utility function is a measure of how much you desire a particular outcome. The expected utility of an action is simply the average utility of the action’s possible outcomes, weighted by the probability that each outcome occurs. Decision theory can be divided into three parts:

• Normative decision theory studies what an ideal agent (a perfect agent, with infinite computing power, etc.) would choose.

• Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose.

• Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
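The expected-utility rule described above can be sketched in a few lines. The probabilities and utilities below are hypothetical, chosen only to show the probability-weighted average and the resulting choice.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical actions: a certain payoff versus a 50/50 gamble.
safe = [(1.0, 50.0)]
gamble = [(0.5, 120.0), (0.5, 0.0)]

# Decision theory recommends the action with the higher expected utility.
best = max([("safe", safe), ("gamble", gamble)],
           key=lambda action: expected_utility(action[1]))
print(best[0])  # prints "gamble": 60 expected utility beats 50
```

The same weighted average works for any number of outcomes per action; only the (probability, utility) pairs change.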

Descriptive rationality describes how people normally reason and act. It is about understanding how and why people make decisions. As humans, we have certain limitations and adaptations which quite often make it impossible for us to be perfectly rational in the normative sense of the word. It is because of this that we must satisfice or approximate the normative rationality model as best we can. We engage in what’s called bounded, ecological or grounded rationality. Unless explicitly stated otherwise, ‘rationality’ in this compendium will refer to rationality in the bounded sense of the word. In this sense, the most rational choice for an agent depends on the agent’s capabilities and the information that is available to it. The most rational choice for an agent is not necessarily the most certain, true or right one; it is just the best one given the information and capabilities that the agent has. This means that an agent that satisfices or uses heuristics may actually be reasoning optimally, given its limitations, even though satisficing and heuristics are shortcuts that are potentially error prone.

Prescriptive or applied rationality is essentially about how to bring the thinking of limited agents closer to what the normative model stipulates. It is described by Baron in Thinking and Deciding, p. 34:

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

The set of behaviours and thoughts that we consider rational for limited agents is much larger than that for perfect, i.e. unlimited, agents. This is because for limited agents we need to take into account not only those thoughts and behaviours which are optimal for the agent, but also those which allow the limited agent to improve its reasoning. It is for this reason that we consider curiosity, for example, to be rational: it often leads to situations in which agents improve their internal representations or models of the world. We also consider wise resource allocation to be rational, because limited agents only have a limited amount of resources available to them. If they can get a greater return on investment on the resources that they do use, then they are more likely to approach optimal thinking in a greater number of domains.

We also consider the rationality of particular choices to be in a state of flux. This is because the rationality of a choice depends on the information that an agent has access to, and this is frequently changing. This highlights an important fact: if an agent is suboptimal in its ability to gather information, then it will often end up with different information than an agent with optimal information-gathering abilities would. This is a problem for the suboptimal (irrational) agent, because its rational choices will differ more from those of the perfect normative agent than a rational agent’s would. The closer an agent’s rational choices are to those of a perfect normative agent, the more rational the agent is.

It can also be said that the rationality of an agent depends in large part on the agent’s truth-seeking abilities. The more accurate and up to date the agent’s view of the world, the closer its rational choices will be to those of the perfect normative agent. It is because of this that a rational agent is inextricably tied to the world as it is. It does not see the world as it wishes it, fears it or has seen it to be, but instead constantly adapts to and seeks out feedback from interactions with the world. The rational agent is attuned to the current state of affairs. One other very important characteristic of rational agents is that they adapt: if the situation has changed and the previously rational choice is no longer the one with the greatest expected utility, then the rational agent will change its preferred choice to the one that is now the most rational.

The other important part of rationality, besides truth seeking, is that it is about maximising the ability to actually achieve important goals. These two parts or domains of rationality, truth seeking and goal reaching, are referred to as epistemic and instrumental rationality.

• Epistemic rationality is about the ability to form true beliefs. It is governed by the laws of logic and probability theory.

• Instrumental rationality is about the ability to actually achieve the things that matter to you. It is governed by the laws of decision theory. In a formal context, this is known as maximizing “expected utility”. It is important to note that it is about more than just reaching goals: it is also about discovering how to develop optimal goals.

As you move further and further away from rationality, you introduce more and more flaws, inefficiencies and problems into your decision-making and information-gathering algorithms. These flaws and inefficiencies are the cause of irrational or suboptimal behaviors, choices and decisions. Humans are innately irrational in a large number of areas, which is why, in large part, improving our rationality is about mitigating, as much as possible, the influence of our biases and irrational propensities.

If you wish to truly understand what it means to be rational, then you must also understand what rationality is not. This is important because the concept of rationality is often misconstrued by the media. An epitome of this misconstrual is the character of Spock from Star Trek. This character does not treat rationality as being about optimality, but instead as if it means that:

• You can expect everyone to react in a reasonable, or what Spock would call rational, way. This is irrational because it leads to faulty models and predictions of other people’s behaviors and thoughts.

• You should never make a decision until you have all the information. This is irrational because humans are neither omniscient nor omnipotent. Their decisions are constrained by many factors, like the amount of information they have, the cognitive limitations of their brains and the time available for them to make decisions. This means that a person, if they are to act rationally, must often make predictions and assumptions.

• You should never rely on intuition. This is irrational because intuition (system 1 thinking) does have advantages over conscious and effortful deliberation (system 2 thinking), mainly its speed. Although intuitions can be wrong, to disregard them entirely is to hinder yourself immensely. If your intuitions are based on multiple interactions that are similar to the current situation, and those interactions had short feedback cycles, then it is often irrational not to rely on them.

• You should not become emotional. This is irrational because, while it is true that emotions can cause you to use less rational ways of thinking and acting, i.e. ways that are optimised for ancestral or previous environments, it does not mean that we should try to eradicate emotions in ourselves. Emotions are essential to rational thinking and normal social behavior. An aspiring rationalist should remember four points with regard to emotions:

• The rationality of emotions depends on the rationality of the thoughts and actions that they induce. It is rational to feel fear when you are actually in a situation where you are threatened, and irrational to feel fear in situations where you are not. If your fear compels you to take suboptimal actions, then and only then is that fear irrational.

• Emotions are the wellspring of value. A large part of instrumental rationality is about finding the best way to achieve your fundamental human needs. A person who can fulfill these needs through simple methods is more rational than someone who can’t. In this particular area people tend to become a lot less rational as they age. As adults we should be jealous of the innocent exuberance that comes so naturally to children. If we are not as exuberant as children, then we should wonder at how we have become so shackled by our own self-restraint.

• Emotional control is a virtue, but denial is not. Emotions can be considered a type of internal feedback. A rational person does not consciously ignore or avoid feedback, as doing so would limit or distort the information that they have access to. A rational agent may need to mask or hide their emotions for reasons related to societal norms and status, but they should not repress emotions unless there is some overriding rational reason to do so. If a person volitionally represses their emotions because they wish to perpetually avoid them, then this is both irrational and cowardly.

• By ignoring, avoiding and repressing emotions you limit the information that you exhibit, which means that other people will not know how you are actually feeling. In some situations this may be helpful, but it is important to remember that people are not mind readers. Their ability to model your mind and your emotional state depends on the information that they know about you and the information, e.g. body language and vocal inflections, that you exhibit. If people do not know that you are vulnerable, then they cannot know that you are courageous. If people do not know that you are in pain, then they cannot know that you need help.

• You should only value quantifiable things like money, productivity, or efficiency. This is irrational because it means that you are reducing the amount of potentially valuable information that you consider. The only reason a rational person ever reduces the amount of information that they consider is resource or time limitations.

Related Materials

Wikis:

• Rationality - the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality, and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.

• Maths/Logic - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.

• Probability theory - a field of mathematics which studies random variables and processes.

• Bayes’ theorem - a law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate.

• Bayesian - Bayesian probability theory is the math of epistemic rationality; Bayesian decision theory is the math of instrumental rationality.

• Bayesian probability - represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability, which represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.” The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false, and it cannot be true 6 times out of 10.

• Bayesian decision theory - a decision theory which is informed by Bayesian probability.

• Decision theory - the study of principles and algorithms for making correct decisions, that is, decisions that allow an agent to achieve better outcomes with respect to its goals.

• Hollywood rationality - what Spock does, not what actual rationalists do.

Posts:

Suggested posts to write:

• Bounded/ecological/grounded rationality - I couldn’t find a suitable resource for this on Less Wrong.

Academic Books:

Popular Books:

Talks:

Notes on decisions I have made while creating this post

(these notes will not be in the final draft):

• I agree denotationally, but object connotatively with ‘rationality is systemized winning’, so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with ‘winning’. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb’s problem, but the idea of winning is normally extended into everything. I also believe that I have basically covered the idea with: “Rationality maximizes expected performance, while perfection maximizes actual performance.”

• I left out the 12 virtues of rationality because I don’t like perfectionism; if perfectionism were not among the virtues, then I would have included them. My problem with perfectionism is that having it as a goal makes you liable to premature optimization and to developing tendencies toward suboptimal levels of adaptability. Everything I have read in complexity theory, for example, makes me think that perfectionism is not really a good thing to be aiming for, at least in uncertain and complex situations. I think truth seeking should be viewed as an optimization process: if it doesn’t allow you to become more optimal, then it is not worth it. I have a post about this here.

• I couldn’t find an appropriate link for bounded/ecological/grounded rationality.

• Rationality maximizes expected performance

Hm. Since this is a core definition, I have an urge to examine it very carefully. First, “performance” is a bit fuzzy; would you mind if I replaced it with utility? We would get “rationality maximizes expected utility”. I think that I have a few questions about that.

• Rationality maximizes. That implies that every rational action must maximize utility; anything that does not maximize utility is not (fully) rational. In particular, satisficing is not rational.

• Rationality maximizes expected utility. A great deal of heavy lifting is done by this word, and there are some traps here. For example, if you define utility as “that which you want” and add a little bit about revealed preferences, we get caught in a loop: you maximize what you want, and how do we know what you want? Why, it is what you maximize. In general, almost every action maximizes some utility and, moreover, there is no requirement for the utility function to be stable across time, so this gets complicated quite fast.

• Rationality maximizes expected utility. At issue here are risk considerations. You can wave them away by saying that one should maximize risk-adjusted utility, but in practice this is a pretty big blind spot. Faced with estimated distributions of future utility, most people would pick the one with the highest mean (the maximum expected value), but that ignores the width of the distributions, which is rarely a good idea.
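The point about distribution width can be made concrete. The two utility distributions below are invented for illustration: they have exactly the same mean, so a pure expected-value maximizer is indifferent between them, yet their spreads differ enormously.

```python
import statistics

# Two hypothetical utility distributions with equal means but different risk.
steady = [48, 50, 52, 50]
swingy = [-50, 0, 100, 150]

# Both have a mean of 50, so expected value alone cannot distinguish them.
assert statistics.mean(steady) == statistics.mean(swingy) == 50

# Their population standard deviations tell a very different story.
print(statistics.pstdev(steady))  # about 1.41
print(statistics.pstdev(swingy))  # about 79.06
```

A decision rule that looked only at the mean would treat these as interchangeable; any risk-sensitive rule (e.g. penalizing variance) would not.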

Take curiosity. It’s an accepted rationalist virtue. And yet I don’t see how it maximizes expected utility.

• I’m not sure if this is correct, but my best guess is:

It maximizes utility insofar as most goals are better achieved with more information, and people tend to systematically underestimate the value of collecting more information or suffer from biases that prevent them from acquiring it. In other words, curiosity is virtuous because humans are bounded and flawed agents, and it helps rectify the biases that we fall prey to. Just as being quick to update on evidence is a virtue, and scholarship is a virtue.

• There are a couple of problems here. First is the usual thing forgotten on LW: costs. “More information” is worthwhile iff its benefits outweigh the costs of acquiring it. Second, your argument implies that, say, attempting to read the entire Wikipedia (or Encyclopaedia Britannica if you are worried about stability) from start to finish would be a rational thing to do. Would it?

• No, it isn’t. Being curious is a good heuristic for most people, because most people are in the region where gathering information is cheaper than its expected value. I don’t think we disagree on anything concrete: I don’t claim that it’s rational in itself a priori, but it is a fairly good heuristic.
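The cost/benefit point being argued here can be sketched as a value-of-information calculation. All the numbers below are hypothetical; the sketch compares choosing now against choosing after learning the true state, and the difference is the most one should ever pay to gather the information.

```python
def value_of_perfect_information(p_good, u_act_good, u_act_bad, u_skip):
    """Expected gain from learning the true state before choosing,
    compared with picking the best fixed action under uncertainty."""
    # Best you can do committing to one action now.
    act_now = max(p_good * u_act_good + (1 - p_good) * u_act_bad, u_skip)
    # Best you can do if you first learn whether the state is good or bad.
    act_informed = (p_good * max(u_act_good, u_skip)
                    + (1 - p_good) * max(u_act_bad, u_skip))
    return act_informed - act_now

# Hypothetical venture: 40% chance of +100 utility, 60% chance of -60;
# doing nothing is worth 0.
vopi = value_of_perfect_information(0.4, 100.0, -60.0, 0.0)
print(vopi)  # 36.0: gathering the information is worthwhile iff it costs less
```

Under these made-up numbers, curiosity pays whenever the information costs under 36 utility, and not otherwise, which is exactly the "region" claim in the comment above.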

• This is a good point about taking into account the costs. I want to cover this idea in my third post, which I am still writing, but it will probably be something like Principle 3 – your rationality depends on the usefulness of your internal representation of the world. My view is that truth seeking should be viewed as an optimization process: if it doesn’t allow you to become more optimal, then it is not worth it. I have a post about this here.

• The quote probably should have had an ‘often’ in it. I wasn’t actually trying to define rationality in that quote; I was just trying to differentiate it from perfection. I have rewritten the first paragraph based on your feedback.

• I have rewritten the first paragraph based on your feedback.

Mea culpa, but the rewrite doesn’t look great to me. Before, your first paragraph had some zing. People like me could and did find fault with it, sure, but at least it was energetic. And now the first two sentences are followed by a lot of hemming and hawing which sounds defensive and is entirely uninspiring.

“Ensuring that resource usage and behaviour/thought coordination is directed towards the fulfillment of the agent’s goals” was already being taught by senior slave-drivers to junior slave-drivers when the pyramids were being built. In trying to avoid rationality being just prediction, you made it just effectiveness.

I don’t have a good suggestion for you; in fact, I’m not sure that the so-called epistemic rationality (aka science) and instrumental rationality (aka pragmatism and keeping your eye on the ball) can be usefully joined together into a single concept. But since you are writing a compendium, you probably should come up with a reasonable definition for rationality, since it is, y’know, a core concept.

• I changed it again.

In terms of the definition, it is in the title. As to what it is: I am basically trying to convey the idea that rationality is optimal thinking. Although, I suppose I am also happy with how it’s defined in this book. If you think the below definitions are better, let me know.

• Rationality: the property of a system which does the “right thing” given what it knows.

• It is an agent that acts to maximize its expected performance measure. That is, it does the “right thing”.

• For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

• An omniscient agent knows the actual outcome of its actions and can act accordingly, but omniscience is impossible in reality. It is important to distinguish rationality from omniscience. This is because we should not blame an agent for failing to take into account something it could not perceive, or for failing to take an action that it is incapable of taking.

• In terms of the definition, it is in the title.

That’s not really a definition: you just shifted the entire burden onto the word “optimally”. A basic use of a definition is to see if something fits it: if we defined a class A, is object z a member of that class? So let’s say I’m considering some action. Is it rational? Well, it is if it’s optimal. Err… and what does that mean? To answer, I need to define optimality, and that is not trivial. And if you say that optimality is maximizing (expected) utility, we’re back to your original definition, which I poked at and you abandoned.

the property of a system which does the “right thing”

That’s exactly the same thing: replacing one word with another (or two) without clarifying anything.

whatever action is expected to maximize its performance measure

That’s maximizing expected utility, again.

we should not blame an agent

• Before we get too deeply into problems with whatever definition I might use, I want to make sure that you agree that what I am trying to say is right. Once that is confirmed, then I can think more about how to say it well.

This is basically what I am trying to say in the post. ‘Rationality’ is ‘optimal reasoning’, which we know of as normative rationality, i.e. the laws of thought and action, or what the perfect agent would do. A caveat is that when we talk about rationality in regard to limited agents, we are really talking about bounded rationality; hence, rationality in this case is really ‘bounded optimal reasoning’. So for limited agents, rationality is about the reasoning that best approximates the results of normative rationality. Also, for limited agents we consider those types of thinking that lead to more optimal reasoning in the future to be rational as well. This is the basis for epistemic and instrumental rationality, and is why curiosity is rational: curiosity often leads to better maps, better maps often lead to better decisions, and therefore we consider curiosity to be rational. In terms of costs, the goal of rationality in limited agents is to best approximate normative rationality. This requires the highest return on investment on the resources that you make use of. Basically, the ratio of expected utility to resource usage should be high, since we only have a limited amount of resources that we can make use of. If there is an alternative way to spend the same resources which has a higher ratio, then you are not as close as possible to normative rationality.

• Before we get too deeply into problems with whatever definition I might use, I want to make sure that you agree that what I am trying to say is right.

A good idea. I’m not nitpicking about wording; at issue is actually meaning.

First, as I mentioned before, I am not sure how to combine epistemic and instrumental rationality together into one useful concept. I am not saying it’s impossible, just that nothing comes to my mind.

One issue, for example, is that they belong to different categories: one is about knowing and the other is about doing. Yes, you can trivially stick them together by saying that epistemic rationality is just instrumental rationality with the goal of constructing a good map, but I don’t know what you gain by that. Constructing a good map is, basically, the scientific method, and it is not a decision theory.

Second, I have problems with “what the perfect agent would do”. An immediate issue is that the answer to that is “you don’t know and you will never know” for any noticeably complex problem[1], especially one that concerns the messy real world and not, for example, the neat and well-defined world of mathematics. That’s an issue because you set it up as a standard and as a limit to which “bounded optimal reasoning” should converge. But if you don’t know what it is, you don’t know what you should converge to, and you don’t have a good method to adjudicate competing claims about what is rational.

There are also questions about defining rationality as optimality. Optimality typically involves maximizing some measure, but in a lot of situations what matters is not how to reach the maximum, but rather what it is that you optimize. Is it “rational” to arrive at an optimum for the wrong thing? How do you know what to optimize for? Handwaving about utility remains handwaving, because the only utility functions I have seen which actually produce a specific numerical estimate are economic utility functions, and they solely care about money.

Moreover, you rarely have the luxury of optimizing for one thing. Typically you have multiple conflicting goals with a mix of different costs to all actions, so deciding how you are going to balance goals and summarize costs is very important, and I have no idea what the “rational” way to go about it is.

All in all, I am not satisfied by the “perfect agent” or “optimize utility” definitions of rationality. The perfect agent approach is essentially WWJD (What Would Jesus Do), only without the religious baggage, and optimizing utility doesn’t tell me how to actually, in practice, do that.

Notice that keeping epistemic and instrumental rationality separate works much better. The criterion for epistemic rationality is the match between the map and the reality: this is specific and observable. The criterion for instrumental rationality is whether you reach your goals at a reasonable cost. This is more complicated because of the uncertainty of the future: good decisions don’t always lead to good outcomes, and good outcomes do not necessarily follow from good decisions. But even here there are things we can look at and handles we can grab and manipulate. But “emulate perfection” or “maximize utility”? I have no idea how to even start doing that.

[1] This is commonly held as entirely obvious on LW, but only in the context of AI boxing :-)

• First, as I mentioned before, I am not sure how to combine epistemic and instrumental rationality together into one useful concept. I am not saying it’s impossible, just that nothing comes to my mind. One issue, for example, is that they belong to different categories: one is about knowing and the other is about doing.

If we forget about epistemic and instrumental rationality for a moment and think about what reasoning is aimed at achieving, that is, why we care about it, then I think we can get closer to understanding how epistemic and instrumental rationality might work together as part of something larger. They are, of course, still different techniques.

Do you think that the be­low ar­eas de­scribe what it means to rea­son well?

• It is about ac­tu­ally achiev­ing de­sired out­comes, so if you want x to oc­cur it is about mak­ing sure that you ini­ti­ate a se­ries of thoughts and ac­tions that lead to x oc­cur­ring in­stead of y or z or a plethora of other pos­si­bil­ities.

• It is about achiev­ing out­comes at a rea­son­able cost. If you can achieve your de­sired out­comes with cheaper costs, then you will be able to achieve more out­comes.

• It is about choosing the best outcomes to achieve. We all have limited resources and to achieve any outcomes we need to use resources. This means that we can only choose a limited number of outcomes to pursue. Making this choice wisely is what this area is about.

• It is about valu­ing the out­comes ap­pro­pri­ately. This would be about mak­ing your val­ues co­her­ent and cor­rectly valu­ing the things that mat­ter so that they get pri­or­ity.

Epistemic/instrumental rationality is really about certain types of skills that allow us to do well in the above areas (I think I covered all of them). There might also be more types of skills: I have a general idea about what another one might be, but I am not sure how many others there might be. However, I want the compendium to cover existing and established ideas only. That is why I am referring to epistemic and instrumental rationality and not anything else.

All in all, I am not satis­fied by the “perfect agent” or “op­ti­mize util­ity” defi­ni­tions of ra­tio­nal­ity. The perfect agent ap­proach is es­sen­tially WWJD—What Would Je­sus Do—only with­out the re­li­gious bag­gage, and op­ti­miz­ing util­ity doesn’t tell me how to ac­tu­ally, in prac­tice do that.

What do you think about defining rationality as a property that is attributed by agents to certain thoughts and behaviours? This would mean that it is not only bounded by the agent’s abilities and the information it has, but also by its understanding of what it means to be rational. Essentially, it would mean that ‘rationality’ is subjective. To avoid the fallacy of the grey there needs to be some objective way to judge different agents’ understanding of what it means to be rational. This objective way is basically our best overall guess at what perfectly optimal reasoning, or optimal reasoning for humans, would be. For humans, this way is the scientific method, with the current body of work pointing to logic, probability and decision theory as having the closest answers on what it means to reason optimally, i.e. be rational. These answers aren’t necessarily correct, due to negative pragmatism etc. They are just our current, best and most informed guesses.

There are also questions about defining rationality as optimality. Optimality typically involves maximizing some measure, but in a lot of situations what matters is not how to reach the maximum, but rather what it is that you optimize. Is it “rational” to arrive at an optimum for the wrong thing?

I always thought that this was a part of what it means to be instrumentally rational: basically, to have optimal goals as well. This is my problem with instrumental rationality as it’s talked about on Less Wrong: is it about achieving what you value at a reasonable cost, is it about making your values coherent and in line with what you innately value, or is it a combination of the two? I have always felt that instrumental rationality is a bit too overreaching and encompassing. Do you think I should split it into two types of instrumental rationality, one for costs and one for value alignment, or am I not interpreting it correctly?

No­tice that keep­ing epistemic and in­stru­men­tal ra­tio­nal­ity sep­a­rate works much bet­ter.

I will have other posts where I go into de­tail on each of these sep­a­rately. I think that they are sep­a­rate skills or ar­eas of ex­per­tise, but I also think that there should be a base rea­son for why we should care about them.

• and think about what rea­son­ing is aimed at achieving

So do you want to define “ra­tio­nal­ity” as a kind of rea­son­ing? Rea­son­ing is an opaque men­tal pro­cess and, for ex­am­ple, does not in­clude act­ing which is a large part of in­stru­men­tal ra­tio­nal­ity. Pro­cras­ti­na­tion is a clas­sic LW sin, but it’s not a rea­son­ing prob­lem. And what would be non-ra­tio­nal rea­son­ing be­sides straight­for­ward log­i­cal er­rors? The great ma­jor­ity of think­ing peo­ple do through­out the day is not for­mal­iz­able into a neat sys­tem of propo­si­tions and con­clu­sions.

It is about ac­tu­ally achiev­ing de­sired out­comes …at a rea­son­able cost

Yes, that’s the defi­ni­tion of in­stru­men­tal ra­tio­nal­ity.

It is about choos­ing the best out­comes to achieve. … It is about valu­ing the out­comes ap­pro­pri­ately.

Hold on, that’s new. Are you claiming that (proper) val­ues are a part of ra­tio­nal­ity and that ra­tio­nal­ity will tell you what your val­ues should be? I think I am go­ing to loudly ob­ject to that. Maybe you can provide an ex­am­ple to show what you mean?

is re­ally about cer­tain types of skills that al­low us to do well

Hm, that’s an in­ter­est­ing ap­proach. Then you’d con­sider ra­tio­nal­ity a kind of skill—a skill like writ­ing es­says or pro­gram­ming? This is prob­a­bly worth ex­plor­ing fur­ther.

Essen­tially, it would mean that ‘ra­tio­nal­ity’ is sub­jec­tive.

Not sure I want to go that way. You wouldn’t have many counterarguments to a bloke who declares himself perfectly rational as he goes to pray to Jesus so that he wins the lottery. And once you introduce an “objective way to judge” there doesn’t seem to be any point to the subjectivity any more.

I always thought that this was a part of what it means to be in­stru­men­tally ra­tio­nal. Ba­si­cally to have op­ti­mal goals as well.

See above—goals are a di­rect func­tion of val­ues and I have very strong doubts that ra­tio­nal­ity can tell you what your val­ues should be.

Humans don’t have coherent values. In fact, I don’t think you can make a system of values complex enough to deal with real life fully coherent (people who come close to that are usually called “crazy fanatics”). Instead, what people do is trade off different values against each other and come up with an end-result balance where they are willing to sacrifice some A, B, and C but gain X, Y, and Z. As a crude approximation you can think about it as summing different vectors and acting according to where the summed vector points.
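The “summing vectors” approximation can be sketched in a few lines. Everything here (the weights, the directions, the example values) is invented purely for illustration, not a claim about how real human values are measured:

```python
# Crude sketch of value trade-offs as vector summation: each value pulls
# in some direction with some weight, and the agent acts in the direction
# of the weighted sum. All names and numbers are hypothetical.

def sum_value_vectors(values):
    """values: list of (weight, (dx, dy)) pairs; returns the summed pull."""
    total_x = sum(w * dx for w, (dx, dy) in values)
    total_y = sum(w * dy for w, (dx, dy) in values)
    return (total_x, total_y)

# Two values pulling in opposite directions partially cancel out:
values = [
    (2.0, (1.0, 0.0)),   # career: strong pull "east"
    (1.0, (-1.0, 0.0)),  # leisure: weaker pull "west"
    (1.0, (0.0, 1.0)),   # health: pull "north"
]
print(sum_value_vectors(values))  # -> (1.0, 1.0)
```

The point of the sketch is just that no single value wins outright; the action taken reflects the balance of all of them at once.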

I think that to what de­gree ra­tio­nal­ity ap­plies here is a hard ques­tion. On the one hand, there is no ba­sis for ra­tio­nal­ity to say “you need to value this and not value that”. On the other hand, val­ues and their weights are not sta­ble across time, and part of ra­tio­nal­ity is jug­gling short-term and long-term de­sires and con­se­quences—usu­ally point­ing out that it’s not smart to pay with a lot of long-term pain for a jolt of short-term plea­sure. That’s where this whole bit about “imag­ine your­self as a very smart, calm, ca­pa­ble hu­man be­ing—what would she choose?” comes in.

So, yes, it’s com­pli­cated. I have is­sues with listen­ing to “It’s not ra­tio­nal to value/​de­sire this”, but I have much less is­sues with “The price for this ac­tion that you want to do is re­ally high, are you quite sure you want to pay it, that doesn’t look ra­tio­nal”. I am not sure where the proper bound­ary is.

• So do you want to define “ra­tio­nal­ity” as a kind of rea­son­ing? Rea­son­ing is an opaque men­tal pro­cess and, for ex­am­ple, does not in­clude act­ing which is a large part of in­stru­men­tal ra­tio­nal­ity.

When I use the word reasoning, I really mean both the system 1 and system 2 cognitive processes. By rational I basically mean reasoning (system 1 and 2) done well, where “done well” is defined based on your most trusted source. For us this is science, so logic, probability, decision theory etc. for system 2.

Hold on, that’s new. Are you claiming that (proper) val­ues are a part of ra­tio­nal­ity and that ra­tio­nal­ity will tell you what your val­ues should be? I think I am go­ing to loudly ob­ject to that. Maybe you can provide an ex­am­ple to show what you mean?

I don’t know what “proper” would mean. I am talk­ing about co­her­ence which means that its “proper­ness”, I sup­pose, de­pends on its con­text, i.e. the other pre-ex­ist­ing val­ues. I will give you some ex­am­ples. I will as­sume that you already know the differ­ence be­tween want­ing and lik­ing.

• Ex­ces­sive Want­ing—an ex­am­ple is drug ad­dic­tion: “Only ‘want­ing’ sys­tems sen­si­tize, and so ‘want­ing’ can in­crease and be­come quite in­tense due to sen­si­ti­za­tion, re­gard­less of whether a drug still re­mains ‘liked’ af­ter many re­peated uses”.

• Not lik­ing things that you should or could—ex­am­ples are bad ex­pe­riences that cause aver­sion con­di­tion­ing to some­thing that you used to or could like. My gen­eral view is that if you don’t like some­thing and you could then this is a limi­ta­tion.

• Not want­ing things you like—ugh fields are an ex­am­ple of this.

• Conflicting wants—this is often inevitable; like you say, value is complex. But I think it is important to look at what the fundamental human values or needs are and try to align with those. If you don’t, then in general there is going to be a greater amount of conflict.

I would need to write a full post on the de­tails, but that is just a gen­eral idea of what I mean. You also con­sider the val­ues of oth­ers that you are in­ter­con­nected with and care about.

Hm, that’s an in­ter­est­ing ap­proach. Then you’d con­sider ra­tio­nal­ity a kind of skill—a skill like writ­ing es­says or pro­gram­ming? This is prob­a­bly worth ex­plor­ing fur­ther.

I don’t see how you can view it as anything but a skill. This is because epistemic rationality, for example, is only valuable instrumentally. It helps make more rational decisions, but the truer beliefs it causes need to be applied to actually be useful and improve your rationality. If you spend lots of effort creating true beliefs and then compartmentalize that knowledge and don’t apply it, you have effectively gained nothing in terms of rationality. That’s my view anyway. I don’t know how many people would agree. An example is Aumann: he knows a lot about rationality, but I don’t think he is rational because it looks to me like he believes in non-overlapping magisteria.

So, yes, it’s com­pli­cated. I have is­sues with listen­ing to “It’s not ra­tio­nal to value/​de­sire this”, but I have much less is­sues with “The price for this ac­tion that you want to do is re­ally high, are you quite sure you want to pay it, that doesn’t look ra­tio­nal”. I am not sure where the proper bound­ary is.

I agree with you on this and your other points on how value is complex. I think that to say “it is rational to value/desire this” there needs to be a ‘because’ after that statement. No value/desire is rational or irrational in and of itself. It is only rational or irrational in a context; that is, because of its relation to other values or the costs to fulfil it etc.

Right now, I am think­ing that I need to make the base con­cepts of ra­tio­nal­ity more solid be­fore I can move into what ra­tio­nal­ity is for this com­pendium.

This is my first attempt at defining things. My goal is to define things in a programmatic kind of way. This means that the concepts should follow: single responsibility, loose coupling, YAGNI etc. Let me know what you think.

The goal of the defi­ni­tions is just to high­light the right ar­eas in con­cept space. They are drafts and will ob­vi­ously need more de­tail. I would also need to sub­mit them as posts and see if oth­ers agree.

I am thinking that there should be two basic areas: system 1 and system 2 rationality, where rationality, in its most basic form, means done well (this will need to be expanded upon). The goal of the two areas is to define what it is we are referring to when we say that something is rational or irrational. There are two areas so that we can distinguish rationality/irrationality in formal reasoning vs. your intuitions, or what you actually do vs. what you think you should do.

There are also skills, or general topics, which describe groups of techniques and methods that can be used to improve your rationality in one or both of the two areas. Using these skills means that you apply them using volitional effort. It is noted, however, that if you use these skills often enough they are likely to become embedded in your system 1 processes.

There may be more skills, but I think the main ones are be­low:

• Epistemic ra­tio­nal­ity—true be­liefs and all that

• In­stru­men­tal ra­tio­nal­ity - (re­stricted to rea­son­able costs)

• Value co­her­ence ra­tio­nal­ity—I gave some ex­am­ples, but it ba­si­cally means notic­ing when your val­ues and de­sires are out of al­ign­ment or could be­come so if you did some ac­tion.

• Distribu­tive ra­tio­nal­ity—this is ba­si­cally what you are talk­ing about in the above quote. Once you have a semi-suffi­cient val­u­a­tion sys­tem in place how can you ac­tu­ally dis­tribute re­sources so that you achieve what you value.

• Perspectival rationality—no matter how great you are at being rational, you are limited by the ideas that you can come up with. You are limited by your paradigms and perspectives. Perspectival rationality is about knowing when to look at a situation from multiple perspectives and having the ability to model the territory or map accurately from another perspective. By modelling the map from another perspective, it is meant that you are thinking about what the maps of someone else, or of yourself in the future or past, would be like for a given situation. By modelling the territory, it is meant that you are thinking about what the territory will be like if some situation occurs. An important part of perspectival rationality is being able to coalesce the information from multiple perspectives into a coherent whole. The aim of perspectival rationality is greater novelty in your ideas, broader utilities in solutions and more pragmatic results. It also includes understanding the necessarily flawed and limited nature of your perspective. You need to constantly be seeking feedback and other perspectives. It would relate to complexity theory, agile software development, systems dynamics, Boydian thinking and mental models/schemas/scripts (whatever you want to call it). I plan to write some posts around this idea.

• Com­mu­nica­tive ra­tio­nal­ity—how can you com­mu­ni­cate well. I will need to look into this one, but I think it’s im­por­tant.

• Applied rationality—this relates to when you already know what the best thing to do is, and is about how you can get yourself to actually do it. Examples of this are training willpower or courage (doing something you don’t want to, but believe you should) and dealing with ugh fields.

• ra­tio­nal I ba­si­cally mean rea­son­ing (sys­tem 1 and 2) done well. Where done well, is defined based on your most trusted source.

I am not sure I un­der­stand—is “most trusted source” sub­jec­tive? What if Je­sus is my most trusted source? And He is for a great deal of peo­ple.

I am talk­ing about co­her­ence which means that its “proper­ness”, I sup­pose, de­pends on its con­text, i.e. the other pre-ex­ist­ing val­ues.

Do you think it could be reformulated in the framework where values form tree-like networks with some values being “deep” or “primary” and other values being “shallow” or “derived” or “secondary”? Then you might be able to argue that a conflict between a deep and a shallow value should be resolved by declaring the shallow value not rational.
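A toy sketch of the tree-like framework being suggested here, just to make the idea concrete. The classes, names and depth numbers are all hypothetical; nothing in the conversation specifies how depth would actually be measured:

```python
# Sketch of a tree of values: "deep" (primary) values sit near the root,
# "shallow" (derived) values hang off them, and a conflict between two
# values is resolved in favor of the deeper one. Purely illustrative.

class Value:
    def __init__(self, name, depth, parent=None):
        self.name = name
        self.depth = depth    # 0 = deepest / primary
        self.parent = parent  # the value this one is derived from, if any

def resolve_conflict(a, b):
    """Keep the deeper value; the shallower one is declared 'not rational'."""
    return a if a.depth <= b.depth else b

security = Value("financial security", depth=0)
status = Value("status", depth=1, parent=security)
gadgets = Value("buying every new gadget", depth=2, parent=status)
print(resolve_conflict(security, gadgets).name)  # -> financial security
```

The objection raised later in the thread applies directly to this sketch: real values may conflict with several others at once, so a strict tree (one parent per value) may be too simple a structure.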

I don’t see how you can view it as any­thing but a skill

I meant this more speci­fi­cally in the look­ing for a defi­ni­tion con­text.

One very com­mon way of mak­ing a defi­ni­tion is to point to a well-known class, say, Bet and then define a sub-class beta by list­ing a set of fea­tures {X} which al­low you to de­cide whether a par­tic­u­lar ob­ject b from the su­per-class Bet be­longs to the sub-class beta or not. Such defi­ni­tions are some­times called is-a-kind-of defi­ni­tions: beta is a kind of Bet.

So if we were to try to give an is-a-kind-of definition of rationality, what is the super-class? Is it reasoning? Is it skills? Something else?

No value/​de­sire is ra­tio­nal or ir­ra­tional in and of it­self. It is only ir­ra­tional or ir­ra­tional in a con­text. That is, be­cause of its re­la­tion to other val­ues or the costs to fulfil it etc.

So how to avoid be­ing caught in a loop: val­ues de­pend on val­ues which de­pend on val­ues that de­pend on val­ues..?

This means that the con­cepts should fol­low: sin­gle re­spon­si­bil­ity, loose cou­pling, yagni etc.

Not sure about yagni, since it is not the case that you can always go back to a core defi­ni­tion and eas­ily up­date it for your new needs. If there’s already a struc­ture built on top of that core defi­ni­tion, chang­ing it might prove to be quite trou­ble­some. Loose cou­pling and such—sure, if you can pull it off :-) Soft­ware ar­chi­tec­ture is… much less con­strained by re­al­ity :-)

two ba­sic ar­eas: sys­tem 1 and sys­tem 2 rationality

What do you mean by system 1 rationality? Intuitions that work particularly well? Successful hunches?

I think the main ones are below

That’s a very wide reach. Are you sure you’re not us­ing “ra­tio­nal­ity” just as a syn­onym for “do­ing some­thing re­ally well”?

• That’s a very wide reach. Are you sure you’re not us­ing “ra­tio­nal­ity” just as a syn­onym for “do­ing some­thing re­ally well”?

I mean do well in the areas I talked about before. In summary, I basically mean doing well at coming up with solutions to a problem, and at choosing, and being able to go through with, the best solution out of all of the solutions you have come up with.

I will try to define it again.

First off, there is com­pre­hen­sive ra­tio­nal­ity or nor­ma­tive ra­tio­nal­ity. This does not con­sider agent limi­ta­tions. It can be thought of as hav­ing two types.

• Prescient—outcomes are known and fixed. The decision makers maximise by choosing the outcomes with the highest utilities (discounted by costs).

• Non-pre­scient—like the pre­scient model, but it in­te­grates risk and un­cer­tainty by as­so­ci­at­ing a prob­a­bil­ity dis­tri­bu­tion with the mod­els where the prob­a­bil­ity is es­ti­mated by the de­ci­sion maker.

In both cases, choices among com­pet­ing goals are han­dled by some­thing like in­differ­ence curves.

We could say that un­der the com­pre­hen­sive ra­tio­nal model a ra­tio­nal agent is one that max­i­mizes its ex­pected util­ity, given its cur­rent knowl­edge.
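The comprehensive (non-prescient) model above can be sketched in a few lines: assign each action a probability distribution over outcomes, and pick the action with the highest expected utility. The actions, probabilities and utilities here are invented for illustration only:

```python
# Minimal sketch of expected-utility maximization under the comprehensive
# (non-prescient) model: the decision maker estimates a probability
# distribution over outcomes for each action and chooses the action with
# the highest expected utility. All numbers are hypothetical.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """actions: dict mapping action name -> list of (p, u) pairs."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "safe bet": [(1.0, 5.0)],                 # certain, modest payoff: EU = 5.0
    "gamble":   [(0.5, 20.0), (0.5, -15.0)],  # risky payoff: EU = 2.5
}
print(choose(actions))  # -> safe bet
```

Note that this only covers the "maximise expected utility" part; as discussed above, it says nothing about where the utilities themselves come from.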

When we talk about ra­tio­nal­ity, though, we nor­mally mean in re­gards to hu­mans. This means that we are talk­ing about bounded ra­tio­nal­ity. Like com­pre­hen­sive ra­tio­nal­ity, bounded ra­tio­nal­ity as­sumes that agents are goal-ori­ented, but bounded ra­tio­nal­ity also takes into ac­count the cog­ni­tive limi­ta­tions of de­ci­sion mak­ers in at­tempt­ing to achieve those goals.

Bounded ra­tio­nal­ity deals with agents that are limited in many ways which in­clude be­ing:

• Unable to determine all outcomes. Organisms with cognitive limitations have a need to satisfice and an inability to consider long sequential outcomes that are inextricably tied. There is also a tendency to focus on a specific set of the overall goals or outcomes due to priming/framing.

• Un­able to de­ter­mine all of the per­ti­nent in­for­ma­tion.

• Un­able to de­ter­mine all of the pos­si­ble in­fer­ences.

The big difference between bounded rationality and normative rationality is that in bounded rationality you also consider the agent improving its ability to choose or come up with the best outcomes as rational, as long as there are no costs or missed opportunities involved. Therefore, a rational agent, in the bounded sense, is one that has three characteristics:

• It has a honed ability to return decent sets of outcomes from its searches for outcomes.

• The expected utility it assigns to outcomes accurately matches the actual expected utility.

• It chooses the best outcome returned by its searches for outcomes. The best outcome is the one with the highest expected utility (discounted by costs).
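The three characteristics above can be sketched as a small bounded agent: a limited search that returns only some candidate outcomes, a utility estimator, and a choice rule that picks the best estimate net of costs. The options, costs and search budget are all made up for illustration:

```python
# Sketch of a bounded agent: instead of enumerating every outcome, it
# searches for a limited set of candidates, estimates their utilities,
# and chooses the best one net of costs. All names are hypothetical.
import random

def bounded_choose(search, estimate_utility, cost, budget):
    """Search returns at most `budget` candidates; pick the best net of cost."""
    candidates = search(budget)
    if not candidates:
        return None
    return max(candidates, key=lambda o: estimate_utility(o) - cost(o))

# The agent only ever sees a sample of the option space:
options = {"walk": 3.0, "bus": 5.0, "taxi": 8.0}

def search(budget):
    return random.sample(list(options), min(budget, len(options)))

best = bounded_choose(search, lambda o: options[o],
                      lambda o: 4.0 if o == "taxi" else 0.0, budget=2)
print(best)  # the net-of-cost best of whichever two options were sampled
```

The first characteristic corresponds to making `search` return good candidate sets, the second to making `estimate_utility` accurate, and the third to the `max` over utility minus cost.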

Do you think it could be re­for­mu­lated in the frame­work where val­ues form tree-like net­works with some val­ues be­ing “deep” or “pri­mary” and other val­ues be­ing “shal­low” or “de­rived” or “sec­ondary”? Then you might be able to ar­gue that a con­flict be­tween a deep and a shal­low value should be re­solved by the declar­ing the shal­low value not ra­tio­nal.

I think that once a value is in, it is in and works just like all the others in terms of its impact on valuation. However, a distinction like the one you talked about makes sense. But I would not have ‘deep’ and ‘shallow’ because I have no idea how to determine that. Perhaps ‘changeable’ vs ‘non-changeable’ would be better. Then you can look at some conflicting values, i.e. ones that lead you to want opposite things, and ask if any of them are changeable and what the impact is from changing them. The values that relate to what you actually need are non-changeable, or at least would cause languishing if you tried to repress them. I think the problem with the tree view is that values are complex; like you were talking about before, one value may conflict with multiple other values.

So how to avoid be­ing caught in a loop: val­ues de­pend on val­ues which de­pend on val­ues that de­pend on val­ues..?

I don’t see the loop. This is because there is no ‘value’. There is only coherence, which is just how much it conflicts with the other values. I don’t know how to describe this without an eidetic example. Please let me know if this doesn’t work. Imagine one of those old-style screensavers where you have a ball moving across the screen, and when it hits the side of the screen it bounces in a random direction. Now, when you have a single ball it can go in any direction at all. There is no concept of coherence because there is only one ball. It is when you introduce another ball that the direction starts to matter, as there is now the factor of coherence between the balls. By coherence I mean simply that you don’t want the balls to hit each other. This restricts their movement, and it now becomes optimal for them to move in some kind of pattern, with vertical or horizontal lines being the simplest.

What this means for values is that you want them to basically be directed towards the same or similar targets, or at least targets that are not conflicting. A potential indicator of an irrational value is one that conflicts with other values. Of course, human values are not coherent. But incoherence is still an indicator of potential irrationality.
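One crude way to operationalize “incoherence as an indicator” is to count pairwise conflicts between values, treating each value as a direction towards its target, as in the ball example. Declaring a conflict when two directions point against each other (negative dot product) is my own simplification here, purely for illustration:

```python
# Toy measure of value incoherence: represent each value as a direction
# vector towards its target, and count pairs whose directions oppose each
# other (negative dot product). Illustrative only, not a real metric.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def conflict_count(value_directions):
    """value_directions: list of direction vectors, one per value."""
    n = len(value_directions)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if dot(value_directions[i], value_directions[j]) < 0
    )

aligned = [(1, 0), (1, 1), (0, 1)]   # values aimed at similar targets
opposed = [(1, 0), (-1, 0), (0, 1)]  # two values pull directly apart
print(conflict_count(aligned), conflict_count(opposed))  # -> 0 1
```

As the surrounding discussion notes, a nonzero count would only flag potential irrationality; human values are never fully coherent, so the number is an indicator, not a verdict.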

Unrelated to the above examples, you would also need to think about whether the target of the value is actually valuable and worth the costs you have to pay to achieve it. This is harder to find out, but you can look at the fundamental human needs. Maybe your deep vs. shallow distinction would be useful in this context.

I am not sure I un­der­stand—is “most trusted source” sub­jec­tive? What if Je­sus is my most trusted source? And He is for a great deal of peo­ple.

I don’t think I am con­vey­ing this point well. I am try­ing to say that we only have an in­com­plete an­swer as to what is ra­tio­nal and that sci­ence pro­vides the best an­swer we have.

One very com­mon way of mak­ing a defi­ni­tion is to point to a well-known class,

I think instead of that type of definition I would rather say that rationality means doing well in the areas of X, Y and Z, and then have a list of skills or domains that improve your ability in the areas of rationality.

Do you think that there are many types of ra­tio­nal­ity? I think that there are many types of meth­ods to achieve ra­tio­nal­ity, but I don’t think there are many types of ra­tio­nal­ity.

So if we were to try to give an is-a-kind-of defin­tion of ra­tio­nal­ity, what is the su­per-class? Is it rea­son­ing? Is it skills? Some­thing else?

I would say rea­son­ing or maybe prob­lem solv­ing and out­come gen­er­a­tion/​choos­ing bet­ter con­vey the idea of it.

• I have a feel­ing we’re start­ing to go in cir­cles. But it was an in­ter­est­ing con­ver­sa­tion and I hope it was use­ful to you :-)

• Sorry if I was meandering and repeating my points. I wasn’t viewing this as an argument, so I don’t view it as going in circles, but as going through a series of drafts. Maybe I will need to be more careful in the future. I appreciate your feedback.

In re­gards to what we talked about, I am not re­ally that happy with how ra­tio­nal­ity is defined in the liter­a­ture, but I am also not sure of what a bet­ter way to define it would be. I guess I will have to look into the bounded types of ra­tio­nal­ity.

• No, that’s perfectly fine, I wasn’t treat­ing it as an ar­gu­ment, ei­ther. It’s just that you are spend­ing a lot of time think­ing about it, and I’m spend­ing less time, so, hav­ing made some points, I re­ally don’t have much more to con­tribute and I don’t want to fisk your think­ing notes. No need to be care­ful in drafts, that’s not what they are for :-)

• When you criticize Spock’s “rationality”, I think it would be better to visually separate the beliefs you disagree with, otherwise an inattentive reader might get confused about what exactly you are trying to say. Like this:

• “You can ex­pect ev­ery­one to re­act in a rea­son­able, or what Spock would call ra­tio­nal, way.” This ex­pec­ta­tion is ir­ra­tional be­cause…

• “You should never make a de­ci­sion un­til you have all the in­for­ma­tion.” This re­quire­ment is ir­ra­tional be­cause…

• “You should never rely on in­tu­ition.” This is ir­ra­tional be­cause…

• “You should not be­come emo­tional.” This is ir­ra­tional be­cause...

• Thanks for the sug­ges­tion. I un­der­lined the points. Hope­fully that helps.

• I agree denotationally, but object connotatively with ‘rationality is systemized winning’, so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with ‘winning’. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe by winning Eliezer meant winning at Newcomb’s problem, but the idea of winning is normally extended into everything.

I think that Eliezer has dis­avowed us­ing this state­ment pre­cisely be­cause of the con­no­ta­tions that peo­ple as­so­ci­ate with it.

It is be­cause of this that ra­tio­nal­ity is of­ten con­sid­ered to be split into two parts: nor­ma­tive and de­scrip­tive ra­tio­nal­ity.

What hap­pened to pre­scrip­tive ra­tio­nal­ity?

• What hap­pened to pre­scrip­tive ra­tio­nal­ity?

I added in a quote from Baron.

• 23 Aug 2015 12:42 UTC

This is more like a glos­sary than a primer

• A glos­sary is just an alpha­bet­i­cal list of words re­lat­ing to a spe­cific sub­ject, text, or di­alect, with ex­pla­na­tions; a brief dic­tio­nary. I sup­pose the wiki part is sort of like a glos­sary, but over­all I don’t think these posts are a glos­sary. I think you are right, though, that primer is not the best word for it. I changed it to com­pendium. A com­pendium is a col­lec­tion of con­cise but de­tailed in­for­ma­tion about a par­tic­u­lar sub­ject.

• Compendium sounds far more accurate than both your original term and my original suggestion. Nicely done. Newcomb’s problem certainly isn’t something I’d expect when reading an introduction to what LessWrong is about.

• This is fan­tas­tic! Keep ’em com­ing!