Rationality Compendium: Principle 1 - A rational agent, given its capabilities and the situation it is in, is one that thinks and acts optimally

A perfect rationalist is an ideal thinker. Rationality, however, is not the same as perfection. Perfection guarantees optimal outcomes; rationality only guarantees that the agent will, to the best of its abilities, reason optimally. Optimal reasoning cannot, unfortunately, guarantee optimal outcomes, because most agents are not omniscient or omnipotent. They are instead fundamentally and inexorably limited. To be fair to such agents, the definition of rationality that we use should take this into account. Therefore, a rational agent will be defined as: an agent that, given its capabilities and the situation it is in, thinks and acts optimally. Although rationality does not guarantee the best outcome, a rational agent will most of the time achieve better outcomes than an irrational agent would.

Rationality is often considered to be split into three parts: normative, descriptive and prescriptive rationality.

Normative rationality describes the laws of thought and action: that is, how a perfectly rational agent with unlimited computing power, omniscience, etc. would reason and act. Normative rationality essentially describes what is meant by the phrase “optimal reasoning”. Of course, for limited agents true optimal reasoning is impossible, and they must instead settle for bounded optimal reasoning, which is the closest approximation to optimal reasoning that is possible given the information and computational abilities available to the agent. The laws of thought and action (what we currently believe optimal reasoning involves) are:

  • Logic - math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.

  • Probability theory - is essentially an extension of logic. Probability is a measure of how likely a proposition is to be true, given everything else that you already believe. Perhaps the most useful rule to be derived from the axioms of probability theory is Bayes’ Theorem, which tells you exactly how your probability for a statement should change as you encounter new information (a short worked sketch follows this list). Probability is viewed from one of two perspectives: the Bayesian perspective, which sees probability as a measure of uncertainty about the world, and the Frequentist perspective, which sees probability as the proportion of times the event would occur in a long run of repeated experiments. Less Wrong follows the Bayesian perspective.

  • Decision theory - is about choosing actions based on the utility function of the possible outcomes. The utility function is a measure of how much you desire a particular outcome. The expected utility of an action is simply the average utility of the action’s possible outcomes weighted by the probability that each outcome occurs. Decision theory can be divided into three parts:

    • Normative decision theory studies what an ideal agent (a perfect agent, with infinite computing power, etc.) would choose.

    • Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose.

    • Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
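
To make the Bayesian update rule mentioned above concrete, here is a minimal Python sketch of Bayes’ Theorem applied to a diagnostic-test scenario; the base rate, sensitivity and false-positive rate are hypothetical numbers chosen purely for illustration:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Return P(hypothesis | evidence) using Bayes' Theorem.

    prior               -- P(H): belief in the hypothesis before the new evidence
    likelihood          -- P(E | H): probability of the evidence if H is true
    false_positive_rate -- P(E | not-H): probability of the evidence if H is false
    """
    # Law of total probability: P(E) = P(E|H)P(H) + P(E|not-H)P(not-H)
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Hypothetical numbers: a condition with a 1% base rate, and a test that detects
# it 90% of the time but gives a false positive 5% of the time.
posterior = bayes_update(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.154
```

Even with a positive result from a fairly accurate test, the low prior keeps the posterior below one in six; this is exactly the kind of quantitative belief shift that Bayes’ Theorem prescribes.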

Descriptive rationality describes how people normally reason and act. It is about understanding how and why people make decisions. As humans, we have certain limitations and adaptations which quite often make it impossible for us to be perfectly rational in the normative sense of the word. It is because of this that we must satisfice, or approximate the normative rationality model as best we can. We engage in what’s called bounded, ecological or grounded rationality. Unless explicitly stated otherwise, ‘rationality’ in this compendium will refer to rationality in the bounded sense of the word. In this sense, the most rational choice for an agent depends on the agent’s capabilities and the information that is available to it. The most rational choice for an agent is not necessarily the most certain, true or right one. It is just the best one given the information and capabilities that the agent has. This means that an agent that satisfices or uses heuristics may actually be reasoning optimally, given its limitations, even though satisficing and heuristics are shortcuts that are potentially error prone.

Prescriptive or applied rationality is essentially about how to bring the thinking of limited agents closer to what the normative model stipulates. As Baron describes in Thinking and Deciding (p. 34):

In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.

The set of behaviours and thoughts that we consider to be rational for limited agents is much larger than the set for perfect, i.e. unlimited, agents. This is because for limited agents we need to take into account not only those thoughts and behaviours which are optimal for the agent, but also those which allow the limited agent to improve its reasoning. It is for this reason that we consider curiosity, for example, to be rational, as it often leads to situations in which agents improve their internal representations or models of the world. We also consider wise resource allocation to be rational because limited agents only have a limited amount of resources available to them. Therefore, if they can get a greater return on investment on the resources that they do use, then they will be more likely to get closer to thinking optimally in a greater number of domains.

We also consider the rationality of particular choices to be something that is in a state of flux. This is because the rationality of choices depends on the information that an agent has access to, and this is something which is frequently changing. This hopefully highlights an important fact: if an agent is suboptimal in its ability to gather information, then it will often end up with different information than an agent with optimal information-gathering abilities would. In short, this is a problem for the suboptimal (irrational) agent, as it means that its rational choices will diverge further from those of the perfect normative agent than a rational agent’s would. The closer an agent’s rational choices are to the rational choices of a perfect normative agent, the more rational that agent is.

It can also be said that the rationality of an agent depends in large part on the agent’s truth-seeking abilities. The more accurate and up to date the agent’s view of the world, the closer its rational choices will be to those of the perfect normative agent. It is because of this that a rational agent is one that is inextricably tied to the world as it is. It does not see the world as it wishes, fears or remembers it to be, but instead constantly adapts to and seeks out feedback from interactions with the world. The rational agent is attuned to the current state of affairs. One other very important characteristic of rational agents is that they adapt. If the situation has changed and the previously rational choice is no longer the one with the greatest expected utility, then the rational agent will adapt and change its preferred choice to the one that is now the most rational.

The other important part of rationality, besides truth seeking, is maximising the ability to actually achieve important goals. These two parts or domains of rationality, truth seeking and goal reaching, are referred to as epistemic and instrumental rationality.

  • Epistemic rationality is about the ability to form true beliefs. It is governed by the laws of logic and probability theory.

  • Instrumental rationality is about the ability to actually achieve the things that matter to you. It is governed by the laws of decision theory. In a formal context, it is known as maximizing “expected utility” (a short sketch follows this list). It is important to note that it is about more than just reaching goals. It is also about discovering how to develop optimal goals.
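
To make “maximizing expected utility” concrete, here is a minimal Python sketch of the decision rule described above; the actions, outcome probabilities and utilities are invented purely for illustration:

```python
# Each action maps to the (probability, utility) pairs of its possible outcomes.
# The actions, probabilities and utilities here are hypothetical.
actions = {
    "take umbrella":  [(0.3, 5), (0.7, 8)],    # outcomes: rain, no rain
    "leave umbrella": [(0.3, -10), (0.7, 10)],
}

def expected_utility(outcomes):
    """Probability-weighted average utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(f"{action}: expected utility = {expected_utility(outcomes):.1f}")

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("Instrumentally rational choice:", best)  # -> take umbrella (7.1 vs 4.0)
```

The instrumentally rational choice is simply the action with the highest probability-weighted utility, given whatever (possibly limited) information the agent currently has.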

As you move further and further away from rationality, you introduce more and more flaws, inefficiencies and problems into your decision-making and information-gathering algorithms. These flaws and inefficiencies are the cause of irrational or suboptimal behaviors, choices and decisions. Humans are innately irrational in a large number of areas, which is why, in large part, improving our rationality is mostly about mitigating, as much as possible, the influence of our biases and irrational propensities.

If you wish to truly understand what it means to be rational, then you must also understand what rationality is not. This is important because the concept of rationality is often misconstrued by the media. An epitome of this misconstrual is the character of Spock from Star Trek. This character treats rationality not as if it were about optimality, but instead as if it meant that:

  • You can expect everyone to react in a reasonable, or what Spock would call rational, way. This is irrational because it leads to faulty models and predictions of other people’s behaviors and thoughts.

  • You should never make a decision until you have all the information. This is irrational because humans are not omniscient or omnipotent. Their decisions are constrained by many factors, like the amount of information they have, the cognitive limitations of their brains and the time available for them to make decisions. This means that a person, if they are to act rationally, must often make predictions and assumptions.

  • You should never rely on intuition. This is irrational because intuition (System 1 thinking) does have many advantages over conscious and effortful deliberation (System 2 thinking), mainly its speed. Although intuitions can be wrong, to disregard them entirely is to hinder yourself immensely. If your intuitions are based on multiple interactions that are similar to the current situation, and these interactions had short feedback cycles, then it is often irrational not to rely on your intuitions.

  • You should not become emotional. This is irrational because, while it is true that emotions can cause you to use less rational ways of thinking and acting, i.e. ways that are optimised for ancestral or previous environments, it does not mean that we should try to eradicate emotions in ourselves. This is because emotions are essential to rational thinking and normal social behavior. An aspiring rationalist should remember four points in regard to emotions:

    • The rationality of emotions depends on the rationality of the thoughts and actions that they induce. It is rational to feel fear when you are actually in a situation where you are threatened. It is irrational to feel fear in situations where you are not being threatened. If your fear compels you to take suboptimal actions, then and only then is that fear irrational.

    • Emotions are the wellspring of value. A large part of instrumental rationality is about finding the best way to achieve your fundamental human needs. A person who can fulfill these needs through simple methods is more rational than someone who can’t. In this particular area people tend to become a lot less rational as they age. As adults, we should be jealous of the innocent exuberance that comes so naturally to children. If we are not as exuberant as children, then we should wonder at how it is that we have become so shackled by our own self-restraint.

    • Emotional control is a virtue, but denial is not. Emotions can be considered a type of internal feedback. A rational person does not consciously ignore or avoid feedback, as doing so would limit or distort the information that they have access to. It is possible that a rational agent may need to mask or hide their emotions for reasons related to societal norms and status, but they should not repress emotions unless there is some overriding rational reason to do so. If a person volitionally represses their emotions because they wish to perpetually avoid them, then this is both irrational and cowardly.

    • By ignoring, avoiding and repressing emotions you are limiting the information that you exhibit, which means that other people will not know how you are actually feeling. In some situations this may be helpful, but it is important to remember that people are not mind readers. Their ability to model your mind and your emotional state depends on the information that they know about you and the information, e.g. body language, vocal inflections, that you exhibit. If people do not know that you are vulnerable, then they cannot know that you are courageous. If people do not know that you are in pain, then they cannot know that you need help.

  • You should only value quantifiable things like money, productivity, or efficiency. This is irrational because it means that you are reducing the amount of potentially valuable information that you consider. The only reason a rational person ever reduces the amount of information they consider is resource or time limitations.


Related Materials

Wikis:

  • Rationality—the characteristic of thinking and acting optimally. An agent is rational if it wields its intelligence in such a way as to maximize the convergence between its beliefs and reality; and acts on these beliefs in such a manner as to maximize its chances of achieving whatever goals it has. For humans, this means mitigating (as much as possible) the influence of cognitive biases.

  • Maths/​Logic—Math and logic are deductive systems, where the conclusion of a successful argument follows necessarily from its premises, given the axioms of the system you’re using: number theory, geometry, predicate logic, etc.

  • Probability theory—a field of mathematics which studies random variables and processes.

  • Bayes theorem—a law of probability that describes the proper way to incorporate new evidence into prior probabilities to form an updated probability estimate.

  • Bayesian—Bayesian probability theory is the math of epistemic rationality, Bayesian decision theory is the math of instrumental rationality.

  • Bayesian probability—represents a level of certainty relating to a potential outcome or idea. This is in contrast to a frequentist probability that represents the frequency with which a particular outcome will occur over any number of trials. An event with Bayesian probability of .6 (or 60%) should be interpreted as stating “With confidence 60%, this event contains the true outcome”, whereas a frequentist interpretation would view it as stating “Over 100 trials, we should observe event X approximately 60 times.” The difference is more apparent when discussing ideas. A frequentist will not assign probability to an idea; either it is true or false and it cannot be true 6 times out of 10.

  • Bayesian Decision theory—Bayesian decision theory refers to a decision theory which is informed by Bayesian probability.

  • Decision theory—the study of principles and algorithms for making correct decisions, that is, decisions that allow an agent to achieve better outcomes with respect to its goals.

  • Hollywood rationality—what Spock does, not what actual rationalists do.

Posts:

Suggested posts to write:

  • Bounded/​ecological/​grounded Rationality—I couldn’t find a suitable resource for this on Less Wrong.

Academic Books:

Popular Books:

Talks:

Notes on decisions I have made while creating this post

(these notes will not be in the final draft):

  • I agree denotationally, but object connotatively, to ‘rationality is systemized winning’, so I left it out. I feel that it would take too long to get rid of the connotation of competition that I believe is associated with ‘winning’. The other point that would need to be delved into is: what exactly does the rationalist win at? I believe that by winning Eliezer meant winning at Newcomb’s problem, but the idea of winning is normally extended into everything. I also believe that I have basically covered the idea with: “Rationality maximizes expected performance, while perfection maximizes actual performance.”

  • I left out the 12 virtues of rationality because I don’t like perfectionism. If perfectionism were not among the virtues, then I would have included them. My problem with perfectionism is that having it as a goal makes you liable to premature optimization and to developing suboptimal levels of adaptability. Everything I have read in complexity theory, for example, makes me think that perfectionism is not really a good thing to be aiming for, at least in uncertain and complex situations. I think truth seeking should be viewed as an optimization process: if it doesn’t allow you to become more optimal, then it is not worth it. I have a post about this here.

  • I couldn’t find an appropriate link for bounded/​ecological/​grounded rationality.