Beginning at the Beginning

I can’t help but notice that some people are using very peculiar and idiosyncratic meanings for the word ‘rational’ in their posts and comments. In many instances, the correctness of rationality is taken for granted; in others, the process of being rational is not merely ignored but dispensed with altogether, and ‘rational’ is defined as ‘that which makes you win’.

That’s not a very useful definition. If I went to someone looking for help selecting between options, and was told to choose “the best one”, or “the right one”, or “the one that gives you the greatest chance of winning”, what help would I have received? If I already had clear ideas about how to determine the best, the right, or the winning option, I wouldn’t have come looking for help in the first place. Such responses provide no operational assistance.

There is a definite lack of understanding here of what rationality is, let alone of why it is correct, and this general incomprehension can only cripple attempts to discuss its nature or how to apply it. We might expect this site to try to dispel the fog surrounding the concept. Remarkably, a blog dedicated to “refining the art of human rationality” neither explains nor defines rationality.

Those are absolutely critical goals if lesswrong is to accomplish what it advertises itself as attempting. So let’s try to reach them.


The human mind is both extremely sophisticated and shockingly primitive. Most of its operations take place beneath the level of explicit awareness: we don’t know how we reach conclusions and make decisions; we’re merely presented with the results, along with an emotional sense of rightness or confidence.

Despite these emotional assurances, we sometimes suspect that such feelings are unfounded. Careful examination shows that to be precisely the case. We can and do develop confidence in results, not because they are reliable, but for a host of other reasons.

Our approval or disapproval of some properties can cross over into our evaluation of others. We can fall prey to shortcuts while believing that we’ve been thorough. We tend to interpret evidence in terms of our preferences, perceiving what we want to perceive and screening out evidence we find inconvenient or uncomfortable. Sometimes, we even construct evidence out of whole cloth to support something we want to be true.

It’s very difficult to detect these flaws in ourselves as we commit them. It is somewhat easier to detect them in others, or in hindsight, while reflecting on past decisions in which we are no longer strongly emotionally invested. Without knowing how our decisions are reached, though, we’re helpless before the impulses and feelings of the moment, even when we remain ultimately skeptical of how our judgment functions.

So how can we try to improve our judgment if we don’t even know what it’s doing?

How did Aristotle establish the earliest-known examination of the principles of justification? If he originated the foundation of the systems we know as *logic*, how could that be accomplished without the use of logic?

As Aristotle noted, the principles he made into a set of formal rules already existed. He observed how people defended their own positions and attacked those of others, and how certain arguments had flaws that could be pointed out while others seemed to admit no counter. His attempt to organize people’s implicit understanding of the validity of arguments led to an explicit, formal system. The principles of logic were implicit before they were understood explicitly.
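For instance, the pattern later logicians nicknamed *Barbara*, which Aristotle treated as the most transparent of his syllogisms, is one of those implicit rules made explicit; its force depends only on its form, never on what the terms stand for:

```latex
% Aristotle's syllogism in the mood "Barbara": whatever M, P, and S
% denote, anyone who accepts both premises must accept the conclusion.
\[
\frac{\text{All } M \text{ are } P \qquad \text{All } S \text{ are } M}
     {\text{All } S \text{ are } P}
\]
```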

The brain is capable of performing astounding feats of computation, but our conscious grasp of mathematics is an emulation that lacks the power of the system running it. We can intuitively anticipate how a projectile will move from just a glimpse of its trajectory, although consciously solving the explicit differential equations that describe that motion is terrifically difficult, and virtually impossible to accomplish in real time. Yet our explicit grasp of mathematics makes it possible for us to solve problems and comprehend ideas completely beyond the capacity of our hunter-gatherer ancestors, even though the processing power of our brains does not appear to have changed since those early days.
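To make the contrast concrete, here is the drag-free textbook case, which is already the easy version; once air resistance is included, closed-form solutions largely disappear, yet an outfielder’s brain handles the real thing in a fraction of a second:

```latex
% Equations of motion for a projectile launched at speed v_0 and
% angle \theta, ignoring drag, together with their closed-form solution.
\[
\ddot{x} = 0, \qquad \ddot{y} = -g
\]
\[
x(t) = v_0 t \cos\theta, \qquad
y(t) = v_0 t \sin\theta - \tfrac{1}{2} g t^2
\]
```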

In the same way, our models of what proper thought means give us options and opportunities far beyond what our intuitive, unconscious reasoning makes possible, even though our conscious understanding works with far fewer resources than the unconscious.

When we consciously and deliberately model the evolution of one statement into another according to the elementary rules that make up the foundation of logical consistency, something new and exciting happens. The self-referential aspects of that modeling permit us to examine the decisions presented to us by the parts of our minds beneath the threshold of awareness, and to override them. We can evaluate our own evaluations, reaching conclusions that our emotions don’t lead us to and rejecting some of those that they do.
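Reduced to a toy, here is a minimal sketch of what that deliberate modeling might look like; the facts and rules are invented for illustration, and the only move allowed is modus ponens, so every conclusion is traceable to an explicit rule and a prior fact:

```python
# Forward chaining with modus ponens: if "p" is accepted and the rule
# "p -> q" holds, accept "q". Repeat until nothing new can be derived.

facts = {"socrates_is_a_man"}  # statements we start out accepting
rules = [
    ("socrates_is_a_man", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            print(f"derived: {premise} -> {conclusion}")
            facts.add(conclusion)
            changed = True
```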

That’s what rationality is: having explicit and conscious standards of validity, and applying them in a systematic way. It doesn’t matter if we possess an inner conviction that something is true—if we can’t demonstrate that it can be generated from basic principles according to well-defined rules, it’s not valid.

What makes this so interesting is that it’s self-correcting. If we observe an empirical relationship that our understanding doesn’t predict, we can treat it as a new fact. Suppose, for example, that we find that certain manipulations of tarot decks permit us to predict the weather, even though we have no idea why the two should be correlated at all. With rationality, we don’t need to know why. Once we’ve recognized that the relationship exists, it becomes rational for us to use it. Likewise, if a previously useful relationship suddenly ceases to hold, even though we have no theoretical grounds for expecting that to happen, we simply acknowledge the fact. Once we’ve done so, we can justify ignoring what we previously considered to be evidence.
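A sketch of the bookkeeping this implies, with the predictor, the window size, and the trust threshold all hypothetical stand-ins: we rely on the tarot procedure exactly as long as its track record justifies it, and drop it the moment the record says otherwise, with no theory required in either direction:

```python
from collections import deque

class TrackedPredictor:
    """Judge a black-box predictor purely on its record, not on
    whether we understand why it works."""

    def __init__(self, predict, window=100, threshold=0.6):
        self.predict = predict               # e.g. the tarot procedure
        self.results = deque(maxlen=window)  # rolling record of hits and misses
        self.threshold = threshold           # hit rate required for trust

    def observe(self, prediction, outcome):
        self.results.append(prediction == outcome)

    def trusted(self):
        # Withhold judgment until the sample is large enough to mean anything.
        if len(self.results) < self.results.maxlen:
            return None
        return sum(self.results) / len(self.results) >= self.threshold
```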

Human reasoning is especially plagued by superstitions, because it’s easy for us to accept contradictory principles without acknowledging the inconsistency. But when we’re forced to construct step-by-step justifications for our beliefs, contradiction is thrown into sharp relief, and can’t be ignored.
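As a toy demonstration of how explicitness exposes contradiction, the beliefs below (invented for the example) are written as boolean statements over named propositions; once they are explicit, checking whether they can all hold at once is purely mechanical:

```python
from itertools import product

# Each belief is an explicit boolean function of named propositions.
# The set is contradictory iff no truth assignment satisfies all of them.
names = ["tarot_predicts_weather", "weather_is_pure_chance"]
beliefs = [
    lambda p: p["tarot_predicts_weather"] or p["weather_is_pure_chance"],
    lambda p: not p["weather_is_pure_chance"],
    lambda p: not p["tarot_predicts_weather"],
]

consistent = any(
    all(belief(dict(zip(names, values))) for belief in beliefs)
    for values in product([True, False], repeat=len(names))
)
print("consistent" if consistent else "contradiction")  # prints: contradiction
```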

Arguments that are not made explicitly, with conscious awareness of how each point is derived from fundamental principles and empirical observations, may or may not be correct. But they’re never rational. Rational reasoning does not guarantee correctness; rational choice does not guarantee victory. What rationality offers is self-knowledge of validity. If we maintain rational standards while thinking, we will make the best choice as defined by the knowledge we possess. Whether it will remain best once we gain new knowledge, or whether it is best in some absolute sense, is unknown and unknowable until that moment comes.

Yet those who speak here of the value of human rationality frequently don’t do so by rational means. They make implicit arguments resting on hidden assumptions, and neither acknowledge nor clarify them. They emphasize the potential of rationality to bootstrap itself to greater and greater levels of understanding, yet don’t concern themselves with demonstrating that their own arguments arise from the most basic elements of reason. Rationality starts when we make a conscious attempt to understand and apply those basic elements, to emulate in our minds the principles that make the existence of our minds possible.

Are we doing so?