Rationality: A-Z

Rationality: A-Z (or “The Sequences”) is a series of blog posts by Eliezer Yudkowsky on human rationality and irrationality in cognitive science. It is an edited and reorganized version of posts published to Less Wrong and Overcoming Bias between 2006 and 2009. This collection serves as a long-form introduction to formative ideas behind Less Wrong, the Machine Intelligence Research Institute, the Center for Applied Rationality, and substantial parts of the effective altruist community. Each book also comes with an introduction by Rob Bensinger and a supplemental essay by Yudkowsky.

The first two books, Map and Territory and How to Actually Change Your Mind, are available on Amazon in print and e-book editions.

The entire collection is available as an e-book and audiobook. A number of alternative reading orders for the essays can be found here, and a compilation of all of Eliezer’s blog posts up to 2010 can be found here.

Map and Territory

What is a belief, and what makes some beliefs work better than others? These four sequences explain the Bayesian notions of rationality, belief, and evidence. A running theme: the things we call “explanations” or “theories” may not always function like maps for navigating the world. As a result, we risk mixing up our mental maps with the other objects in our toolbox.

Predictably Wrong

This, the first book of “Rationality: From AI to Zombies” (also known as “The Sequences”), begins with cognitive bias. The rest of the book won’t stick to just this topic; bad habits and bad ideas matter, even when they arise from our minds’ contents as opposed to our minds’ structure.

It is cognitive bias, however, that provides the clearest and most direct glimpse into the stuff of our psychology, into the shape of our heuristics and the logic of our limitations. It is with bias that we will begin.

Preface

Biases: An Introduction

Scope Insensitivity

The Martial Art of Rationality

Availability

What’s a Bias?

Burdensome Details

What Do We Mean By “Rationality”?

Planning Fallacy

Why Truth?

Feeling Rational

The Lens That Sees Its Flaws

Fake Beliefs

An account of irrationality would be incomplete if it provided no theory about how rationality works—or if its “theory” only consisted of vague truisms, with no precise explanatory mechanism. This sequence asks why it’s useful to base one’s behavior on “rational” expectations, and what it feels like to do so.

Making Beliefs Pay Rent (in Anticipated Experiences)

A Fable of Science and Politics

Belief in Belief

Religion’s Claim to be Non-Disprovable

Professing and Cheering

Belief as Attire

Pretending to be Wise

Applause Lights

Noticing Confusion

Focus Your Uncertainty

What is Evidence?

Scientific Evidence, Legal Evidence, Rational Evidence

How Much Evidence Does It Take?

Einstein’s Arrogance

Occam’s Razor

Your Strength as a Rationalist

Absence of Evidence Is Evidence of Absence

Conservation of Expected Evidence

Hindsight Devalues Science

Illusion of Transparency: Why No One Understands You

Expecting Short Inferential Distances

Mysterious Answers

This sequence asks whether science resolves the problems raised so far. Scientists base their models on repeatable experiments, not speculation or hearsay. And science has an excellent track record compared to anecdote, religion, and . . . pretty much everything else. Do we still need to worry about “fake” beliefs, confirmation bias, hindsight bias, and the like when we’re working with a community of people who want to explain phenomena, not just tell appealing stories?

Fake Explanations

Guessing the Teacher’s Password

Science as Attire

Fake Causality

Semantic Stopsigns

Mysterious Answers to Mysterious Questions

The Futility of Emergence

Say Not “Complexity”

Positive Bias: Look Into the Dark

Lawful Uncertainty

My Wild and Reckless Youth

Failing to Learn from History

Making History Available

Explain/Worship/Ignore?

“Science” as Curiosity-Stopper

Truly Part Of You

Interlude

The Simple Truth

How to Actually Change Your Mind

This truth thing seems pretty handy. Why, then, do we keep jumping to conclusions, digging our heels in, and recapitulating the same mistakes? Why are we so bad at acquiring accurate beliefs, and how can we do better? These seven sequences discuss motivated reasoning and confirmation bias, with a special focus on hard-to-spot species of self-deception and the trap of “using arguments as soldiers”.

Overly Convenient Excuses

This sequence focuses on questions that are as probabilistically clear-cut as questions get. The Bayes-optimal answer is often infeasible to compute, but errors like confirmation bias can take root even in cases where the available evidence is overwhelming and we have plenty of time to think things over.
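
As a rough illustration (not taken from the essays, and using made-up numbers), here is the kind of clear-cut Bayesian calculation these essays treat as the benchmark. Suppose a hypothetical test for a condition has a 1% base rate, a 99% true-positive rate, and a 5% false-positive rate. Bayes’s theorem then fixes the probability of the condition given a positive result:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\;=\; \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.05 \times 0.99}
\;\approx\; 0.17
\]

When the numbers are this explicit there is a definite Bayes-optimal answer to fall short of, which is what makes overconfident excuses in such cases easy to diagnose.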

Rationality: An Introduction

Tsuyoku Naritai! (I Want To Become Stronger)

The Proper Use of Humility

Tsuyoku vs. the Egalitarian Instinct

The Third Alternative

Lotteries: A Waste of Hope

New Improved Lottery

But There’s Still A Chance, Right?

The Fallacy of Gray

Absolute Authority

How to Convince Me That 2 + 2 = 3

Infinite Certainty

0 And 1 Are Not Probabilities

Your Rationality is My Business

Politics and Rationality

Now we move into a murkier area. Mainstream national politics, as debated by TV pundits, is famous for its angry, unproductive discussions. On the face of it, there’s something surprising about that. Why do we take political disagreements so personally, even when the machinery and effects of national politics are so distant from us in space or in time? For that matter, why do we not become more careful and rigorous with the evidence when we’re dealing with issues we deem important?

Politics is the Mind-Killer

Policy Debates Should Not Appear One-Sided

The Scales of Justice, the Notebook of Rationality

Correspondence Bias

Are Your Enemies Innately Evil?

Reversed Stupidity Is Not Intelligence

Argument Screens Off Authority

Hug the Query

Rationality and the English Language

Human Evil and Muddled Thinking

Against Rationalization

The last sequence focused on how feeling tribal often distorts our ability to reason. Now we’ll explore one particular cognitive mechanism that causes this: much of our reasoning process is really rationalization—story-telling that makes our current beliefs feel more coherent and justified, without necessarily improving their accuracy.

Knowing About Biases Can Hurt People

Update Yourself Incrementally

One Argument Against An Army

The Bottom Line

What Evidence Filtered Evidence?

Rationalization

A Rational Argument

Avoiding Your Belief’s Real Weak Points

Motivated Stopping and Motivated Continuation

Fake Justification

Is That Your True Rejection?

Entangled Truths, Contagious Lies

Of Lies and Black Swan Blowups

Dark Side Epistemology

Against Doublethink

This short sequence explores another cognitive pattern that hinders our ability to update on evidence: George Orwell’s ‘doublethink’, the attempt to deceive oneself.

Doublethink (Choosing to be Biased)

No, Really, I’ve Deceived Myself

Belief in Self-Deception

Moore’s Paradox

Don’t Believe You’ll Self-Deceive

Seeing with Fresh Eyes

On the challenge of recognizing evidence that doesn’t fit our expectations and assumptions.

Anchoring and Adjustment

Priming and Contamination

Do We Believe Everything We’re Told?

Cached Thoughts

Original Seeing

The Virtue of Narrowness

Stranger Than History

The Logical Fallacy of Generalization from Fictional Evidence

We Change Our Minds Less Often Than We Think

Hold Off On Proposing Solutions

The Genetic Fallacy

Death Spirals

Leveling up in rationality means encountering a lot of interesting and powerful new ideas. In many cases, it also means making friends who you can bounce ideas off of and finding communities that encourage you to better yourself. This sequence discusses some important hazards that can afflict groups united around common interests and amazing shiny ideas, which will need to be overcome if we’re to get the full benefits out of rationalist communities.

The Affect Heuristic

Evaluability (And Cheap Holiday Shopping)

Unbounded Scales, Huge Jury Awards, & Futurism

The Halo Effect

Superhero Bias

Affective Death Spirals

Resist the Happy Death Spiral

Uncritical Supercriticality

Evaporative Cooling of Group Beliefs

When None Dare Urge Restraint

Every Cause Wants To Be A Cult

Two Cult Koans

Asch’s Conformity Experiment

On Expressing Your Concerns

Lonely Dissent

Cultish Countercultishness

Letting Go

Our natural state isn’t to change our minds like a Bayesian would. Getting the people in opposing tribes to notice what they’re really seeing won’t be as easy as reciting the axioms of probability theory to them. As Luke Muehlhauser writes, in The Power of Agency:

You are not a Bayesian homunculus whose reasoning is “corrupted” by cognitive biases.

You just are cognitive biases.

Confirmation bias, status quo bias, correspondence bias, and the like are not tacked on to our reasoning; they are its very substance.

That doesn’t mean that debiasing is impossible. We aren’t perfect calculators underneath all our arithmetic errors, either. Many of our mathematical limitations result from very deep facts about how the human brain works. Yet we can train our mathematical abilities; we can learn when to trust and distrust our mathematical intuitions, and share our knowledge, and help one another; we can shape our environments to make things easier on us, and build tools to offload much of the work.

Our biases are part of us. But there is a shadow of Bayesianism present in us as well, a flawed apparatus that really can bring us closer to truth. No homunculus—but still, some truth. Enough, perhaps, to get started.

Singlethink

The Importance of Saying “Oops”

The Crackpot Offer

Just Lose Hope Already

The Proper Use of Doubt

You Can Face Reality

The Meditation on Curiosity

No One Can Exempt You From Rationality’s Laws

Leave a Line of Retreat

Crisis of Faith

The Ritual

The Machine in the Ghost

Why haven’t we evolved to be more rational? Even taking into account resource constraints, it seems like we could be getting a lot more epistemic bang for our evidential buck. To get a realistic picture of how and why our minds execute their biological functions, we need to crack open the hood and see how evolution works, and how our brains work, with more precision. These three sequences illustrate how even philosophers and scientists can be led astray when they rely on intuitive, non-technical evolutionary or psychological accounts. By locating our minds within a larger space of goal-directed systems, we can identify some of the peculiarities of human reasoning and appreciate how such systems can “lose their purpose”.

The Simple Math of Evolution

The first sequence of The Machine in the Ghost aims to communicate the dissonance and divergence between our hereditary history, our present-day biology, and our ultimate aspirations. This will require digging deeper than is common in introductions to evolution for non-biologists, which often restrict their attention to surface-level features of natural selection.

Minds: An Introduction

The Power of Intelligence

An Alien God

The Wonder of Evolution

Evolutions Are Stupid (But Work Anyway)

No Evolutions for Corporations or Nanodevices

Evolving to Extinction

The Tragedy of Group Selectionism

Fake Optimization Criteria

Adaptation-Executers, not Fitness-Maximizers

Evolutionary Psychology

An Especially Elegant Evpsych Experiment

Superstimuli and the Collapse of Western Civilization

Thou Art Godshatter

Fragile Purposes

This sequence abstracts from human cognition and evolution to the idea of minds and goal-directed systems at their most general. These essays serve the secondary purpose of explaining the author’s general approach to philosophy and the science of rationality, which is strongly informed by his work in AI.

Belief in Intelligence

Humans in Funny Suits

Optimization and the Intelligence Explosion

Ghosts in the Machine

Artificial Addition

Terminal Values and Instrumental Values

Leaky Generalizations

The Hidden Complexity of Wishes

Anthropomorphic Optimism

Lost Purposes

A Human’s Guide to Words

This sequence discusses the basic relationship between cognition and concept formation. 37 Ways That Words Can Be Wrong is a guide to the sequence.

The Parable of the Dagger

The Parable of Hemlock

Words as Hidden Inferences

Extensions and Intensions

Similarity Clusters

Typicality and Asymmetrical Similarity

The Cluster Structure of Thingspace

Disguised Queries

Neural Categories

How An Algorithm Feels From Inside

Disputing Definitions

Feel the Meaning

The Argument from Common Usage

Empty Labels

Taboo Your Words

Replace the Symbol with the Substance

Fallacies of Compression

Categorizing Has Consequences

Sneaking in Connotations

Arguing “By Definition”

Where to Draw the Boundary?

Entropy, and Short Codes

Mutual Information, and Density in Thingspace

Superexponential Conceptspace, and Simple Words

Conditional Independence, and Naive Bayes

Words as Mental Paintbrush Handles

Variable Question Fallacies

37 Ways That Words Can Be Wrong

Interlude

An Intuitive Explanation of Bayes’s Theorem

Mere Reality

What kind of world do we live in? What is our place in that world? Building on the previous sequences’ examples of how evolutionary and cognitive models work, these six sequences explore the nature of mind and the character of physical law. In addition to applying and generalizing past lessons on scientific mysteries and parsimony, these essays raise new questions about the role science should play in individual rationality.

Lawful Truth

Just as it was useful to contrast humans as goal-oriented systems with inhuman processes in evolutionary biology and artificial intelligence, it will be useful in the coming sequences of essays to contrast humans as physical systems with inhuman processes that aren’t mind-like.

We humans are, after all, built out of inhuman parts. The world of atoms looks nothing like the world as we ordinarily think of it, and certainly looks nothing like the world’s conscious denizens as we ordinarily think of them. As Giulio Giorello put the point in an interview with Daniel Dennett: “Yes, we have a soul. But it’s made of lots of tiny robots.”

We start with a sequence on the basic links between physics and human cognition.

The World: An Introduction

Universal Fire

Universal Law

Is Reality Ugly?

Beautiful Probability

Outside the Laboratory

The Second Law of Thermodynamics, and Engines of Cognition

Perpetual Motion Beliefs

Searching for Bayes-Structure

Reductionism 101

Dissolving the Question

Wrong Questions

Righting a Wrong Question

Mind Projection Fallacy

Probability is in the Mind

The Quotation is not the Referent

Qualitatively Confused

Think Like Reality

Chaotic Inversion

Reductionism

Explaining vs. Explaining Away

Fake Reductionism

Savanna Poets

Joy in the Merely Real

…Do not all charms fly

At the mere touch of cold philosophy?

There was an awful rainbow once in heaven:

We know her woof, her texture; she is given

In the dull catalogue of common things.

—John Keats, Lamia

Joy in the Merely Real

Joy in Discovery

Bind Yourself to Reality

If You Demand Magic, Magic Won’t Help

Mundane Magic

The Beauty of Settled Science

Amazing Breakthrough Day: April 1st

Is Humanism A Religion-Substitute?

Scarcity

The Sacred Mundane

To Spread Science, Keep It Secret

Initiation Ceremony

Physicalism 201

Can we ever know what it’s like to be a bat? Traditional dualism, with its immaterial souls freely floating around violating physical laws, may be false; but what about the weaker thesis, that consciousness is a “further fact” not fully explainable by the physical facts? A number of philosophers and scientists have found this line of reasoning persuasive. If we feel this argument’s intuitive force, should we grant its conclusion and ditch physicalism?

We certainly shouldn’t reject it just because it sounds strange or feels vaguely unscientific. But how does the argument stand up to a technical understanding of how explanation and belief work? Are there any hints we can take from the history of science, or from our understanding of the physical mechanisms underlying evidence? These are the questions this sequence will attempt to answer.

Hand vs. Fingers

Angry Atoms

Heat vs. Motion

Brain Breakthrough! It’s Made of Neurons!

When Anthropomorphism Became Stupid

A Priori

Reductive Reference

Zombies! Zombies?

Zombie Responses

The Generalized Anti-Zombie Principle

GAZP vs. GLUT

Belief in the Implied Invisible

Zombies: The Movie

Excluding the Supernatural

Psychic Powers

Quantum Physics and Many Worlds

Quantum mechanics is our best mathematical model of the universe to date, powerfully confirmed by a century of tests. However, interpreting what the experimental results mean—how and when the Schrödinger equation and Born’s rule interact—is a topic of much contention, with the main disagreement being between the Everett and the Copenhagen interpretations.

Yudkowsky uses this scientific controversy as a proving ground for some central ideas from previous sequences: map-territory distinctions, mysterious answers, Bayesianism, and Occam’s Razor.
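
For reference (these are the standard textbook equations, not anything specific to the essays), the two rules whose interaction is in dispute can each be written in one line: the Schrödinger equation, which evolves the quantum state deterministically, and the Born rule, which assigns probabilities to measurement outcomes:

\[
i\hbar\,\frac{\partial}{\partial t}\,\psi(x,t) \;=\; \hat{H}\,\psi(x,t),
\qquad
P(x) \;=\; \lvert \psi(x,t) \rvert^{2}.
\]

Roughly, collapse-style interpretations treat the Born rule as an extra physical process that interrupts the deterministic evolution, while the Everett interpretation keeps only the Schrödinger evolution and tries to recover the Born probabilities from within it.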

Quantum Explanations

Configurations and Amplitude

Joint Configurations

Distinct Configurations

Collapse Postulates

Decoherence is Simple

Decoherence is Falsifiable and Testable

Privileging the Hypothesis

Living in Many Worlds

Quantum Non-Realism

If Many-Worlds Had Come First

Where Philosophy Meets Science

Thou Art Physics

Many Worlds, One Best Guess

Science and Rationality

The final sequence in this book ties these ideas together and draws some conclusions about the strength of our scientific institutions.

The Failures of Eld Science

The Dilemma: Science or Bayes?

Science Doesn’t Trust Your Rationality

When Science Can’t Help

Science Isn’t Strict Enough

Do Scientists Already Know This Stuff?

No Safe Defense, Not Even Science

Changing the Definition of Science

Faster Than Science

Einstein’s Speed

That Alien Message

My Childhood Role Model

Einstein’s Superpowers

Class Project

Interlude

A Technical Explanation of Technical Explanation

Mere Goodness

What makes something valuable—morally, or aesthetically, or prudentially? These three sequences ask how we can justify, revise, and naturalize our values and desires. The aim will be to find a way to understand our goals without compromising our efforts to actually achieve them. Here the biggest challenge is knowing when to trust your messy, complicated case-by-case impulses about what’s right and wrong, and when to replace them with simple exceptionless principles.

Fake Preferences

On failed attempts at theories of value.

Ends: An Introduction

Not for the Sake of Happiness (Alone)

Fake Selfishness

Fake Morality

Fake Utility Functions

Detached Lever Fallacy

Dreams of AI Design

The Design Space of Minds-In-General

Value Theory

On obstacles to developing a new theory, and some intuitively desirable features of such a theory.

Where Recursive Justification Hits Bottom

My Kind of Reflection

No Universally Compelling Arguments

Created Already In Motion

Sorting Pebbles Into Correct Heaps

2-Place and 1-Place Words

What Would You Do Without Morality?

Changing Your Metaethics

Could Anything Be Right?

Morality as Fixed Computation

Magical Categories

The True Prisoner’s Dilemma

Sympathetic Minds

High Challenge

Serious Stories

Value is Fragile

The Gift We Give To Tomorrow

Quantified Humanism

On the tricky question of how we should apply such theories to our ordinary moral intuitions and decision-making.

One Life Against the World

The Allais Paradox

Zut Allais!

Feeling Moral

The “Intuitions” Behind “Utilitarianism”

Ends Don’t Justify Means (Among Humans)

Ethical Injunctions

Something to Protect

When (Not) To Use Probabilities

Newcomb’s Problem and Regret of Rationality

Interlude

Twelve Virtues of Rationality

Becoming Stronger

How can individuals and communities put all this into practice? These three sequences begin with an autobiographical account of Yudkowsky’s own biggest philosophical blunders, with advice on how he thinks others might do better. The book closes with recommendations for developing evidence-based applied rationality curricula, and for forming groups and institutions to support interested students, educators, researchers, and friends.

Yudkowsky’s Coming of Age

This sequence provides a last in-depth illustration of the dynamics of irrational belief, this time spotlighting the author’s own intellectual history.

Beginnings: An Introduction

My Childhood Death Spiral

My Best and Worst Mistake

Raised in Technophilia

A Prodigy of Refutation

The Sheer Folly of Callow Youth

That Tiny Note of Discord

Fighting a Rearguard Action Against the Truth

My Naturalistic Awakening

The Level Above Mine

The Magnitude of His Own Folly

Beyond the Reach of God

My Bayesian Enlightenment

Challenging the Difficult

This sequence asks what it takes to solve a truly difficult problem—including demands that go beyond epistemic rationality.

Trying to Try

Use the Try Harder, Luke

On Doing the Impossible

Make an Extraordinary Effort

Shut up and do the impossible!

Final Words

The Craft and the Community

Discusses rationality groups and group rationality, raising the questions:

  • Can rationality be learned and taught?

  • If so, how much improvement is possible?

  • How can we be confident we’re seeing a real effect in a rationality intervention, and picking out the right cause?

  • What community norms would make this process of bettering ourselves easier?

  • Can we effectively collaborate on large-scale problems without sacrificing our freedom of thought and conduct?

Above all: What’s missing? What should be in the next generation of rationality primers—the ones that replace this text, improve on its style, test its prescriptions, supplement its content, and branch out in altogether new directions?

Raising the Sanity Waterline

A Sense That More Is Possible

Epistemic Viciousness

Schools Proliferating Without Evidence

3 Levels of Rationality Verification

Why Our Kind Can’t Cooperate

Tolerate Tolerance

Your Price for Joining

Can Humanism Match Religion’s Output?

Church vs. Taskforce

Rationality: Common Interest of Many Causes

Helpless Individuals

Money: The Unit of Caring

Purchase Fuzzies and Utilons Separately

Bystander Apathy

Collective Apathy and the Internet

Incremental Progress and the Valley

Bayesians vs. Barbarians

Beware of Other-Optimizing

Practical Advice Backed By Deep Theories

The Sin of Underconfidence

Go Forth and Create the Art!