Less Wrong/2009 Articles/Summaries

Free to Optimize

It may be better to create a world that operates by better rules, rules that you can understand and use to optimize your own future, than to create a world that includes some sort of deity who can be prayed to. Humans’ reluctance to have their futures controlled by an outside source is a nontrivial part of morality.

The Uses of Fun (Theory)

Fun Theory is important for replying to critics of human progress; for inspiring people to keep working on human progress; for refuting religious arguments that the world could possibly have been benevolently designed; for showing that religious Heavens show the signature of the same human biases that torpedo other attempts at Utopia; and for appreciating the great complexity of our values and of a life worth living, which requires a correspondingly strong effort of AI design to create AIs that can play good roles in a good future.

Growing Up is Hard

Each part of the human brain is optimized for behaving correctly, assuming that the rest of the brain is operating exactly as expected. Change one part, and the rest of your brain may not work as well. Increasing a human’s intelligence is not a trivial problem.

Changing Emotions

Creating new emotions seems like a desirable aspect of many parts of Fun Theory, but this is not to be trivially postulated. It’s the sort of thing best done with superintelligent help, and slowly and conservatively even then. We can illustrate these difficulties by trying to translate the short English phrase “change sex” into a cognitive transformation of extraordinary complexity and many hidden subproblems.

Rationality Quotes 21
Emotional Involvement

Since the events in video games have no actual long-term consequences, playing a video game is not likely to be nearly as emotionally involving as much less dramatic events in real life. The supposed Utopia of playing lots of cool video games forever, is life as a series of disconnected episodes with no lasting consequences. Our current emotions are bound to activities that were subgoals of reproduction in the ancestral environment—but we now pursue these activities as independent goals regardless of whether they lead to reproduction.

Rationality Quotes 22
Serious Stories

Stories and lives are optimized according to rather different criteria. Advice on how to write fiction will tell you that “stories are about people’s pain” and “every scene must end in disaster”. I once assumed that it was not possible to write any story about a successful Singularity because the inhabitants would not be in any pain; but something about the final conclusion that the post-Singularity world would contain no stories worth telling seemed alarming. Stories in which nothing ever goes wrong, are painful to read; would a life of endless success have the same painful quality? If so, should we simply eliminate that revulsion via neural rewiring? Pleasure probably does retain its meaning in the absence of pain to contrast it; they are different neural systems. The present world has an imbalance between pain and pleasure; it is much easier to produce severe pain than correspondingly intense pleasure. One path would be to address the imbalance and create a world with more pleasures, and free of the more grindingly destructive and pointless sorts of pain. Another approach would be to eliminate pain entirely. I feel like I prefer the former approach, but I don’t know if it can last in the long run.

Rationality Quotes 23
Continuous Improvement

Humans seem to be on a hedonic treadmill; over time, we adjust to any improvements in our environment—after a month, the new sports car no longer seems quite as wonderful. This aspect of our evolved psychology is not surprising: it is a rare organism in a rare environment whose optimal reproductive strategy is to rest with a smile on its face, feeling happy with what it already has. To entirely delete the hedonic treadmill seems perilously close to tampering with Boredom itself. Is there enough fun in the universe for a transhuman to jog off the treadmill—improve their life continuously, leaping to ever-higher hedonic levels before adjusting to the previous one? Can ever-higher levels of pleasure be created by the simple increase of ever-larger floating-point numbers in a digital pleasure center, or would that fail to have the full subjective quality of happiness? If we continue to bind our pleasures to novel challenges, can we find higher levels of pleasure fast enough, without cheating? The rate at which value can increase as more bits are added, and the rate at which value must increase for eudaimonia, together determine the lifespan of a mind. If minds must use exponentially more resources over time in order to lead a eudaimonic existence, their subjective lifespan is measured in mere millennia even if they can draw on galaxy-sized resources.
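
A back-of-the-envelope sketch in Python (every number below is an illustrative assumption, not a figure from the post) of why exponentially growing resource requirements leave subjective lifespan growing only logarithmically with the resources available:

```python
import math

# Illustrative assumptions only: a brain-sized starting point, a galaxy-sized
# resource ceiling, and required resources doubling every 10 subjective years.
brain_kg = 1.5            # rough mass of one human brain, in kilograms
galaxy_kg = 1e42          # very rough mass of a galaxy, in kilograms
doubling_time_years = 10  # assumed doubling time of required resources

resource_ratio = galaxy_kg / brain_kg
doublings = math.log2(resource_ratio)            # ~139 doublings available
lifespan_years = doublings * doubling_time_years

print(f"~{doublings:.0f} doublings -> ~{lifespan_years:.0f} subjective years")
# On the order of 1,400 subjective years: "mere millennia," even with a galaxy.
```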

Eutopia is Scary

If a citizen of the Past were dropped into the Present world, they would be pleasantly surprised along at least some dimensions; they would also be horrified, disgusted, and frightened. This is not because our world has gone wrong, but because it has gone right. A true Future gone right would, realistically, be shocking to us along at least some dimensions. This may help explain why most literary Utopias fail; as George Orwell observed, “they are chiefly concerned with avoiding fuss”. Heavens are meant to sound like good news; political utopias are meant to show how neatly their underlying ideas work. Utopia is reassuring, unsurprising, and dull. Eutopia would be scary. (Of course the vast majority of scary things are not Eutopian, just entropic.) Try to imagine a genuinely better world in which you would be out of place, not a world that would make you smugly satisfied at how well all your current ideas had worked. This proved to be a very important exercise when I tried it; it made me realize that all my old proposals had been optimized to sound safe and reassuring.

Building Weirdtopia

Utopia and Dystopia both confirm the moral sensibilities you started with; whether the world is a libertarian utopia of government non-interference, or a hellish dystopia of government intrusion and regulation, either way you get to say “Guess I was right all along.” To break out of this mold, write down the Utopia, and the Dystopia, and then try to write down the Weirdtopia—an arguably-better world that zogs instead of zigging or zagging. (Judging from the comments, this exercise seems to have mostly failed.)

She has joined the Conspiracy
Justified Expectation of Pleasant Surprises

A pleasant surprise probably has a greater hedonic impact than being told about the same positive event long in advance—hearing about the positive event is good news in the moment of first hearing, but you don’t have the gift actually in hand. Then you have to wait, perhaps for a long time, possibly comparing the expected pleasure of the future to the lesser pleasure of the present. This argues that if the same pleasant events would occur either way, you would prefer a world in which they are kept secret until they occur over one in which you are told about them long in advance. The importance of hope is widely appreciated—people who do not expect their lives to improve in the future are less likely to be happy in the present—but the importance of vague hope may be understated.

Seduced by Imagination

Vagueness usually has a bad name in rationality, but the Future is something about which, in fact, we do not possess strong reliable specific information. Vague (but justified!) hopes may also be hedonically better. But a more important caution for today’s world is that highly specific pleasant scenarios can exert a dangerous power over human minds—suck out our emotional energy, make us forget what we don’t know, and cause our mere actual lives to pale by comparison. (This post is not about Fun Theory proper, but it contains an important warning about how not to use Fun Theory.)

Getting Nearer

How should rationalists use their near and far modes of thinking? And how should knowing about near versus far modes influence how we present the things we believe to other people?

In Praise of Boredom

“Boredom” is an immensely subtle and important aspect of human values, nowhere near as straightforward as it sounds to a human. We don’t want to get bored with breathing or with thinking. We do want to get bored with playing the same level of the same video game over and over. We don’t want changing the shade of the pixels in the game to make it stop counting as “the same game”. We want a steady stream of novelty, rather than spending most of our time playing the best video game level so far discovered (over and over) and occasionally trying out a different video game level as a new candidate for “best”. These considerations would not arise in the utility function of a typical expected utility maximizer.

Sympathetic Minds

Mirror neurons are neurons that fire both when performing an action oneself, and watching someone else perform the same action—for example, a neuron that fires when you raise your hand or watch someone else raise theirs. We predictively model other minds by putting ourselves in their shoes, which is empathy. But some of our desire to help relatives and friends, or be concerned with the feelings of allies, is expressed as sympathy, feeling what (we believe) they feel. Like “boredom”, the human form of sympathy would not be expected to arise in an arbitrary expected-utility-maximizing AI. Most such agents would regard any other agents in their environment as a special case of complex systems to be modeled or optimized; they would not feel what those agents feel.

Interpersonal Entanglement

Our sympathy with other minds makes our interpersonal relationships one of the most complex aspects of human existence. Romance, in particular, is more complicated than being nice to friends and kin, negotiating with allies, or outsmarting enemies—it contains aspects of all three. Replacing human romance with anything simpler or easier would decrease the peak complexity of the human species—a major step in the wrong direction, it seems to me. This is my problem with proposals to give people perfect, nonsentient sexual/​romantic partners, which I usually refer to as “catgirls” (“catboys”). The human species does have a statistical sex problem: evolution has not optimized the average man to make the average woman happy or vice versa. But there are less sad ways to solve this problem than both genders giving up on each other and retreating to catgirls/​catboys.

Failed Utopia #4-2

A fictional short story illustrating some of the ideas in Interpersonal Entanglement above. (Many commenters seemed to like this story, and some said that the ideas were easier to understand in this form.)

Investing for the Long Slump

What should you do if you think that the world’s economy is going to stay bad for a very long time? How could such a scenario happen?

Higher Purpose

Having a Purpose in Life consistently shows up as something that increases stated well-being. Of course, the problem with trying to pick out “a Purpose in Life” in order to make yourself happier, is that this doesn’t take you outside yourself; it’s still all about you. To find purpose, you need to turn your eyes outward to look at the world and find things there that you care about—rather than obsessing about the wonderful spiritual benefits you’re getting from helping others. In today’s world, most of the highest-priority legitimate Causes consist of large groups of people in extreme jeopardy: Aging threatens the old, starvation threatens the poor, extinction risks threaten humanity as a whole. If the future goes right, many and perhaps all such problems will be solved—depleting the stream of victims to be helped. Will the future therefore consist of self-obsessed individuals, with nothing to take them outside themselves? I suggest, though, that even if there were no large groups of people in extreme jeopardy, we would still, looking around, find things outside ourselves that we cared about—friends, family; truth, freedom… Nonetheless, if the Future goes sufficiently well, there will come a time when you could search the whole of civilization, and never find a single person so much in need of help, as dozens you now pass on the street. If you do want to save someone from death, or help a great many people, then act now; your opportunity may not last, one way or another.

Rationality Quotes 24
The Fun Theory Sequence

Describes some of the many complex considerations that determine what sort of happiness we most prefer to have—given that many of us would decline to just have an electrode planted in our pleasure centers.

BHTV: Yudkowsky /​ Wilkinson
31 Laws of Fun

A brief summary of principles for writing fiction set in a eutopia.

OB Status Update
Rationality Quotes 25
Value is Fragile

An interesting universe, one that would be incomprehensible to us today, is what the future looks like if things go right. There are many things that humans value such that, if you did everything else right when building an AI but left out that one thing, the future would wind up looking dull, flat, pointless, or empty. Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals will contain almost nothing of worth.

Three Worlds Collide (0/​8)
The Baby-Eating Aliens (1/​8)

Future explorers discover an alien species, and learn something unpleasant about its civilization.

War and/​or Peace (2/​8)

The true prisoner’s dilemma against aliens. The conference struggles to decide the appropriate course of action.

The Super Happy People (3/​8)

Humanity encounters new aliens that see the existence of pain amongst humans as morally unacceptable.

Interlude with the Confessor (4/​8)

Akon talks things over with the Confessor, and receives a history lesson.

Three Worlds Decide (5/​8)

The Superhappies propose a compromise.

Normal Ending: Last Tears (6/​8)

Humanity accepts the Superhappies’ bargain.

True Ending: Sacrificial Fire (7/​8)

The Impossible Possible World tries to save humanity.

Epilogue: Atonement (8/​8)

The last moments aboard the Impossible Possible World.

The Thing That I Protect

The cause that drives Yudkowsky isn’t Friendly AI, and it isn’t even specifically about preserving human values. It’s simply about a future that’s a lot better than the present.

...And Say No More Of It

In the previous couple of months, Overcoming Bias had focused too much on singularity-related issues and not enough on rationality. A two-month moratorium on the topic of the singularity/intelligence explosion is imposed.

(Moral) Truth in Fiction?

It is possible to convey moral ideas in a clearer way through fiction than through abstract argument. Stories may also help us get closer to thinking about moral issues in near mode. Don’t discount moral arguments just because they’re written as fiction.

Informers and Persuaders

A purely hypothetical scenario about a world containing some authors trying to persuade people of a particular theory, and some authors simply trying to share valuable information.

Cynicism in Ev-Psych (and Econ?)

Evolutionary Psychology and Microeconomics seem to develop different types of cynical theories, and are cynical about different things.

The Evolutionary-Cognitive Boundary

It’s worth drawing a sharp boundary between ideas about evolutionary reasons for behavior, and cognitive reasons for behavior.

An Especially Elegant Evpsych Experiment

An experiment comparing expected parental grief at the death of a child at different ages to the reproductive value of children at that age in a hunter-gatherer tribe.

Rationality Quotes 26
An African Folktale

A story that seems to point to some major cultural differences.

Cynical About Cynicism

Much of cynicism seems to be about signaling sophistication, rather than sharing uncommon, true, and important insights.

Good Idealistic Books are Rare

Much of our culture is the official view, not the idealistic view.

Against Maturity

Dividing the world up into “childish” and “mature” is not a useful way to think.

Pretending to be Wise

Trying to signal wisdom or maturity by taking a neutral position is very seldom the right course of action.

Wise Pretensions v.0

An earlier post, on the same topic as yesterday’s post.

Rationality Quotes 27
Fairness vs. Goodness

An experiment in which two unprepared subjects play an asymmetric version of the Prisoner’s Dilemma. Is the best outcome the one where each player gets as many points as possible, or the one in which each player gets about the same number of points?
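
To make the fairness-versus-goodness question concrete, here is a minimal sketch with a hypothetical asymmetric payoff matrix (the numbers are invented for illustration and are not the ones used in the experiment the post describes):

```python
# Hypothetical asymmetric Prisoner's Dilemma. Each cell maps the moves
# (A's move, B's move) to the points (A's payoff, B's payoff).
payoffs = {
    ("C", "C"): (6, 2),   # mutual cooperation: largest total, but unequal
    ("C", "D"): (0, 3),
    ("D", "C"): (8, 0),
    ("D", "D"): (2, 1),   # mutual defection: small total, nearly equal
}

for (a_move, b_move), (a, b) in payoffs.items():
    print(f"A:{a_move} B:{b_move} -> A={a}, B={b}, total={a + b}, gap={abs(a - b)}")

# "Goodness" (maximize total points) favors mutual cooperation; "fairness"
# (equalize points) is better served by outcomes with a small gap, even
# though they leave fewer points on the table overall.
```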

On Not Having an Advance Abyssal Plan

Don’t say that you’ll figure out a solution to the worst case scenario if the worst case scenario happens. Plan it out in advance.

About Less Wrong
Formative Youth

People underestimate the extent to which their own beliefs and attitudes are influenced by their experiences as a child.

Tell Your Rationalist Origin Story
Markets are Anti-Inductive

The standard theory of efficient markets says that exploitable regularities in the past shouldn’t be exploitable in the future. If everybody knows that “stocks have always gone up”, then there’s no reason to sell them at today’s price, so the price gets bid up until the expected gain disappears.

Issues, Bugs, and Requested Features
The Most Important Thing You Learned
The Most Frequently Useful Thing
That You’d Tell All Your Friends
Test Your Rationality

You should try hard and often to test your rationality, but how can you do that?

Unteachable Excellence

If it were possible to reliably teach people how to become exceptional, then exceptional performance would no longer be exceptional.

The Costs of Rationality
Teaching the Unteachable

There are many things we do without being able to articulate how we do them. Teaching them is therefore a challenge.

No, Really, I’ve Deceived Myself

Some people who seem to have fallen into self-deception haven’t actually deceived themselves; they merely believe that they have deceived themselves, without having actually done so.

The ethic of hand-washing and community epistemic practice
Belief in Self-Deception

Deceiving yourself is harder than it seems. What looks like a successfully adopted false belief may actually be just a belief in false belief.

Rationality and Positive Psychology
Posting now enabled
Kinnaird’s truels
Information cascades
Is it rational to take psilocybin?
Does blind review slow down science?
Formalization is a rationality technique
Slow down a little… maybe?
Checklists
The Golem
Simultaneously Right and Wrong
Moore’s Paradox

People often mistake reasons for endorsing a proposition for reasons to believe that proposition.

It’s the Same Five Dollars!
Lies and Secrets
The Mystery of the Haunted Rationalist
Don’t Believe You’ll Self-Deceive

It may be wise to tell yourself that you will not be able to successfully deceive yourself, because by telling yourself this, you may make it true.

The Wrath of Kahneman
The Mistake Script
LessWrong anti-kibitzer (hides comment authors and vote counts)
You May Already Be A Sinner
Striving to Accept

Trying extra hard to believe something seems like Dark Side Epistemology, but what about trying extra hard to accept something that you know is true?

Software tools for community truth-seeking
Wanted: Python open source volunteers
Selective processes bring tag-alongs (but not always!)
Adversarial System Hats
Beginning at the Beginning
The Apologist and the Revolutionary
Raising the Sanity Waterline

Behind every particular failure of social rationality is a larger and more general failure of social rationality; even if all religious content were deleted tomorrow from all human minds, the larger failures that permit religion would still be present. Religion may serve the function of an asphyxiated canary in a coal mine—getting rid of the canary doesn’t get rid of the gas. Even a complete social victory for atheism would only be the beginning of the real work of rationalists. What could you teach people without ever explicitly mentioning religion, that would raise their general epistemic waterline to the point that religion went underwater?

So you say you’re an altruist...
A Sense That More Is Possible

The art of human rationality may not have been much developed because its practitioners lack a sense that vastly more is possible. The level of expertise that most rationalists strive to develop is not on a par with the skills of a professional mathematician—more like that of a strong casual amateur. Self-proclaimed “rationalists” don’t seem to get huge amounts of personal mileage out of their craft, and no one sees a problem with this. Yet rationalists get less systematic training in a less systematic context than a first-dan black belt gets in hitting people.

Talking Snakes: A Cautionary Tale
Boxxy and Reagan
Dialectical Bootstrapping
Is Santa Real?
Epistemic Viciousness

An essay by Gillian Russell on “Epistemic Viciousness in the Martial Arts” generalizes amazingly to possible and actual problems with building a community around rationality. Most notably the extreme dangers associated with “data poverty”—the difficulty of testing the skills in the real world. But also such factors as the sacredness of the dojo, the investment in teachings long-practiced, the difficulty of book learning that leads into the need to trust a teacher, deference to historical masters, and above all, living in data poverty while continuing to act as if the luxury of trust is possible.

On the Care and Feeding of Young Rationalists
The Least Convenient Possible World
Closet survey #1
Soulless morality
The Skeptic’s Trilemma
Schools Proliferating Without Evidence

The branching schools of “psychotherapy”, another domain in which experimental verification was weak (nonexistent, actually), show that an aspiring craft lives or dies by the degree to which it can be tested in the real world. In the absence of that testing, one becomes prestigious by inventing yet another school and having students, rather than excelling at any visible performance criterion. The field of hedonic psychology (happiness studies) began, to some extent, with the realization that you could measure happiness—that there was a family of measures that by golly did validate well against each other. The act of creating a new measurement creates new science; if it’s a good measurement, you get good science.

Really Extreme Altruism
Storm by Tim Minchin
3 Levels of Rationality Verification

How far the craft of rationality can be taken, depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational. A “reputational” test is some real-world problem that tests the ability of a teacher or a school (like running a hedge fund, say) - “keeping it real”, but without being able to break down exactly what was responsible for success. An “experimental” test is one that can be run on each of a hundred students (such as a well-validated survey). An “organizational” test is one that can be used to preserve the integrity of organizations by validating individuals or small groups, even in the face of strong incentives to game the test. The strength of solution invented at each level will determine how far the craft of rationality can go in the real world.

The Tragedy of the Anticommons
Are You a Solar Deity?
In What Ways Have You Become Stronger?

Brainstorming verification tests: along what dimensions do you think you’ve improved due to “rationality”?

Taboo “rationality,” please.
Science vs. art
What Do We Mean By “Rationality”?

When we talk about rationality, we’re generally talking about either epistemic rationality (systematic methods of finding out the truth) or instrumental rationality (systematic methods of making the world more like we would like it to be). We can discuss these in the forms of probability theory and decision theory, but this doesn’t fully cover the difficulty of being rational as a human. There is a lot more to rationality than just the formal theories.

Comments for “Rationality”
The “Spot the Fakes” Test
On Juvenile Fiction
Rational Me or We?
Dead Aid
Tarski Statements as Rationalist Exercise
The Pascal’s Wager Fallacy Fallacy

People hear about a gamble involving a big payoff, and dismiss it as a form of Pascal’s Wager. But the size of the payoff is not the flaw in Pascal’s Wager. Just because an option has a very large potential payoff does not mean that the probability of getting that payoff is small, or that there are other possibilities that will cancel with it.

Never Leave Your Room
Rationalist Storybooks: A Challenge
A corpus of our community’s knowledge
Little Johny Bayesian
How to Not Lose an Argument
Counterfactual Mugging
Rationalist Fiction

What works of fiction are out there that show characters who have acquired their skills at rationality through practice, and who we can watch in the act of employing those skills?

Rationalist Poetry Fans, Unite!
Precommitting to paying Omega.
Why Our Kind Can’t Cooperate

The atheist/​libertarian/​technophile/​sf-fan/​early-adopter/​programmer/​etc crowd, aka “the nonconformist cluster”, seems to be stunningly bad at coordinating group projects. There are a number of reasons for this, but one of them is that people are as reluctant to speak agreement out loud, as they are eager to voice disagreements—the exact opposite of the situation that obtains in more cohesive and powerful communities. This is not rational either! It is dangerous to be half a rationalist (in general), and this also applies to teaching only disagreement but not agreement, or only lonely defiance but not coordination. The pseudo-rationalist taboo against expressing strong feelings probably doesn’t help either.

Just a reminder: Scientists are, technically, people.
Support That Sounds Like Dissent
Tolerate Tolerance

One of the likely characteristics of someone who sets out to be a “rationalist” is a lower-than-usual tolerance for flawed thinking. This makes it very important to tolerate other people’s tolerance—to avoid rejecting them because they tolerate people you wouldn’t—since otherwise we must all have exactly the same standards of tolerance in order to work together, which is unlikely. Even if someone has a nice word to say about complete lunatics and crackpots—so long as they don’t literally believe the same ideas themselves—try to be nice to them? Intolerance of tolerance corresponds to punishment of non-punishers, a very dangerous game-theoretic idiom that can lock completely arbitrary systems in place even when they benefit no one at all.

Mind Control and Me
Individual Rationality Is a Matter of Life and Death
The Power of Positivist Thinking
Don’t Revere The Bearer Of Good Info
You’re Calling *Who* A Cult Leader?

Paul Graham gets exactly the same accusations about “cults” and “echo chambers” and “coteries” that I do, in exactly the same tone—e.g. comparing the long hours worked by Y Combinator startup founders to the sleep-deprivation tactic used in cults, or claiming that founders were asked to move to the Bay Area startup hub as a cult tactic of separation from friends and family. This is bizarre, considering our relative surface risk factors. It just seems to be a failure mode of the nonconformist community in general. By far the most cultish-looking behavior on Hacker News is people trying to show off how willing they are to disagree with Paul Graham, which, I can personally testify, feels really bizarre when you’re the target. Admiring someone shouldn’t be so scary—I don’t hold back so much when praising e.g. Douglas Hofstadter; in this world there are people who have pulled off awesome feats and it is okay to admire them highly.

Cached Selves
Eliezer Yudkowsky Facts
When Truth Isn’t Enough
BHTV: Yudkowsky & Adam Frank on “religious experience”
I’m confused. Could someone help?
Playing Video Games In Shuffle Mode
Book: Psychiatry and the Human Condition
Thoughts on status signals
Bogus Pipeline, Bona Fide Pipeline
On Things that are Awesome

Seven thoughts: I can list more than one thing that is awesome; when I think of “Douglas Hofstadter” I am really thinking of his all-time greatest work; the greatest work is not the person; when we imagine other people we are imagining their output, so the real Douglas Hofstadter is the source of “Douglas Hofstadter”; I most strongly get the sensation of awesomeness when I see someone outdoing me overwhelmingly, at some task I’ve actually tried; we tend to admire unique detailed awesome things and overlook common nondetailed awesome things; religion and its bastard child “spirituality” tends to make us overlook human awesomeness.

Hyakujo’s Fox
Terrorism is not about Terror
The Implicit Association Test
Contests vs. Real World Problems
The Sacred Mundane

There are a lot of bad habits of thought that have developed to defend religious and spiritual experience. They aren’t worth saving, even if we discard the original lie. Let’s just admit that we were wrong, and enjoy the universe that’s actually here.

Extreme updating: The devil is in the missing details
Spock’s Dirty Little Secret
The Good Bayesian
Fight Biases, or Route Around Them?
Why *I* fail to act rationally
Open Thread: March 2009
Two Blegs
Your Price for Joining

The game-theoretical puzzle of the Ultimatum game has its reflection in a real-world dilemma: How much do you demand that an existing group adjust toward you, before you will adjust toward it? Our hunter-gatherer instincts will be tuned to groups of 40 with very minimal administrative demands and equal participation, meaning that we underestimate the inertia of larger and more specialized groups and demand too much before joining them. In other groups this resistance can be overcome by affective death spirals and conformity, but rationalists think themselves too good for this—with the result that people in the nonconformist cluster often set their joining prices way way way too high, like a 50-way split with each player demanding 20% of the money. Nonconformists need to move in the direction of joining groups more easily, even in the face of annoyances and apparent unresponsiveness. If an issue isn’t worth personally fixing by however much effort it takes, it’s not worth a refusal to contribute.

Sleeping Beauty gets counterfactually mugged
The Mind Is Not Designed For Thinking
Crowley on Religious Experience
Can Humanism Match Religion’s Output?

Anyone with a simple and obvious charitable project—responding with food and shelter to a tidal wave in Thailand, say—would be better off by far pleading with the Pope to mobilize the Catholics, rather than with Richard Dawkins to mobilize the atheists. For so long as this is true, any increase in atheism at the expense of Catholicism will be something of a hollow victory, regardless of all other benefits. Can no rationalist match the motivation that comes from the irrational fear of Hell? Or does the real story have more to do with the motivating power of physically meeting others who share your cause, and group norms of participating?

On Seeking a Shortening of the Way
Altruist Coordination—Central Station
Less Wrong Facebook Page
The Hidden Origins of Ideas
Defense Against The Dark Arts: Case Study #1
Church vs. Taskforce

Churches serve a role of providing community—but they aren’t explicitly optimized for this, because their nominal role is different. If we desire community without church, can we go one better in the course of deleting religion? There’s a great deal of work to be done in the world; rationalist communities might potentially organize themselves around good causes, while explicitly optimizing for community.

When It’s Not Right to be Rational
The Zombie Preacher of Somerset
Hygienic Anecdotes
Rationality: Common Interest of Many Causes

Many causes benefit particularly from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization. In the case of my own work this effect was strong enough that after years of bogging down I threw up my hands and explicitly recursed on creating rationalists. If such causes can come to terms with not individually capturing all the rationalists they create, then they can mutually benefit from mutual effort on creating rationalists. This cooperation may require learning to shut up about disagreements between such causes, and not fight over priorities, except in specialized venues clearly marked.

Ask LW: What questions to test in our rationality questionnaire?

Requesting suggestions for an actual survey to be run.

Bay area OB/​LW meetup, today, Sunday, March 29, at 5pm
Akrasia, hyperbolic discounting, and picoeconomics
Deliberate and spontaneous creativity
Most Rationalists Are Elsewhere
Framing Effects in Anthropology
Kling, Probability, and Economics
Helpless Individuals

When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all. And in fact, the vast majority of large modern-day institutions simply fail to exist in the first place. This is why funding of Science is largely through money thrown at Science rather than donations from individuals—research isn’t a good emotional fit for the rare problems that individuals can manage to coordinate on. In fact very few things are, which is why e.g. 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress. Modern humanity manages to put forth very little in the way of coordinated individual effort to serve our collective individual interests.

The Benefits of Rationality?
Money: The Unit of Caring

Omohundro’s resource balance principle implies that the inside of any approximately rational system has a common currency of expected utilons. In our world, this common currency is called “money” and it is the unit of how much society cares about something—a brutal yet obvious point. Many people, seeing a good cause, would prefer to help it by donating a few volunteer hours. But this avoids the tremendous gains of comparative advantage, professional specialization, and economies of scale—the reason we’re not still in caves, the only way anything ever gets done in this world, the tools grownups use when anyone really cares. Donating hours worked within a professional specialty and paying-customer priority, whether directly, or by donating the money earned to hire other professional specialists, is far more effective than volunteering unskilled hours.
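
A toy calculation (the wage and labor-value figures below are hypothetical, chosen only to illustrate the comparative-advantage point about working in your specialty and donating the earnings rather than volunteering unskilled hours):

```python
# Hypothetical figures, for illustration only.
specialist_hourly_wage = 200.0  # what the donor earns per hour in their specialty
unskilled_hour_value = 15.0     # what the charity would pay for an hour of unskilled labor
hours = 5

value_if_volunteering = hours * unskilled_hour_value   # labor the charity actually receives
value_if_donating = hours * specialist_hourly_wage     # money that can hire other specialists

print(f"Volunteering {hours} unskilled hours delivers about ${value_if_volunteering:.0f}")
print(f"Working those hours and donating the pay delivers about ${value_if_donating:.0f}")
# Specialization and comparative advantage make the donated earnings go much further.
```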

Building Communities vs. Being Rational
Degrees of Radical Honesty
Introducing CADIE
Purchase Fuzzies and Utilons Separately

Wealthy philanthropists typically make the mistake of trying to purchase warm fuzzy feelings, status among friends, and actual utilitarian gains, simultaneously; this results in vague pushes along all three dimensions and a mediocre final result. It should be far more effective to spend some money/​effort on buying altruistic fuzzies at maximum optimized efficiency (e.g. by helping people in person and seeing the results in person), buying status at maximum efficiency (e.g. by donating to something sexy that you can brag about, regardless of effectiveness), and spending most of your money on expected utilons (chosen through sheer cold-blooded shut-up-and-multiply calculation, without worrying about status or fuzzies).

Proverbs and Cached Judgments: the Rolling Stone
You don’t need Kant
Accuracy Versus Winning
Wrong Tomorrow
Selecting Rationalist Groups

Trying to breed e.g. egg-laying chickens by individual selection can produce odd side effects on the farm level, since a more dominant hen can produce more egg mass at the expense of other hens. Group selection is nearly impossible in Nature, but easy to impose in the laboratory, and applying group selection to hens produced substantial increases in efficiency. Though most of my essays are about individual rationality—and indeed, Traditional Rationality also praises the lone heretic more than evil Authority—the real effectiveness of “rationalists” may end up determined by their performance in groups.

Aumann voting; or, How to vote when you’re ignorant
“Robot scientists can think for themselves”
Where are we?
The Brooklyn Society For Ethical Culture
Open Thread: April 2009
Rationality is Systematized Winning

The idea behind the statement “Rationalists should win” is not that rationality will make you invincible. It means that if someone who isn’t behaving according to your idea of rationality is outcompeting you, predictably and consistently, you should consider that you’re not the one being rational.

Another Call to End Aid to Africa
First London Rationalist Meeting upcoming
On dollars, utility, and crack cocaine
Incremental Progress and the Valley

The optimality theorems for probability theory and decision theory, are for perfect probability theory and decision theory. There is no theorem that incremental changes toward the ideal, starting from a flawed initial form, must yield incremental progress at each step along the way. Since perfection is unattainable, why dare to try for improvement? But my limited experience with specialized applications suggests that given enough progress, one can achieve huge improvements over baseline—it just takes a lot of progress to get there.

The First London Rationalist Meetup
Why Support the Underdog?
Off-Topic Discussion Thread: April 2009
Voting etiquette
Formalizing Newcomb’s
Supporting the underdog is explained by Hanson’s Near/​Far distinction
Real-Life Anthropic Weirdness

Extremely rare events can create bizarre circumstances in which people may not be able to effectively communicate about improbability.

Rationalist Wiki
Rationality Toughness Tests
Heuristic is not a bad word
Rationalists should beware rationalism
Newcomb’s Problem standard positions
Average utilitarianism must be correct?
Rationalist wiki, redux
What do fellow rationalists think about Mensa?
Extenuating Circumstances

You can excuse other people’s shortcomings on the basis of extenuating circumstances, but you shouldn’t do that with yourself.

On Comments, Voting, and Karma—Part I
Newcomb’s Problem vs. One-Shot Prisoner’s Dilemma
What isn’t the wiki for?
Eternal Sunshine of the Rational Mind
Of Lies and Black Swan Blowups
Whining-Based Communities

Many communities feed emotional needs by offering their members someone or something to blame for failure—say, those looters who don’t approve of your excellence. You can easily imagine some group of “rationalists” congratulating themselves on how reasonable they were, while blaming the surrounding unreasonable society for keeping them down. But this is not how real rationality works—there’s no assumption that other agents are rational. We all face unfair tests (and yes, they are unfair to different degrees for different people); and how well you do with your unfair tests, is the test of your existence. Rationality is there to help you win anyway, not to provide a self-handicapping excuse for losing. There are no first-person extenuating circumstances. There is absolutely no point in going down the road of mutual bitterness and consolation, about anything, ever.

Help, help, I’m being oppressed!
Zero-based karma coming through
E-Prime
Mandatory Secret Identities

This post was not well-received, but the point was to suggest that a student must at some point leave the dojo and test their skills in the real world. The aspiration of an excellent student should not consist primarily of founding their own dojo and having their own students.

Rationality, Cryonics and Pascal’s Wager
Less Wrong IRC Meetup
“Stuck In The Middle With Bruce”
Extreme Rationality: It’s Not That Great
“Playing to Win”

The term “playing to win” comes from Sirlin’s book and can be described as using every means necessary to win as long as those means are legal within the structure of the game being played.

Secret Identities vs. Groupthink
Silver Chairs, Paternalism, and Akrasia
Extreme Rationality: It Could Be Great
The uniquely awful example of theism
Beware of Other-Optimizing

Aspiring rationalists often vastly overestimate their own ability to optimize other people’s lives. They read nineteen webpages offering productivity advice that doesn’t work for them… and then encounter the twentieth page, or invent a new method themselves, and wow, it really works—they’ve discovered the true method. Actually, they’ve just discovered the one method in twenty that works for them, and their confident advice is no better than randomly selecting one of the twenty blog posts. Other-Optimizing is exceptionally dangerous when you have power over the other person—for then you’ll just believe that they aren’t trying hard enough.

How theism works
That Crisis thing seems pretty useful
Spay or Neuter Your Irrationalities
The Unfinished Mystery of the Shangri-La Diet

An intriguing dietary theory which appears to allow some people to lose substantial amounts of weight, but doesn’t appear to work at all for others.

Akrasia and Shangri-La

The Shangri-La diet works amazingly well for some people, but completely fails for others, for no known reason. Since the diet has a metabolic rationale and is not supposed to require willpower, its failure in my case and others’ is unambiguously mysterious. If it required a component of willpower, then I and others might be tempted to blame ourselves for not having enough willpower. The art of combating akrasia (willpower failure) has the same sort of mysteries and is in the same primitive state; we don’t know the deeper rule that explains why a trick works for one person but not another.

Maybe Theism Is OK
Metauncertainty
Is masochism necessary?
Missed Distinctions
Toxic Truth
Too much feedback can be a bad thing
Twelve Virtues booklet printing?
How Much Thought
Awful Austrians
Sunk Cost Fallacy
It’s okay to be (at least a little) irrational
Marketing rationalism
Bystander Apathy

The bystander effect is the phenomenon in which groups of people are less likely to take action than a lone individual. There are a few explanations for why this might be the case.

Persuasiveness vs Soundness
Declare your signaling and hidden agendas
GroupThink, Theism … and the Wiki
Collective Apathy and the Internet

The causes of bystander apathy are even worse on the Internet. There may be an opportunity here for a startup to deliberately try to avert bystander apathy in online group coordination.

Tell it to someone who doesn’t care
Bayesians vs. Barbarians

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory. There’s a certain concept of “rationality” which says that the rationalists inevitably lose, because the Barbarians believe in a heavenly afterlife if they die in battle, while the rationalists would all individually prefer to stay out of harm’s way. So the rationalist civilization is doomed; it is too elegant and civilized to fight the savage Barbarians… And then there’s the idea that rationalists should be able to (a) solve group coordination problems, (b) care a lot about other people and (c) win...

Actions and Words: Akrasia and the Fruit of Self-Knowledge
Mechanics without wrenches
I Changed My Mind Today—Canned Laughter
Of Gender and Rationality

Analysis of the gender imbalance that appears in “rationalist” communities, suggesting nine possible causes of the effect, and possible corresponding solutions.

Welcome to Less Wrong!
Instrumental Rationality is a Chimera
Practical rationality questionnaire
My Way

I sometimes think of myself as being like the protagonist in a classic SF labyrinth story, wandering further and further into some alien artifact, trying to radio back a description of what I’m seeing, so that I can be followed. But what I’m finding is not just the Way, the thing that lies at the center of the labyrinth; it is also my Way, the path that I would take to come closer to the center, from whatever place I started out. And yet there is still a common thing we are all trying to find. We should be aware that others’ shortest paths may not be the same as our own, but this is not the same as giving up the ability to judge or to share.

The Art of Critical Decision Making
The Trouble With “Good”
While we’re on the subject of meta-ethics...
Chomsky on reason and science
Anti-rationality quotes
Two-Tier Rationalism
My main problem with utilitarianism
Just for fun—let’s play a game.
Rationality Quotes—April 2009
The Epistemic Prisoner’s Dilemma
How a pathological procrastinor can lose weight (Anti-akrasia)
Atheist or Agnostic?
Great Books of Failure
Weekly Wiki Workshop and suggested articles
The True Epistemic Prisoner’s Dilemma
Spreading the word?
The ideas you’re not ready to post
Evangelical Rationality
The Sin of Underconfidence

When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result. That’s what makes a lot of cognitive subtasks so troublesome—you know you’re biased but you’re not sure how much, and if you keep tweaking you may overcorrect. The danger of underconfidence (overcorrecting for overconfidence) is that you pass up opportunities on which you could have been successful, fail to challenge problems that are difficult enough, lose forward momentum and adopt defensive postures, refuse to put the hypothesis of your inability to the test, and lose the hope of triumph needed to try hard enough to win. You should ask yourself “Does this way of thinking make me stronger, or weaker?”

Masochism vs. Self-defeat
Well-Kept Gardens Die By Pacifism

Good online communities die primarily by refusing to defend themselves, and so it has been since the days of Eternal September. Anyone acculturated by academia knows that censorship is a very grave sin… in their walled gardens where it costs thousands and thousands of dollars to enter. A community with internal politics will treat any attempt to impose moderation as a coup attempt (since internal politics seem of far greater import than invading barbarians). In rationalist communities this is probably an instance of underconfidence—mildly competent moderators are probably quite trustworthy to wield the banhammer. On Less Wrong, the community is the moderator (via karma) and you will need to trust yourselves enough to wield the power and keep the garden clear.

UC Santa Barbara Rationalists Unite—Saturday, 6PM
LessWrong Boo Vote (Stochastic Downvoting)
Proposal: Use the Wiki for Concepts
Escaping Your Past
Go Forth and Create the Art!

I’ve developed primarily the art of epistemic rationality, in particular, the arts required for advanced cognitive reductionism… arts like distinguishing fake explanations from real ones and avoiding affective death spirals. There is much else that needs developing to create a craft of rationality—fighting akrasia; coordinating groups; teaching, training, verification, and becoming a proper experimental science; developing better introductory literature… And yet it seems to me that there is a beginning barrier to surpass before you can start creating high-quality craft of rationality, having to do with virtually everyone who tries to think lofty thoughts going instantly astray, or never even realizing that a craft of rationality exists and that you ought to be studying cognitive science literature to create it. It’s my hope that my writings, as partial as they are, will serve to surpass this initial barrier. The rest I leave to you.

Fix it and tell us what you did
This Didn’t Have To Happen
Just a bit of humor...
What’s in a name? That which we call a rationalist...
Rational Groups Kick Ass
Instrumental vs. Epistemic—A Bardic Perspective
Programmatic Prediction markets
Cached Procrastination
Practical Advice Backed By Deep Theories

Knowledge of this heuristic might be useful in fighting akrasia.

(alternate summary:)

Practical advice is genuinely much, much more useful when it’s backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted. (Listed in increasing order of difficulty.) Stripping out the theories and giving the mere advice alone wouldn’t have nearly the same impact or even the same message; and oddly enough, translating experiments and math into practical advice seems to be a rare niche activity relative to academia. If there’s a distinctive LW style, this is it.

“Self-pretending” is not as useful as we think
Where’s Your Sense of Mystery?
Less Meta

The fact that this final series was on the craft and the community seems to have delivered a push in something of the wrong direction, (a) steering toward conversation about conversation and (b) making present accomplishment pale in the light of grander dreams. Time to go back to practical advice and deep theories, then.

SIAI call for skilled volunteers and potential interns
The Craft and the Community
Excuse me, would you like to take a survey?
Should we be biased?
Theism, Wednesday, and Not Being Adopted
The End (of Sequences)
Final Words

The conclusion of the Beisutsukai series.

Bayesian Cabaret
Verbal Overshadowing and The Art of Rationality
How Not to be Stupid: Starting Up
How Not to be Stupid: Know What You Want, What You Really Really Want
Epistemic vs. Instrumental Rationality: Approximations
What is control theory, and why do you need to know about it?
Re-formalizing PD
Generalizing From One Example

Generalization From One Example is the tendency to pay too much attention to the few anecdotal pieces of evidence you have personally experienced, and to model some general phenomenon on them. This is a special case of availability bias, and the way in which the mistake unfolds is closely related to the correspondence bias and the hindsight bias.

Wednesday depends on us.
How to come up with verbal probabilities
Fighting Akrasia: Incentivising Action
Fire and Motion
Fiction of interest
How Not to be Stupid: Adorable Maybes
Rationalistic Losing
Rationalist Role in the Information Age
Conventions and Confusing Continuity Conundrums
Open Thread: May 2009
Second London Rationalist Meeting upcoming—Sunday 14:00
TED Talks for Less Wrong
The mind-killer
What I Tell You Three Times Is True
Return of the Survey
Essay-Question Poll: Dietary Choices
Allais Hack—Transform Your Decisions!
Without models
Bead Jar Guesses

Applied scenario about forming priors.

Special Status Needs Special Support
How David Beats Goliath
How to use “philosophical majoritarianism”
Off Topic Thread: May 2009
Introduction Thread: May 2009
Consider Representative Data Sets
No Universal Probability Space
Wiki.lesswrong.com Is Live
Hardened Problems Make Brittle Models
Beware Trivial Inconveniences
On the Fence? Major in CS
Rationality is winning—or is it?
The First Koan: Drinking the Hot Iron Ball
Epistemic vs. Instrumental Rationality: Case of the Leaky Agent
Replaying History
Framing Consciousness
A Request for Open Problems
How Not to be Stupid: Brewing a Nice Cup of Utilitea
Step Back
You Are A Brain
No One Knows Stuff
Willpower Hax #487: Execute by Default
Rationality in the Media: Don’t (New Yorker, May 2009)
Survey Results
A Parable On Obsolete Ideologies
“Open-Mindedness”—the video
Religion, Mystery, and Warm, Soft Fuzzies
Cheerios: An “Untested New Drug”
Essay-Question Poll: Voting
Outward Change Drives Inward Change
Share Your Anti-Akrasia Tricks
Wanting to Want
“What Is Wrong With Our Thoughts”
Bad reasons for a rationalist to lose
Supernatural Math
Rationality quotes—May 2009
Positive Bias Test (C++ program)
Catchy Fallacy Name Fallacy (and Supporting Disagreement)
Inhibition and the Mind
Least Signaling Activities?
Brute-force Music Composition
Changing accepted public opinion and Skynet
Homogeneity vs. heterogeneity (or, What kind of sex is most moral?)
Saturation, Distillation, Improvisation: A Story About Procedural Knowledge And Cookies
This Failing Earth
The Wire versus Evolutionary Psychology
Dissenting Views
Eric Drexler on Learning About Everything
Anime Explains the Epimenides Paradox
Do Fandoms Need Awfulness?
Can we create a function that provably predicts the optimization power of intelligences?
Image vs. Impact: Can public commitment be counterproductive for achievement?
A social norm against unjustified opinions?
Taking Occam Seriously
The Onion Goes Inside The Biased Mind
The Frontal Syndrome
Open Thread: June 2009
Concrete vs Contextual values
Bioconservative and biomoderate singularitarian positions
Would You Slap Your Father? Article Linkage and Discussion
With whom shall I diavlog?
Mate selection for the men here
Third London Rationalist Meeting
Post Your Utility Function
Probability distributions and writing style
My concerns about the term ‘rationalist’
Honesty: Beyond Internal Truth
Macroeconomics, The Lucas Critique, Microfoundations, and Modeling in General
indexical uncertainty and the Axiom of Independence
London Rationalist Meetups bikeshed painting thread
The Aumann’s agreement theorem game (guess 2/3 of the average)
Expected futility for humans
You can’t believe in Bayes
Less wrong economic policy
The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It
Let’s reimplement EURISKO!
If it looks like utility maximizer and quacks like utility maximizer...
Typical Mind and Politics
Why safety is not safe
Rationality Quotes—June 2009
Readiness Heuristics
The two meanings of mathematical terms
The Laws of Magic
Intelligence enhancement as existential risk mitigation
Rationalists lose when others choose
Ask LessWrong: Human cognitive enhancement now?
Don’t Count Your Chickens...
Applied Picoeconomics
Representative democracy awesomeness hypothesis
The Physiology of Willpower
Time to See If We Can Apply Anything We Have Learned
Cascio in The Atlantic, more on cognitive enhancement as existential risk mitigation
ESR’s comments on some EY:OB/​LW posts
Nonparametric Ethics
Shane Legg on prospect theory and computational finance
The Domain of Your Utility Function
The Monty Maul Problem
Guilt by Association
Lie to me?
Richard Dawkins TV—Baloney Detection Kit video
Coming Out
The Great Brain is Located Externally

People don’t actually remember much of what they know; they only remember how to find it, and the fact that there is something to find. Thus, it’s important to know about what’s known in various domains, even without knowing the content.

Controlling your inner control circuits
What’s In A Name?
Atheism = Untheism + Antitheism
Book Review: Complications
Open Thread: July 2009
Fourth London Rationalist Meeting?
Rationality Quotes—July 2009
Harnessing Your Biases
Avoiding Failure: Fallacy Finding
Not Technically Lying
The enemy within
Media bias
Can chess be a game of luck?
The Dangers of Partial Knowledge of the Way: Failing in School
An interesting speed dating study
Can self-help be bad for you?
Causality does not imply correlation
Formalized math: dream vs reality
Causation as Bias (sort of)
Debate: Is short term planning in humans due to a short life or due to bias?
Jul 12 Bay Area meetup—Hanson, Vassar, Yudkowsky
Our society lacks good self-preservation mechanisms
Good Quality Heuristics
How likely is a failure of nuclear deterrence?
The Strangest Thing An AI Could Tell You
“Sex Is Always Well Worth Its Two-Fold Cost”
The Dirt on Depression
Fair Division of Black-Hole Negentropy: an Introduction to Cooperative Game Theory
Absolute denial for atheists
Causes of disagreements
The Popularization Bias
Zwicky’s Trifecta of Illusions
Are You Anosognosic?
Article upvoting
Sayeth the Girl
Timeless Decision Theory: Problems I Can’t Solve
An Akrasia Anecdote
Being saner about gender and rationality
Are you crazy?
Counterfactual Mugging v. Subjective Probability
Creating The Simple Math of Everything
Joint Distributions and the Slow Spread of Good Ideas
Chomsky, Sports Talk Radio, Media Bias, and Me
Outside Analysis and Blind Spots
Shut Up And Guess
Of Exclusionary Speech and Gender Politics
Missing the Trees for the Forest
Deciding on our rationality focus
Fairness and Geometry
It’s all in your head-land
An observation on cryocrastination
The Price of Integrity
Are calibration and rational decisions mutually exclusive? (Part one)
The Nature of Offense

People are offended by grabs for status.

AndrewH’s observation and opportunity costs
Are calibration and rational decisions mutually exclusive? (Part two)
Celebrate Trivial Impetuses
Freaky Fairness
Five Stages of Idolatry
Bayesian Flame
The Second Best
Bayesian Utility: Representing Preference by Probability Measures
Thomas C. Schelling’s “Strategy of Conflict”
Information cascades in scientific practice
The Obesity Myth
The Hero With A Thousand Chances
Pract: A Guessing and Testing Game
An Alternative Approach to AI Cooperation
Open Thread: August 2009
Pain
Suffering
Why You’re Stuck in a Narrative
Unspeakable Morality
The Difficulties of Potential People and Decision Making
Wits and Wagers
The usefulness of correlations
She Blinded Me With Science
The Machine Learning Personality Test
A Normative Rule for Decision-Changing Metrics
Rationality Quotes—August 2009
Why Real Men Wear Pink
The Objective Bayesian Programme
LW/​OB Rationality Quotes—August 2009
Exterminating life is rational
Robin Hanson’s lists of Overcoming Bias Posts
Fighting Akrasia: Finding the Source
A note on hypotheticals
Dreams with Damaged Priors
Would Your Real Preferences Please Stand Up?
Calibration fail
Guess Again
Misleading the witness
Utilons vs. Hedons
Deleting paradoxes with fuzzy logic
Sense, Denotation and Semantics
Towards a New Decision Theory
Fighting Akrasia: Survey Design Help Request
Minds that make optimal use of small amounts of sensory data
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds
Oh my God! It’s full of Nash equilibria!
Happiness is a Heuristic
Experiential Pica
Friendlier AI through politics
Singularity Summit 2009 (quick post)
Scott Aaronson’s “On Self-Delusion and Bounded Rationality”
Ingredients of Timeless Decision Theory
You have just been Counterfactually Mugged!
Evolved Bayesians will be biased
How inevitable was modern human civilization—data
Timeless Decision Theory and Meta-Circular Decision Theory
ESR’s New Take on Qualia
The Journal of (Failed) Replication Studies
Working Mantras
Decision theory: An outline of some upcoming posts
How does an infovore manage information overload?
Confusion about Newcomb is confusion about counterfactuals
Mathematical simplicity bias and exponential functions
A Rationalist’s Bookshelf: The Mind’s I (Douglas Hofstadter and Daniel Dennett, 1981)
Pittsburgh Meetup: Survey of Interest
Paper: Testing ecological models
The Twin Webs of Knowledge
Don’t be Pathologically Mugged!
Some counterevidence for human sociobiology
Cookies vs Existential Risk
Argument Maps Improve Critical Thinking
Great post on Reddit about accepting atheism
Optimal Strategies for Reducing Existential Risk
Open Thread: September 2009
Rationality Quotes—September 2009
LW/​OB Quotes—Fall 2009
Knowing What You Know
Decision theory: Why we need to reduce “could”, “would”, “should”
The Featherless Biped
The Sword of Good
Torture vs. Dust vs. the Presumptuous Philosopher: Anthropic Reasoning in UDT
Notes on utility function experiment
Counterfactual Mugging and Logical Uncertainty
Bay Area OB/LW Meetup Sep 12
Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives
Forcing Anthropics: Boltzmann Brains
Why I’m Staying On Bloggingheads.tv
An idea: Sticking Point Learning
FHI postdoc at Oxford
Outlawing Anthropics: An Updateless Dilemma
Let Them Debate College Students
Pittsburgh Meetup: Saturday 9/12, 6:30PM, CMU
The Lifespan Dilemma
Formalizing informal logic
Timeless Identity Crisis
The New Nostradamus
Formalizing reflective inconsistency
Beware of WEIRD psychological samples
The Absent-Minded Driver
What is the Singularity Summit?
Sociosexual Orientation Inventory, or failing to perform basic sanity checks
Quantum Russian Roulette
MWI, weird quantum experiments and future-directed continuity of conscious experience
Minneapolis Meetup: Survey of interest
Hypothetical Paradoxes
Reason as memetic immune disorder
How to use SMILE to solve Bayes Nets
The Finale of the Ultimate Meta Mega Crossover
Ethics as a black box function
Avoiding doomsday: a “proof” of the self-indication assumption
Anthropic reasoning and correlated decision making
Boredom vs. Scope Insensitivity
Minneapolis Meetup, This Saturday (26th) at 3:00 PM, University of Minnesota
The utility curve of the human population
Solutions to Political Problems As Counterfactuals
Non-Malthusian Scenarios
Correlated decision making: a complete theory
The Scylla of Error and the Charybdis of Paralysis
The Anthropic Trilemma
Your Most Valuable Skill
Privileging the Hypothesis
Why Many-Worlds Is Not The Rationally Favored Interpretation
Intuitive differences: when to agree to disagree
NY-area OB/LW meetup Saturday 10/3 7 PM
Regular NYC Meetups
Open Thread: October 2009
Why Don’t We Apply What We Know About Twins to Everybody Else?
Are you a Democrat singletonian, or a Republican singletonian?
Scott Aaronson on Born Probabilities
‘oy, girls on lw, want to get together some time?’
When Willpower Attacks
Dying Outside
Don’t Think Too Hard.
The Presumptuous Philosopher’s Presumptuous Friend
The First Step is to Admit That You Have a Problem
Let them eat cake: Interpersonal Problems vs Tasks
New Haven/​Yale Less Wrong Meetup: 5 pm, Monday October 12
Boston Area Less Wrong Meetup: 2 pm Sunday October 11th
LW Meetup Google Calendar
I’m Not Saying People Are Stupid
How to get that Friendly Singularity: a minority view
The Argument from Witness Testimony
What Program Are You?
Do the ‘unlucky’ systematically underestimate high-variance strategies?
Anticipation vs. Faith: At What Cost Rationality?
The power of information?
Quantifying ethicality of human actions
BHTV: Eliezer Yudkowsky and Andrew Gelman
We’re in danger. I must tell the others...
PredictionBook.com—Track your calibration
The Shadow Question
Information theory and FOOM
Waterloo, ON, Canada Meetup: 6pm Sun Oct 18 ’09!
How to think like a quantum monadologist
Localized theories and conditional complexity
Applying Double Standards to “Divisive” Ideas
Near and far skills
Shortness is now a treatable condition
Lore Sjoberg’s Life-Hacking FAQK
Why the beliefs/​values dichotomy?
Rationality Quotes: October 2009
The continued misuse of the Prisoner’s Dilemma
Better thinking through experiential games
Extreme risks: when not to use expected utility
Pound of Feathers, Pound of Gold
Arrow’s Theorem is a Lie
The Value of Nature and Old Books
Circular Altruism vs. Personal Preference
Computer bugs and evolution
Doing your good deed for the day
Expected utility without the independence axiom
Post retracted: If you follow expected utility, expect to be money-pumped
A Less Wrong Q&A with Eliezer (Step 1: The Proposition)
David Deutsch: A new way to explain explanation
Less Wrong /​ Overcoming Bias meet-up groups
Our House, My Rules
Open Thread: November 2009
Re-understanding Robin Hanson’s “Pre-Rationality”
Rolf Nelson’s “The Rational Entrepreneur”
Money pumping: the axiomatic approach
Light Arts
News: Improbable Coincidence Slows LHC Repairs
Bay area LW meet-up
All hail the Lisbon Treaty! Or is that “hate”? Or just “huh”?
Hamster in Tutu Shuts Down Large Hadron Collider
The Danger of Stories
Practical rationality in surveys
Reflections on Pre-Rationality
Rationality advice from Terry Tao
Restraint Bias
What makes you YOU? For non-deists only.
Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions
Test Your Calibration!
Anti-Akrasia Technique: Structured Procrastination
Boston meetup Nov 15 (and others)
Consequences of arbitrage: expected cash
Auckland meet up Saturday Nov 28th
The Academic Epistemology Cross Section: Who Cares More About Status?
BHTV: Yudkowsky /​ Robert Greene
Why (and why not) Bayesian Updating?
Efficient prestige hypothesis
A Less Wrong singularity article?
Request For Article: Many-Worlds Quantum Computing
The One That Isn’t There
Calibration for continuous quantities
Friedman on Utility
Rational lies
In conclusion: in the land beyond money pumps lie extreme events
How to test your mental performance at the moment?
Agree, Retort, or Ignore? A Post From the Future
Contrarianism and reference class forecasting
Getting Feedback by Restricting Content
Rooting Hard for Overpriced M&Ms
A Nightmare for Eliezer
Rationality Quotes November 2009
Morality and International Humanitarian Law
Action vs. inaction
The Moral Status of Independent Identical Copies
Call for new SIAI Visiting Fellows, on a rolling basis
Open Thread: December 2009
The Difference Between Utility and Utility
11 core rationalist skills
Help Roko become a better rationalist!
Intuitive supergoal uncertainty
Frequentist Statistics are Frequently Subjective
Arbitrage of prediction markets
Parapsychology: the control group for science
Science—Idealistic Versus Signaling
You Be the Jury: Survey on a Current Event
Probability Space & Aumann Agreement
What Are Probabilities, Anyway?
The persuasive power of false confessions
A question of rationality
The Amanda Knox Test: How an Hour on the Internet Beats a Year in the Courtroom
Against picking up pennies
Previous Post Revised
Man-with-a-hammer syndrome
Rebasing Ethics
Getting Over Dust Theory
Philadelphia LessWrong Meetup, December 16th
An account of what I believe to be inconsistent behavior on the part of our editor
December 2009 Meta Thread
Reacting to Inadequate Data
The Contrarian Status Catch-22
Any sufficiently advanced wisdom is indistinguishable from bullshit
Fundamentally Flawed, or Fast and Frugal?
Sufficiently Advanced Sanity
Mandating Information Disclosure vs. Banning Deceptive Contract Terms
If reason told you to jump off a cliff, would you do it?
The Correct Contrarian Cluster
Karma Changes
lessmeta
The 9/​11 Meta-Truther Conspiracy Theory
Two Truths and a Lie
On the Power of Intelligence and Rationality
Are these cognitive biases, biases?
Positive-affect-day-Schelling-point-mas Meetup
Playing the Meta-game
A Master-Slave Model of Human Preferences
That other kind of status
Singularity Institute $100K Challenge Grant /​ 2009 Donations Reminder
Boksops—Ancient Superintelligence?
New Year’s Resolutions Thread
New Year’s Predictions Thread
End of 2009 articles
