Rationality Reading Group: Part W: Quantified Humanism

This is part of a semi-monthly reading group on Eliezer Yudkowsky’s ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part W: Quantified Humanism (pp. 1453-1514) and Interlude: The Twelve Virtues of Rationality (pp. 1516-1521). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

W. Quantified Humanism

281. Scope Insensitivity - The human brain can’t represent large quantities: an environmental measure that will save 200,000 birds doesn’t conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
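The numbers in the study the essay quotes (Desvousges et al.) make the point vividly: stated willingness to pay to save 2,000, 20,000, or 200,000 birds was roughly $80, $78, and $88. Here is a minimal Python sketch contrasting those stated figures with what linear, scope-sensitive valuation would predict; the comparison itself is my illustration, not from the essay:

```python
# Scope insensitivity: stated willingness to pay (WTP) barely moves
# while the number of birds saved grows by a factor of 100.
# WTP figures are those quoted in the essay (Desvousges et al.);
# the "linear" column is what scope-sensitive valuation would imply
# if $80 for 2,000 birds were taken at face value.

stated_wtp = {2_000: 80, 20_000: 78, 200_000: 88}

base_birds, base_wtp = 2_000, 80
for birds, wtp in stated_wtp.items():
    linear = base_wtp * birds / base_birds
    print(f"{birds:>7,} birds: stated WTP ${wtp:>3}, "
          f"linear scaling would give ${linear:>8,.0f}")
```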

282. One Life Against the World - Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn’t stop after the first saved life: on this view, choosing to save one life when you could have saved two is hard to distinguish from murder.

283. The Allais Paradox - Offered choices between gambles, people reliably make choices that are inconsistent with expected utility theory.

284. Zut Allais! - Eliezer’s second attempt to explain the Allais Paradox, this time drawing motivational background from the heuristics and biases literature on incoherent preferences and the certainty effect.
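For concreteness, the gambles in “The Allais Paradox” are: 1A, $24,000 with certainty; 1B, a 33/34 chance of $27,000; 2A, a 34% chance of $24,000; 2B, a 33% chance of $27,000. Most people pick 1A and 2B, but 2A and 2B are exactly 1A and 1B run with 34% probability, so no consistent assignment of utilities can rank the two pairs oppositely. A small sketch that checks this numerically (the random-search verification is my own illustration, not from the essay):

```python
import random

# Gambles from "The Allais Paradox", as (probability, payoff) pairs.
g1a = [(1.0, 24_000)]
g1b = [(33 / 34, 27_000)]
g2a = [(0.34, 24_000)]   # = gamble 1A, run with 34% probability
g2b = [(0.33, 27_000)]   # = gamble 1B, run with 34% probability

def expected_utility(gamble, u):
    """Expected utility of a gamble under utility assignment u (u(0) = 0)."""
    return sum(p * u[payoff] for p, payoff in gamble)

# Try many increasing utility assignments; the ranking of 1A vs 1B always
# matches the ranking of 2A vs 2B, so preferring 1A *and* 2B is incoherent.
for _ in range(10_000):
    u24 = random.uniform(0.0, 1.0)
    u27 = random.uniform(u24, 1.0)        # utility increases with money
    u = {24_000: u24, 27_000: u27}
    prefers_1a = expected_utility(g1a, u) > expected_utility(g1b, u)
    prefers_2a = expected_utility(g2a, u) > expected_utility(g2b, u)
    assert prefers_1a == prefers_2a       # never fires: 1A>1B iff 2A>2B

print("No consistent utility function prefers both 1A and 2B.")
```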

285. Feeling Moral - Our moral preferences shouldn’t be circular. If policy A is better than B, and B is better than C, and C is better than D, and so on, then policy A really should be better than policy Z.
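The standard argument for why circular preferences are costly is the “money pump”: an agent that prefers A to B, B to C, and C to A, and will pay a small fee for each upgrade, can be traded around the cycle until it is back where it started but strictly poorer. A toy sketch, where the items and the penny fee are hypothetical and purely for illustration:

```python
# Money pump against circular preferences: the agent prefers A over B,
# B over C, and C over A, and will pay a small fee for any upgrade.
# Trading around the cycle returns it to its starting item, minus the fees.

prefers_over = {"A": "B", "B": "C", "C": "A"}  # circular: A > B > C > A
fee_per_trade = 0.01

holding, wealth = "C", 1.00
for _ in range(3):                       # one full trip around the cycle
    upgrade = next(x for x, y in prefers_over.items() if y == holding)
    holding, wealth = upgrade, wealth - fee_per_trade
    print(f"traded up to {holding}, wealth now ${wealth:.2f}")

assert holding == "C"                    # back where it started, $0.03 poorer
```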

286. The “Intuitions” Behind “Utilitarianism” - Our intuitions, the underlying cognitive tricks that we use to build our thoughts, are an indispensable part of our cognition. The problem is that many of those intuitions are incoherent, or are undesirable upon reflection. But if you try to “renormalize” your intuitions, you wind up with what is essentially utilitarianism.

287. Ends Don’t Justify Means (Among Humans) - Humans have evolved adaptations that let them sincerely believe their policy suggestions help the tribe while actually enacting self-serving policies. As a general rule, there are certain things that you should never do, even if you come up with persuasive reasons that they’re good for the tribe.

288. Ethical Injunctions - Understanding more about ethics should make your moral choices stricter, but people usually use a surface-level knowledge of moral reasoning as an excuse to make their moral choices more lenient.

289. Something to Protect - Many people only start to grow as rationalists when they find something they care about more than they care about rationality itself. It takes something really scary to make you override your intuitions with math.

290. When (Not) to Use Probabilities - When you don’t have a numerical procedure to generate probabilities, you’re probably better off using your own evolved abilities to reason in the presence of uncertainty.

291. Newcomb’s Problem and Regret of Rationality - Newcomb’s problem is a famous decision theory problem in which the “rational” move appears to be consistently punished. Concluding that rationality must lose here is the wrong attitude: rationalists should win. If your particular ritual of cognition consistently fails to yield good results, change the ritual.
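Under the conventional payoffs (a transparent box holding $1,000, and an opaque box holding $1,000,000 exactly when the predictor foresaw you taking only the opaque box), the “rationalists should win” point can be put in expected-value terms: one-boxing pulls ahead once the predictor is right more than about 50.05% of the time. A quick sketch, assuming those standard payoff numbers:

```python
# Newcomb's problem: expected dollar value of each strategy as a function
# of the predictor's accuracy p, under the conventional payoffs.
SMALL, BIG = 1_000, 1_000_000

def ev_one_box(p):
    # Predictor right with prob. p -> opaque box is full; wrong -> empty.
    return p * BIG

def ev_two_box(p):
    # Predictor right with prob. p -> opaque box empty, keep $1,000;
    # wrong -> opaque box full, keep both.
    return p * SMALL + (1 - p) * (SMALL + BIG)

for p in (0.5, 0.5005, 0.6, 0.9, 0.99):
    print(f"p={p:.4f}: one-box ${ev_one_box(p):>11,.0f}, "
          f"two-box ${ev_two_box(p):>11,.0f}")
# One-boxing wins once p > 1,001,000 / 2,000,000 = 0.5005.
```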

Interlude: The Twelve Virtues of Rationality


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Beginnings: An Introduction (pp. 1527-1530) and Part X: Yudkowsky’s Coming of Age (pp. 1535-1601). The discussion will go live on Wednesday, 6 April 2016, right here on the discussion forum of LessWrong.