Rationality Reading Group: Part X: Yudkowsky’s Coming of Age

This is part of a semi-monthly reading group on Eliezer Yudkowsky’s ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Beginnings: An Introduction (pp. 1527-1530) and Part X: Yudkowsky’s Coming of Age (pp. 1535-1601). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

Beginnings: An Introduction

X. Yudkowsky’s Coming of Age

292. My Childhood Death Spiral - Wherein Eliezer describes how a history of being rewarded for believing that ‘intelligence is more important than experience or wisdom’ initially led him to dismiss the possibility that most possible smarter-than-human artificial intelligences will cause valueless futures if constructed.

293. My Best and Worst Mistake - When Eliezer went into his death spiral around intelligence, he wound up making a lot of mistakes that later became very useful.

294. Raised in Technophilia - In his youth, it took Eliezer a long time to reach the point where he could seriously consider that the dangers of technology might outweigh its benefits.

295. A Prodigy of Refutation - Eliezer’s skill at refuting other people’s ideas led him to believe that his own (mistaken) ideas must be correct.

296. The Sheer Folly of Callow Youth - Eliezer’s big mistake was taking a mysterious view of morality.

297. That Tiny Note of Discord - Eliezer started to dig himself out of his philosophical hole when he noticed a tiny inconsistency.

298. Fighting a Rearguard Action Against the Truth - When Eliezer started to consider Friendly AI as a contingency plan, he permitted himself a line of retreat. He could then slowly begin to reconsider his metaethical positions and move gradually toward better ideas.

299. My Naturalistic Awakening - Eliezer finally looked back and recognized his mistakes when he conceived of the idea of an optimization process.

300. The Level Above Mine - There are people who have acquired more mastery over various fields than Eliezer has over his.

301. The Magnitude of His Own Folly - Eliezer considers his training as a rationalist to have started the day he realized just how awfully he had screwed up.

302. Beyond the Reach of God - Compare the world in which there is a God, who will intervene at some threshold, against a world in which everything happens as a result of physical laws. Which universe looks more like our own?

303. My Bayesian Enlightenment - The story of how Eliezer Yudkowsky became a Bayesian.


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part Y: Challenging the Difficult (pp. 1605-1647). The discussion will go live on Wednesday, 20 April 2016, right here on the discussion forum of LessWrong.