Review: LessWrong Best of 2018 – Epistemology

Cross-posted from Putanumonit. Some of this post is relevant mostly to readers of my blog who aren’t LessWrongers, but you may still be interested in my general thoughts about the essays, the book as an artifact, and the state of the community.


Is there a better way to bid goodbye to 2020 than with a book set of the best Rationalist writing of 2018? I wouldn’t know — Ben Pace, who compiled and edited the set, sent me a review copy and so I spent Christmas day reading the Epistemology entry. So this post is a review, and of more than just the books.

A great thing you’ll notice right away about the books is that they smell exactly like Wiz, the Israeli video game magazine from the 90s that was the joy of my middle school years. A not-so-great thing about the books is that they’re small. The essays are printed in a very small font and the quotes within each essay are printed, for some reason, in an even smaller font. There are rumors that inside the quotes the secrets of the universe are rendered in the tiniest font of all, but I lack the visual acuity to discern if that is the case.

The book set looks almost comical next to the hardcover SlateStarCodex collection on my shelf:

Ironically, this juxtaposition describes the state of the Rationality community when I discovered it in early 2014. That year was Scott Alexander’s unassailable annus mirabilis. In the span of 12 months he taught us about outgroups and the gray tribe, Moloch, fashions, layers and countersignaling, words and categories, toxoplasma, the psychology of EA and of social justice, drugs, other drugs, better drugs, scientific validity, and whale cancer.

The same period for LessWrong is described by Ben Pace in the introduction to the book set as “a dark age from 2014-2017, with contributions declining and the community dispersing”. This led to the LessWrong 2.0 team forming in 2018, and, as the book set can attest, ushering in a true renaissance of rationality writing and intellectual progress.


The Epistemology tome contains the following ten essays, which I’ll refer to by the bolded part:

I like to think of rationality in general and rationalist epistemology in particular as comprising three interdependent domains:

  1. Epistemology for monkeys, i.e. what happens in an individual’s mushy evolved brain as they’re trying (or not) to form true beliefs.

  2. Epistemology for algorithms, i.e. Bayes, logic, computation, and the rest of the rigorous mathematical foundation of truth-seeking AI.

  3. Epistemology for groups, i.e. how we arrive, or fail to arrive, at truth together.

Since humans are algorithmic monkeys in groups, all three branches are relevant to epistemology for humans. The essays, although selected by simple voting and without an eye for comprehensiveness, cover all three domains.
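To make domain 2 concrete: its algorithmic core is Bayes’ rule, updating a prior belief on evidence. A toy sketch (the function name and the numbers are mine, purely illustrative):

```python
def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of a hypothesis after observing one piece of evidence,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# A 1% prior, updated on evidence that shows up 90% of the time when the
# hypothesis is true and 20% of the time when it is false:
posterior = bayes_update(0.01, 0.90, 0.20)  # ≈ 0.043
```

The point of domain 1 is that the monkey brain runs this update badly; the point of domain 3 is that groups of monkeys can run it better, or worse still.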

Local Validity is the underwhelming essay of the bunch, and feels like a throwback to 2008 when Eliezer himself was merely laying the groundwork for capital-R Rationality. It draws a parallel between valid steps in a mathematical proof, non-fallacious arguments, and a civilization’s laws. It points out that the “game”, whether it’s deriving mathematical proofs, reaching true conclusions, or maintaining a functioning civilization, requires everyone to abide by rules of validity.

This parallel rings true, but doesn’t seem very novel or productive by itself. Did citizens of erstwhile civilizations follow rules because they understood the game theory of defection equilibria? If they didn’t, would this essay have convinced them? The juice is in understanding where and why humans stray from validity. That’s where Scott comes in.

Varieties could function as the organizational essay for the entire tome. It addresses Local Validity by pointing out that people usually play different games from an honest debate, games like manipulating the Overton window of acceptable debate, manipulating the norms of debate in particular groups, or vying for status supremacy directly.

Scott’s essay drives home the point that actual truth-seeking debate is so rare in many spaces that people often forget it’s an actual thing. Some of the best Rationalist writing of 2020 is about statements-that-aren’t-truth-claims and fights-that-aren’t-debates, such as the writings about simulacra. Rationalists often need a reminder that we’re quite unusual as a group in finding ourselves regularly venturing to the top of the pyramid in search of truth.

Sketch of Good slots nicely into the pyramid with the idea that collaborative truth-seekers should share the gears of their models, and not just the models’ output. Aside from allowing you to make better updates from a conversation with one person, this technique also improves your ability to update on future evidence and to integrate the opinions of more people. It’s an important complement to the technique of double crux.

I will try to communicate some of my own model with regard to a pressing new COVID-related development in this post.

At the top of Scott’s pyramid are generators of disagreement that cannot be resolved with a simple double crux, like heuristics built up of countless bits of evidence and aesthetic disagreements. This is where Nameless comes in.

Sarah explains that style and aesthetics are not arbitrary matters of individual taste, but compress a huge amount of information about a culture and its norms. When you walk into a Sweetgreen cafe, every element of design tells you not just what sort of food you are about to eat but also what sort of people you will be eating it next to and what these people’s attitudes are about a variety of non-salad topics.

Aesthetics are initially manufactured by a creative class that is almost always aligned with the political left. Negating the influence of aesthetics (as some conservatives do), dismissing it as a communitarian ploy (libertarians), or remaining simply blind to it (rationalists) is a bad move that cedes this important ground to a tribe you may otherwise dislike.

Nameless is, in my opinion, the most impressive essay in the book. It charges boldly into new territory and is dense with insight. I hope that rationalists continue to build on this idea, especially as I am personally becoming more and more convinced that you can’t really argue people into things like rationality, transhumanism, polyamory, Effective Altruism, or anything else I hold important. You have to inspire and seduce them, and that requires understanding beauty in addition to truth.

Loudest Alarm is brief and novel. It rang very true to my wife (who is both demure and constantly worried about imposing on people) but less so to me (my friends and I tend to agree on my main weaknesses).

Toolbox and Law is another essay that will be most useful to people who are new to rationality and confused about the proper role of Bayesian theory in epistemological practice. It seems to have been written as a reply to David Chapman, part of Eliezer’s tireless pwning of meta-, post-, and other too-cool-to-call-themselves-rationalists. Since a big part of rationality as a brand is supporting Eliezer’s caliphate, it is only fitting that he leads the war for the brand’s status. I am quite happy to throw my own memes into the fray when called for.

New Technical is a bit too technical for me, so at the book’s recommendation I read An Untrollable Mathematician Illustrated instead and got a cool lesson on the work done to bring together probability theory and logical induction. I’m in this weird spot where I know more math than the vast majority of people but vastly less math than, e.g., the researchers at MIRI. And so when I read posts about MIRI’s research and the mathematics of AI alignment I’m either bored or hopelessly lost within two paragraphs.


The star of Epistemology is alkjash, with three essays (the remaining two in the sequence are also worthwhile) making the cut. I was extremely excited about Babble and Prune when it came out and ran a meetup about it, and it is now one of my main models for thinking about creativity, itself an underexplored topic on LessWrong. His suggestion of leaving ambiguity in the text (as the Bible does) to let the readers prune their own meaning informs my approach to Twitter, although I’m still working on bringing the same Biblical spirit to Putanumonit.
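Stripped to a toy, the Babble and Prune model is a two-stage generate-and-filter loop: a cheap, noisy generator proposes many candidates, and a strict filter keeps the few that pass. A minimal sketch (the word list and the filter are mine, purely illustrative):

```python
import random

def babble(vocabulary, n=50):
    """Babble: cheaply generate many low-quality candidate phrases."""
    return [" ".join(random.sample(vocabulary, 3)) for _ in range(n)]

def prune(candidates, keep):
    """Prune: apply a strict filter, keeping only candidates that pass."""
    return [c for c in candidates if keep(c)]

vocabulary = ["truth", "map", "territory", "model", "error", "update"]
ideas = prune(babble(vocabulary), keep=lambda phrase: "error" in phrase)
```

Raising creativity, on this model, means either loosening the babble (more and weirder candidates) or, as alkjash suggests for readers of ambiguous text, adjusting the prune.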

This model is closely related to predictive processing (although PP being my recent all-encompassing obsession, I’m liable to think that everything is closely related to it). Babble and Prune mirrors the core structure of hierarchical prediction in which predictions are propagated downward from the abstract and conscious levels to the detailed and subconscious, and only errors (prediction mismatches) propagate back up.

Alkjash connects the model to AI and Google’s algorithms as if anticipating the breakthrough in babbling that is GPT-3. Of course, I was inspired to connect GPT-3 to predictive processing and the future of AI as well.

This is straying from Epistemology a bit, but I do think that it will be an enormously fruitful project to recontextualize rationality through the lens of predictive processing. The main reason I haven’t started on this project yet is how huge it is, expanding vastly before me with every step I take in understanding. It may require hundreds of hours of my life to do justice to (perhaps I should run a Kickstarter to gauge interest). I’ll probably start by building on this action-informed rethinking of confirmation bias and see how it goes.


In general, the essays are all worth reading. However, I’m not entirely sure they make sense as a (tiny) book. The constraint of picking essays from a single year and going by public vote wasn’t designed to create a collection that coheres. I would much rather have had essays collected by topic and across years, which may happen eventually with the introduction of concept tags on LessWrong. I wish that Ben had edited a volume that made sense to him, instead of carefully abstaining from putting his finger on the scales.

But of course, Ben’s job is not to edit paper book collections, it’s to build up LessWrong as an online resource and vibrant community. And that job is done fantastically, including with the project of annual reviews. The 2019 review, which is ongoing now, is getting thousands of people to re-read the best essays and discuss them. The real book was the comments we wrote along the way.

By the way, my post on Rationalist Self-Improvement has been nominated for the 2019 collection but not reviewed yet. Please consider writing a review if that post had an effect on you. It may seem slightly unfair to other nominees to ask for this on my own blog, but I am also significantly handicapping the karma and visibility of all my posts on LessWrong by cross-posting only after hundreds of people have already read them on Putanumonit. If LW were the only place to read my posts, more people would read and review them there, so hopefully this balances out.

Ideally, this book will serve as an encouragement for more people to write on LessWrong. Perhaps I am myself lucky in having found LessWrong during its dark age — I wrote a few bad posts that no one read and, in the absence of a flood of rationalist content to intimidate me, started Putanumonit in 2015.

Now that the dark days are over, you may feel daunted by the quality of writing in the LessWrong collection when thinking about writing yourself. Alkjash reminds you not to set your personal prune to the standards of work that has already been filtered and curated. Unlike the other authors in Epistemology, he wrote his first post only in January 2018 and by the end of one year had already progressed to making contributions worthy of inclusion in books.

And finally, this book should be an encouragement for everyone to read LessWrong. I can log on daily to find new and interesting writing, and I can also log on whenever I want smart people’s opinion on something like the new COVID strain and find detailed discussion of everything from the epidemiology to the biology to the investment impacts. New features are being added all the time, like the recent launch of in-post predictions.

2018-2020 LessWrong is very different, in content and tone and structure, from both the early days of the Sequences and the dark ages of the mid-2010s. But the book is a testament to the fact that this new age is a golden one (in shades of green and gray), and you are very much invited to join it.