How can people write good LW articles?

A comment by AnnaSalamon on her recent article:

> > good intellectual content
>
> Yes. I wonder if there are somehow spreadable habits of thinking (or of “reading while digesting/synthesizing/blog posting”, or …) that could themselves be written up, in order to create more ability from more folks to add good content.
>
> Probably too meta / too clever an idea, but may be worth some individual brainstorms?

I wouldn’t presume to write “How To Write Good LessWrong Articles”, but perhaps I’m up to the task of starting a thread on it.

To the point: feel encouraged to skip my thoughts and comment with your own ideas.

The thoughts I ended up writing are, perhaps, more of an argument that it’s still possible to write good new articles, with only a little on how to do so:

Several people have suggested to me that perhaps the reason LessWrong has gone mostly silent these days is that there’s only so much to be said on the subject of rationality, and the important things have been thoroughly covered. I think this is easily seen to be false if you go and look at the mountain of literature related to subjects in the Sequences. There is a lot left to be sifted through, synthesized, and explained clearly. Really, there are a lot of things which have been dealt with only in a fairly shallow way on LessWrong and could be given a more thorough treatment. A reasonable algorithm is to dive into academic papers on a subject of interest and write summaries of what you find. I expect there are a lot of interesting things to be uncovered in the existing literature on cognitive biases, economics, game theory, mechanism design, artificial intelligence, algorithms, operations research, public policy, and so on—and that this community would have an interesting spin on those things.

Moreover, I think that, simply put, rationality isn’t solved. Perhaps you can read a bunch of stuff on here and think that all the answers have been laid out—you form rational beliefs in accord with the laws of probability theory, and make rational decisions by choosing the policy with maximum expected utility; what else is there to know? Or maybe you admit that there are some holes in that story, like the details of TDT vs. UDT and the question of logical uncertainty, but feel you can’t do anything meaningful about them. To such an attitude, I would say: do you know how to put it all into practice? Do you know how to explain it to other people clearly, succinctly, and convincingly? If you try to spell it all out, are there any holes in your understanding? If so, are you deferring to the understanding of the group, or to an illusion of group understanding which doesn’t really exist? If something is not quite clear to you, there’s a decent chance it’s not quite clear to a lot of people; don’t make the mistake of thinking everyone understands but you. And don’t make the mistake of thinking you understand something you haven’t tried to explain from scratch.
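(To make the textbook picture concrete: here is a minimal sketch, with made-up numbers of my own, of what “choose the policy with maximum expected utility” cashes out to computationally.)

```python
# A toy illustration (mine, not from any LessWrong post) of the textbook
# decision rule: weight each outcome's utility by its probability, then
# pick the action whose expected utility is highest.

actions = {
    # action: list of (probability, utility) pairs over possible outcomes
    "carry umbrella": [(0.3, 5), (0.7, 8)],    # rain / no rain
    "leave umbrella": [(0.3, -10), (0.7, 10)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # "carry umbrella": EU of 7.1 beats 4.0
```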

I’d encourage a certain kind of pluralistic view of rationality. We don’t have one big equation explaining what a rational agent would look like—there are some good candidates for such an equation, but they have caveats such as requiring unrealistic processing power, or dropping anvils on their own heads if offered $10 to do so. The project of specifying one big algorithm—one unifying decision theory—is a worthy one, and such concepts can organize our thinking. But what if we thought of practical rationality as consisting more of a big collection of useful algorithms? I’m thinking along the lines of the book Algorithms to Live By, which gives dozens of algorithms applying to different aspects of life. Like decision theory, each such algorithm gives a kind of “rational principle” which we can attempt to follow—to the extent that it applies to our real-life situation. In theory, every one of them would follow from decision theory (or else would do worse than a decision-theoretic calculation). But as finite beings, we can’t work it all out from decision theory alone—and anyway, as I’ve been harping on, upon closer inspection decision theory itself is just a ragtag collection of proposed algorithms. So we could take a more open-ended view of rationality as an attempt to collect useful algorithms, rather than as a project that could be finished.
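As a concrete instance of the “collection of algorithms” view: the first algorithm in Algorithms to Live By is optimal stopping, the 37% rule for the secretary problem. Here is a minimal sketch of it (my own illustration, not the book’s code; the simulation parameters are arbitrary):

```python
import random

def secretary(candidates, look_fraction=0.37):
    """Skip the first ~37% of candidates, then take the first one
    better than everything seen during the look phase."""
    cutoff = max(1, int(len(candidates) * look_fraction))
    best_seen = max(candidates[:cutoff])
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c
    return candidates[-1]  # nobody beat the benchmark; stuck with the last

# Empirically, this rule picks the single best candidate about 37% of the
# time, matching the classic 1/e result from the optimal-stopping literature.
trials = 10_000
wins = 0
for _ in range(trials):
    pool = random.sample(range(1_000_000), 100)
    if secretary(pool) == max(pool):
        wins += 1
print(wins / trials)  # roughly 0.37
```

Each algorithm like this is a “rational principle” you can actually run, without first settling the one true decision theory.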

A second, more introspective way of writing LessWrong articles (the first being “dive into the literature”), which I think has a good track record: take a close look at something you see happening in your life or in the world, try to make a model of it, and try to explain it at a more algorithmic level. I’m thinking of posts like Intellectual Hipsters and Meta-Contrarianism and Slaves to Fashion Signalling.