LW was started to help altruists

The following excerpt from a recent post, Recursively Self-Improving Human Intelligence, suggests to me that it is time for a reminder of the reason LW was started.

“[C]an anyone think of specific ways in which we can improve ourselves via iterative cycles? Is there a limit to how far we can currently improve our abilities by improving our abilities to improve our abilities? Or are these not the right questions; the concept a mere semantic illusion[?]”

These are not the right questions—not because the concept is a semantic illusion, but rather because the questions are a little too selfish. I hope the author of the above words does not mind my saying that. It is the hope of the people who started this site (and my hope) that the readers of LW will eventually turn from the desire to improve their selves to the desire to improve the world. How the world (i.e., human civilization) can recursively self-improve has been extensively discussed on LW.

Eliezer started devoting a significant portion of his time and energy to non-selfish pursuits when he was still a teenager, and in the 12 years since then, he has definitely spent more of his time and energy improving the world than improving his self (where “self” is defined to include his income, status, access to important people, and other elements of his situation). About 3 years ago, when she was 28 or 29, Anna Salamon started spending most of her waking hours trying to improve the world. Both will almost certainly devote the majority of the rest of their lives to altruistic goals.

Self-improvement cannot be ignored or neglected even by pure altruists, because the vast majority of people are not rational enough to cooperate with an Eliezer or an Anna without just slowing them down, and not rational enough to avoid catastrophic mistakes if they tried, without supervision, to wield the most potent methods for improving the world. In other words, self-improvement cannot be ignored because, now that we have modern science and technology, it takes more rationality than most people have just to be able to tell good from evil, where “good” is defined as the actions that actually improve the world.

One of the main reasons Eliezer started LW was to increase the rationality of altruists and of people who will become altruists: in other words, of people committed to improving the world. (The other main reason was recruitment for Eliezer’s altruistic FAI project and altruistic organization.) If the only people whose rationality they could hope to increase through LW were completely selfish, Eliezer and Anna would probably have put a lot less time and energy into posting rationality clues on LW and a lot more into other altruistic plans.

Most altruists who are sufficiently strategic about their altruism come to believe that improving the effectiveness of other altruists is an extremely potent way to improve the world. Anna, for example, spends vastly more of her time and energy improving the rationality of other altruists than she spends improving her own rationality, because that allotment of her resources best advances her altruistic goal of improving the world. Even the staff of the Singularity Institute who do not have Anna’s teaching and helping skills, and who consequently specialize in math, science, and computers, spend a significant fraction of their resources trying to improve the rationality of other altruists.

In summary, although no one (that I know of) is opposed to self-improvement’s being the focus of most of the posts on LW and no one is opposed to non-altruists’ using the site for self-improvement, this site was founded in the hope of increasing the rationality of altruists.