Rationality: Common Interest of Many Causes

It is a not-so-hidden agenda of this site, Less Wrong, that there are many causes which benefit from the spread of rationality—because it takes a little more rationality than usual to see their case, as a supporter, or even just a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization—where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts. The Institute Which May Not Be Named was merely an unusually extreme case of this, wherein it got to the point that after years of bogging down I threw up my hands and explicitly recursed on the job of creating rationalists.

But of course, not all the rationalists I create will be interested in my own project—and that’s fine. You can’t capture all the value you create, and trying can have poor side effects.

If the supporters of other causes are enlightened enough to think similarly...

Then all the causes which benefit from spreading rationality, can, perhaps, have something in the way of standardized material to which to point their supporters—a common task, centralized to save effort—and think of themselves as spreading a little rationality on the side. They won’t capture all the value they create. And that’s fine. They’ll capture some of the value others create. Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.

But this requires—I know I’m repeating myself here, but it’s important—that you be willing not to capture all the value you create. It requires that, in the course of talking about rationality, you maintain an ability to temporarily shut up about your own cause even though it is the best cause ever. It requires that you not regard those other causes, and that they not regard you, as competing for a limited supply of rationalists with a limited capacity for support, but rather as creating more rationalists and increasing their capacity for support. You only reap some of your own efforts, but you reap some of others’ efforts as well.

If you and they don’t agree on everything—especially priorities—you have to be willing to agree to shut up about the disagreement. (Except possibly in specialized venues, out of the way of the mainstream discourse, where such disagreements are explicitly prosecuted.)

A certain person who was taking over as the president of a certain organization once pointed out that the organization had not enjoyed much luck with its message of “This is the best thing you can do”, as compared to e.g. the X-Prize Foundation’s tremendous success at conveying to rich individuals “Here is a cool thing you can do.”

This is one of those insights where you blink incredulously and then grasp how much sense it makes. The human brain can’t grasp large stakes, people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics. Saying “This is the best thing” doesn’t add much motivation beyond “This is a cool thing”. It just establishes a much higher burden of proof. And it invites invidious, motivation-sapping comparison to all the other good things you know of (perhaps threatening to diminish moral satisfaction already purchased).

If we’re operating under the assumption that everyone by default is an altruistic akrasic (someone who wishes they could choose to do more)—or at least, that most potential supporters of interest fit this description—then fighting it out over which cause is the best to support may have the effect of decreasing the overall supply of altruism.

“But,” you say, “dollars are fungible; a dollar you use for one thing indeed cannot be used for anything else!” To which I reply: But human beings really aren’t expected utility maximizers, as cognitive systems. Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances, people want to spread their donations around as an act of mental accounting to minimize the regret if a single cause fails, and telling someone about an additional cause may increase the total amount they’re willing to help.

There are, of course, limits to this principle of benign tolerance. If someone has a project to help stray puppies get warm homes, then it’s probably best to regard them as trying to exploit bugs in human psychology for their personal gain, rather than as pursuing a worthy sub-task of the great common Neo-Enlightenment project of human progress.

But to the extent that something really is a task you would wish to see done on behalf of humanity… then invidious comparisons of that project to Your-Favorite-Project may not help your own project as much as you might think. We may need to learn to say, by habit and in nearly all forums, “Here is a cool rationalist project”, not, “Mine alone is the project with the highest return in expected utilons per marginal dollar.” If someone cold-blooded enough to maximize the expected utility of fungible money, without regard to emotional side effects, explicitly asks, we could perhaps steer them to a specialized subforum where anyone willing to make the claim of top priority fights it out. Though if all goes well, those projects with a strong claim to this kind of underservedness will get more investment, their marginal returns will go down, and the winner of the competing claims will no longer be clear.

If there are many rationalist projects that benefit from raising the sanity waterline, then their mutual tolerance and common investment in spreading rationality could conceivably exhibit a commons problem. But this doesn’t seem too hard to deal with: if there’s a group that’s not willing to share the rationalists they create, or to mention to them that other Neo-Enlightenment projects might exist, then any common, centralized rationalist resources could remove the mention of their project as a cool thing to do.

Though all this is an idealistic and future-facing thought, the benefit—for all of us—could be finding some important things we’re missing right now. So many rationalist projects have supporters who are few and far-flung; if we could all identify as elements of the Common Project of human progress, the Neo-Enlightenment, there would be a substantially higher probability of finding ten of us in any given city. Right now, a lot of these projects are just a little lonely for their supporters. Rationality may not be the most important thing in the world—that, of course, is the thing that we protect—but it is a cool thing that more of us have in common. We might gain much from identifying ourselves also as rationalists.