The Least Convenient Possible World

Related to: Is That Your True Rejection?

“If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.”

-- Black Belt Bayesian, via Rationality Quotes 13

In yesterday’s post, John Maxwell asked how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of trolley problems:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for lack of a different organ; that is, one person needs a heart transplant, another needs a lung transplant, another needs a kidney transplant, and so on. A traveller walks into the hospital, mentioning how he has no family and no one knows that he’s there. All of his organs seem healthy. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

I don’t want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:

It wouldn’t be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.

On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don’t have to give him very much credit: he completely missed the point and squandered a valuable opportunity to examine the nature of morality.

So I asked him, “In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?”

He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:

1: Pascal’s Wager. Upon being presented with Pascal’s Wager, one of the first things most atheists think of is this:

Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will punish anyone who practices a religion he does not truly believe simply for personal gain. Or perhaps, as the Discordians claim, “Hell is reserved for people who believe in it, and the hottest levels of Hell are reserved for people who believe in it on the principle that they’ll go there if they don’t.”

This is a good argument against Pascal’s Wager, but it isn’t the least convenient possible world. The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn’t value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.

Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal’s Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it’s wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?

2: The God-Shaped Hole. Christians claim there is one in every atheist, keeping him from spiritual fulfillment.

Some commenters on Raising the Sanity Waterline don’t deny the existence of such a hole, if it is interpreted as a desire for purpose or connection to something greater than one’s self. But, they say, science and rationality can fill this hole even better than God can.

What luck! Evolution has by a wild coincidence created us with a big rationality-shaped hole in our brains! Good thing we happen to be rationalists, so we can fill this hole in the best possible way! I don’t know—despite my sarcasm this may even be true. But in the least convenient possible world, Omega comes along and tells you that sorry, the hole is exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy life. Do you head down to the nearest church for a baptism? Or do you admit that even if believing something makes you happier, you still don’t want to believe it unless it’s true?

3: Extreme Altruism. John Maxwell mentions the utilitarian argument for donating almost everything to charity.

Some commenters object that many forms of charity, especially the classic “give to starving African orphans,” are counterproductive, either because they enable dictators or thwart the free market. This is quite true.

But in the least convenient possible world, here comes Omega again and tells you that Charity X has been proven to do exactly what it claims: help the poor without any counterproductive effects. So is your real objection the corruption, or do you just not believe that you’re morally obligated to give everything you own to starving Africans?

You may argue that this citing of convenient facts is at worst a venial sin. If you still get to the correct answer, and you get there by a correct method, what does it matter if that method isn’t the one that actually convinced you?

One easy answer is that it saves you from embarrassment later. If some scientist does a study and finds that people really do have a god-shaped hole that can’t be filled by anything else, no one can come up to you and say “Hey, didn’t you say the reason you didn’t convert to religion was because rationality filled the god-shaped hole better than God did? Well, I have some bad news for you...”

Another easy answer is that your real answer teaches you something about yourself. My friend may have successfully avoided making a distasteful moral judgment, but he didn’t learn anything about morality. My refusal to take the easy way out on the transplant question helped me develop the form of precedent-utilitarianism I use today.

But more than either of these, it matters because it seriously influences where you go next.

Say “I accept the argument that I need to donate almost all my money to poor African countries, but my only objection is that corrupt warlords might get it instead”, and the obvious next step is to see if there’s a poor African country without corrupt warlords (see: Ghana, Botswana, etc.) and donate almost all your money to them. Another acceptable answer would be to donate to another warlord-free charitable cause like the Singularity Institute.

If you just say “Nope, corrupt dictators might get it,” you may go off and spend the money on a new TV. Which is fine, if a new TV is what you really want. But if you’re the sort of person who would have been convinced by John Maxwell’s argument, but you dismissed it by saying “Nope, corrupt dictators,” then you’ve lost an opportunity to change your mind.

So I recommend: limit yourself to responses of the form “I completely reject the entire basis of your argument” or “I accept the basis of your argument, but it doesn’t apply to the real world because of contingent fact X.” If you just say “Yeah, well, contingent fact X!” and walk away, you’ve left yourself too much wiggle room.

In other words: always have a plan for what you would do in the least convenient possible world.