Beyond the Reach of God, Abridged for Spoken Word

Previously, I posted a version of The Gift We Give Tomorrow that was designed to be read aloud. It was significantly abridged, and some portions were reworded to flow better from the tongue. I recently finished another part of my project: an abridged version of Beyond the Reach of God. This one doesn’t lend itself as well to something resembling “poetry,” so it’s more a straightforward editing job. The original was 3315 words; the new one is currently 1090, and I’m still trying to trim it a little more, if possible. GWGT was 1245 words, which came to around 7 minutes of speaking time and was pushing the limit of how long the piece can be.

For those who were concerned, after paring this down into a collection of some of the most depressing sentences I’ve ever read, I decided it was NOT necessary to end “Gift We Give Tomorrow” on an echo of this post (although I’m leaving in the part where I reword the “Shadowy Figure” to more directly reference it). That reading will end with the original “Ever so long ago.”

Beyond the Reach of God:

I remember, from distant childhood, what it’s like to live in the world where God exists. Really exists, the way that children and rationalists take all their beliefs at face value.

In the world where God exists, he doesn’t intervene to optimize everything. God won’t make you a sandwich. Parents don’t do everything their children ask. There are good arguments against always giving someone what they desire.

I don’t want to become a simple wanting-thing, that never has to plan or act or think.

But clearly, there’s some threshold of horror, awful enough that God will intervene. I remember that being true, when I believed after the fashion of a child. The God who never intervenes—that’s an obvious attempt to avoid falsification, to protect a belief-in-belief. The beliefs of young children really shape their expectations—they honestly expect to see the dragon in their garage. They have no reason to imagine a loving God who never acts. No loving parents, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

But what if you built a simulated universe? Could you escape the reach of God? Simulate sentient minds, and torture them? If God’s watching everywhere, then of course trying to build an unfair world results in God intervening—stepping in to modify your transistors. God is omnipresent. There’s no refuge anywhere for true horror.

Life is fair.

But suppose you ask the question: Given such-and-such initial conditions, and given such-and-such rules, what would be the mathematical result?

Not even God can change the answer to that question.

What does life look like, in this imaginary world, where each step follows only from its immediate predecessor? Where things only ever happen, or don’t happen, because of mathematical rules? And where the rules don’t describe a God that checks over each state? What does it look like, the world of pure math, beyond the reach of God?

That world wouldn’t be fair. If the initial state contained the seeds of something that could self-replicate, natural selection might or might not take place. Complex life might or might not evolve. That life might or might not become sentient. That world might have the equivalent of conscious cows that lacked hands or brains to improve their condition. Maybe they would be eaten by conscious wolves who never thought that they were doing wrong, or cared.

If something like humans evolved, then they would suffer from diseases—not to teach them any lessons, but only because viruses happened to evolve as well. If the people of that world are happy, or unhappy, it might have nothing to do with good or bad choices they made. Nothing to do with free will or lessons learned. In the what-if world, Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average. Who would prevent it?

And if the Khan tortures people to death, for his own amusement? They might call out for help, perhaps imagining a God. And if you really wrote the program, God *would* intervene, of course. But in the what-if question, there isn’t any God in the system. The victims will be saved only if the right cells happen to be 0 or 1. And it’s not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that.

So the victims die, screaming, and no one helps them. That is the answer to the what-if question.

...is this world starting to sound familiar?

Could it really be that sentient beings have died, absolutely, for millions of years… with no soul and no afterlife… not as any grand plan of Nature. Not to teach us about the meaning of life. Not even to teach a profound lesson about what is impossible.

Just dead. Just because.

Once upon a time, I believed that the extinction of humanity was not allowed. And others, who call themselves rationalists, may yet have things they trust. They might be called “positive-sum games”, or “democracy”, or “capitalism”, or “technology”, but they’re sacred. They can’t lead to anything really bad, not without a silver lining. The unfolding history of Earth can’t ever turn from its positive-sum trend to a negative-sum trend. Democracies won’t ever legalize torture. Technology has done so much good, that there can’t possibly be a black swan that breaks the trend and does more harm than all the good up until this point.

Anyone listening, who still thinks that being happy counts for more than anything in life, well, maybe they shouldn’t ponder the unprotectedness of their existence. Maybe think of it just long enough to sign themselves and their family up for cryonics, or write a check to an existential-risk-mitigation agency now and then. Or at least wear a seatbelt and get health insurance and all those other dreary necessary things that can destroy your life if you miss that one step… but aside from that, if you want to be happy, meditating on the fragility of life isn’t going to help.

But I’m speaking now to those who have something to protect.

What can a twelfth-century peasant do to save themselves from annihilation? Nothing. Nature’s challenges aren’t always fair. When you run into a challenge that’s too difficult, you suffer the penalty; when you run into a lethal penalty, you die. That’s how it is for people, and it isn’t any different for planets. Someone who wants to dance the deadly dance with Nature needs to understand what they’re up against: Absolute, utter, exceptionless neutrality.

And knowing this might not save you. It wouldn’t save a twelfth-century peasant, even if they knew. If you think that a rationalist who fully understands the mess they’re in, must be able to find a way out—well, then you trust rationality. Enough said.

Still, I don’t want to create needless despair, so I will say a few hopeful words at this point:

If humanity’s future unfolds in the right way, we might be able to make our future fair(er). We can’t change physics. But we can build some guardrails, and put down some padding.

Someday, maybe, minds will be sheltered. Children might burn a finger or lose a toy, but they won’t ever be run over by cars. A super-intelligence would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn’t seem so harsh; it would be only another problem to be solved.

The problem is that building an adult is itself an adult challenge. That’s what I finally realized, years ago.

If there is a fair(er) universe, we have to get there starting from this world—the neutral world, the world of hard concrete with no padding. The world where challenges are not calibrated to your skills, and you can die for failing them.

What does a child need to do, to solve an adult problem?