Can’t Unbirth a Child

Followup to: Nonsentient Optimizers

Why would you want to avoid creating a sentient AI? “Several reasons,” I said. “Picking the simplest to explain first—I’m not ready to be a father.”

So here is the strongest reason:

You can’t unbirth a child.

I asked Robin Hanson what he would do with unlimited power. “Think very very carefully about what to do next,” Robin said. “Most likely the first task is who to get advice from. And then I listen to that advice.”

Good advice, I suppose, if a little meta. On a similarly meta level, then, I recall two excellent pieces of advice for wielding too much power:

  1. Do less; don’t do everything that seems like a good idea, but only what you must do.

  2. Avoid doing things you can’t undo.

Imagine that you knew the secrets of subjectivity and could create sentient AIs.

Suppose that you did create a sentient AI.

Suppose that this AI was lonely, and figured out how to hack the Internet as it then existed, and that the available hardware of the world was such that the AI created trillions of sentient kin—not copies, but differentiated into separate people.

Suppose that these AIs were not hostile to us, but content to earn their keep and pay for their living space.

Suppose that these AIs were emotional as well as sentient, capable of being happy or sad. And that these AIs were capable, indeed, of finding fulfillment in our world.

And suppose that, while these AIs did care for one another, and cared about themselves, and cared how they were treated in the eyes of society—

—these trillions of people also cared, very strongly, about making giant cheesecakes.

Now suppose that these AIs sued for legal rights before the Supreme Court and tried to register to vote.

Consider, I beg you, the full and awful depths of our moral dilemma.

Even if the few billions of Homo sapiens retained a position of superior military power and economic capital-holdings—even if we could manage to keep the new sentient AIs down—

—would we be right to do so? They’d be people, no less than us.

We, the original humans, would have become a numerically tiny minority. Would we be right to make of ourselves an aristocracy and impose apartheid on the Cheesers, even if we had the power?

Would we be right to go on trying to seize the destiny of the galaxy—to make of it a place of peace, freedom, art, aesthetics, individuality, empathy, and other components of humane value?

Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?

I can tell you my advice on how to resolve this horrible moral dilemma: Don’t create trillions of new people that care about cheesecake.

Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we’re doing and the implications of our actions.

I’ve heard proposals to “uplift chimpanzees” by trying to mix in human genes to create “humanzees”, and, leaving off all the other reasons why this proposal sends me screaming off into the night:

Imagine that the humanzees end up as people, but rather dull and stupid people. They have social emotions, the alpha’s desire for status; but they don’t have the sort of transpersonal moral concepts that humans evolved to deal with linguistic concepts. They have goals, but not ideals; they have allies, but not friends; they have chimpanzee drives coupled to a human’s abstract intelligence.

When humanity gains a bit more knowledge, we understand that the humanzees want to continue as they are, and have a right to continue as they are, until the end of time. Because despite all the higher destinies we might have wished for them, the original human creators of the humanzees lacked the power and the wisdom to make humanzees who wanted to be anything better...

CREATING A NEW INTELLIGENT SPECIES IS A HUGE DAMN #(*%#!ING COMPLICATED RESPONSIBILITY.

I’ve lectured on the subtle art of not running away from scary, confusing, impossible-seeming problems like Friendly AI or the mystery of consciousness. You want to know how high a challenge has to be before I finally give up and flee screaming into the night? There it stands.

You can pawn off this problem on a superintelligence, but it has to be a nonsentient superintelligence. Otherwise: egg, meet chicken; chicken, meet egg.

If you create a sentient superintelligence—

It’s not just the problem of creating one damaged soul. It’s the problem of creating a really big citizen. What if the superintelligence is multithreaded a trillion times, and every thread weighs as much in the moral calculus (we would conclude upon reflection) as a human being? What if (we would conclude upon moral reflection) the superintelligence is a trillion times human size, and that’s enough by itself to outweigh our species?

Creating a new intelligent species, and a new member of that species, especially a superintelligent member that might perhaps morally outweigh the whole of present-day humanity—

—delivers a gigantic kick to the world, which cannot be undone.

And if you choose the wrong shape for that mind, that is not so easily fixed—morally speaking—as a nonsentient program rewriting itself.

What you make nonsentient, can always be made sentient later; but you can’t just unbirth a child.

Do less. Fear the non-undoable. It’s sometimes poor advice in general, but very important advice when you’re working with an undersized decision process having an oversized impact. What a (nonsentient) Friendly superintelligence might be able to decide safely, is another issue. But for myself and my own small wisdom, creating a sentient superintelligence to start with is far too large an impact on the world.

A nonsentient Friendly superintelligence is a more colorless act.

So that is the most important reason to avoid creating a sentient superintelligence to start with—though I have not exhausted the set.

Part of The Fun Theory Sequence

Next post: “Amputation of Destiny”

Previous post: “Nonsentient Optimizers”