The Design Space of Minds-In-General

People ask me, “What will Artificial Intelligences be like? What will they do? Tell us your amazing story about the future.”

And lo, I say unto them, “You have asked me a trick question.”

ATP synthase is a molecular machine—one of three known occasions when evolution has invented the freely rotating wheel—which is essentially the same in animal mitochondria, plant chloroplasts, and bacteria. ATP synthase has not changed significantly since the rise of eukaryotic life two billion years ago. It is something we all have in common, thanks to the way that evolution strongly conserves certain genes: once many other genes depend on a gene, a mutation in it will tend to break all the dependencies.

Any two AI designs might be less similar to each other than you are to a petunia.

Asking what “AIs” will do is a trick question because it implies that all AIs form a natural class. Humans do form a natural class, because we all share the same brain architecture. But when you say “Artificial Intelligence”, you are referring to a vastly larger space of possibilities than when you say “human”. When we talk about “AIs”, we are really talking about minds-in-general, or optimization processes in general. Having a word for “AI” is like having a word for everything that isn’t a duck.

Imagine a map of mind design space… this is one of my standard diagrams…


All humans, of course, fit into a tiny little dot—as a sexually reproducing species, we can’t be too different from one another.

This tiny dot belongs to a wider ellipse, the space of transhuman mind designs—things that might be smarter than us, or much smarter than us, but which in some sense would still be people as we understand people.

This transhuman ellipse is within a still wider volume, the space of posthuman minds, which is everything that a transhuman might grow up into.

And then the rest of the sphere is the space of minds-in-general, including possible Artificial Intelligences so odd that they aren’t even posthuman.

But wait—natural selection designs complex artifacts and selects among complex strategies. So where is natural selection on this map?

So this entire map really floats in a still vaster space, the space of optimization processes. At the bottom of this vaster space, below even humans, is natural selection as it first began in some tidal pool: mutate, replicate, and sometimes die, no sex.

Are there any powerful optimization processes, with strength comparable to a human civilization or even a self-improving AI, which we would not recognize as minds? Arguably Marcus Hutter’s AIXI should go in this category: for a mind of infinite power, it’s awfully stupid—poor thing can’t even recognize itself in a mirror. But that is a topic for another time.

My primary moral is to resist the temptation to generalize over all of mind design space.

If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.

Conversely, every existential generalization—“there exists at least one mind such that X”—has two to the trillionth power chances to be true.
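The asymmetry between universal and existential generalizations can be sketched as a toy calculation (this example is mine, not from the essay, and treats a “mind” as nothing more than a bit-string specification—four bits standing in for the trillion):

```python
# Toy sketch: universal claims over a design space are fragile,
# existential claims are cheap. A "mind" here is just a bit-string
# specification of length n_bits.
from itertools import product

n_bits = 4  # tiny stand-in for the essay's trillion bits
all_minds = list(product([0, 1], repeat=n_bits))

# There are 2**n_bits distinct specifications in the bounded subspace.
assert len(all_minds) == 2 ** n_bits  # 16 for n_bits = 4

# A universal generalization ("all minds start with a 0 bit") can be
# falsified by any one of those 2**n_bits points:
universal_holds = all(m[0] == 0 for m in all_minds)

# The matching existential claim ("at least one mind starts with a
# 0 bit") has 2**n_bits chances to be true:
existential_holds = any(m[0] == 0 for m in all_minds)

print(universal_holds, existential_holds)  # False True
```

At a trillion bits the same counting argument gives each universal claim 2^1,000,000,000,000 chances to fail, and each existential claim that many chances to hold.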

So you want to resist the temptation to say either that all minds do something, or that no minds do something.

The main reason you could find yourself thinking that you know what a fully generic mind will (or won’t) do is if you put yourself in that mind’s shoes—imagine what you would do in that mind’s place—and get back a generally wrong, anthropomorphic answer. (Albeit the answer is true in at least one case, since you are yourself an example.) Or you might imagine a mind doing something, and then imagine the reasons you wouldn’t do it—so that you conclude that a mind of that type can’t exist, that the ghost in the machine will look over the corresponding source code and hand it back.

Somewhere in mind design space is at least one mind with almost any kind of logically consistent property you care to imagine.

This is why it is important to discuss what happens, lawfully, and why, as a causal result of a mind’s particular constituent makeup: somewhere in mind design space is a mind that does it differently.

Of course you could always say that anything which doesn’t do it your way is “by definition” not a mind; after all, it’s obviously stupid. I’ve seen people try that one too.