Ephemeral correspondence

This is the third of four short essays that say explicitly some things that I would tell an intrigued proto-rationalist before pointing them towards Rationality: AI to Zombies (and, by extension, most of LessWrong). For most people here, these essays will be very old news, as they talk about the insights that come even before the sequences. However, I’ve noticed recently that a number of fledgling rationalists haven’t actually been exposed to all of these ideas, and there is power in saying the obvious.

This essay is cross-posted on MindingOurWay.

Your brain is a machine that builds up mutual information between its insides and its outsides. It is not only an information machine. It is not intentionally an information machine. But it is bumping into photons and air waves, and it is producing an internal map that correlates with the outer world.
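(“Mutual information” here is the standard information-theoretic quantity: a measure, in bits, of how much knowing one variable tells you about another. A minimal sketch of what it measures, with an illustrative “sky sensor” example of my own invention, not from the essay:)

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits, for a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A sensor that reports the true sky state ("blue"/"grey") 90% of the time:
noisy = {("blue", "blue"): 0.45, ("blue", "grey"): 0.05,
         ("grey", "blue"): 0.05, ("grey", "grey"): 0.45}

# A perfectly reliable sensor carries the full 1 bit about a fair
# blue/grey sky; a sensor independent of the sky carries 0 bits;
# the noisy one lands in between (about 0.53 bits).
perfect = {("blue", "blue"): 0.5, ("grey", "grey"): 0.5}
broken  = {("blue", "blue"): 0.25, ("blue", "grey"): 0.25,
           ("grey", "blue"): 0.25, ("grey", "grey"): 0.25}
```

A map-making machine is doing well exactly to the extent that this number, between its internal states and the world, is high.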

However, there’s something very strange going on in this information machine.

Consider: part of what your brain is doing is building a map of the world around you. This is done automatically, without much input on your part into how the internal model should look. When you look at the sky, you don’t get a query which says

Readings from the retina indicate that the sky is blue. Represent sky as blue in world-model? [Y/n]

No. The sky just appears blue. That sort of information, gleaned from the environment, is baked into the map.

You can choose to claim that the sky is green, but you can’t choose to see a green sky.

Most people don’t identify with the part of their mind that builds the map. That part fades into the background. It’s easy to forget that it exists, and pretend that the things we see are the things themselves. If you didn’t think too carefully about how the brain works, you might think that brains implement people in two discrete steps: (1) build a map of the world; (2) implement a planner that uses this map to figure out how to act.

This is, of course, not at all what happens.

Because, while you can’t choose to see the sky as green, you do get to choose how some parts of the world-model look. When your co-worker says “nice job, pal,” you do get to decide whether to perceive it as a compliment or an insult.

Well, kinda-sorta. It depends upon the tone and the person. Some people will automatically take it as a compliment, others will automatically take it as an insult. Others will consciously dwell on it for hours, worrying. But nearly everyone experiences more conscious control over whether to perceive something as complimentary or insulting than over whether to perceive the sky as blue or green.

This is intensely weird as a mind design, when you think about it. Why is the executive process responsible for choosing what to do also able to modify the world-model? Furthermore, WHY IS THE EXECUTIVE PROCESS RESPONSIBLE FOR CHOOSING WHAT TO DO ALSO ABLE TO MODIFY THE WORLD-MODEL? This is just obviously going to lead to horrible cognitive dissonance, self-deception, and bias! AAAAAAARGH.

There are “reasons” for this, of course. We can look at the evolutionary history of human brains and get hints as to why the design works like this. A brain has a pretty direct link to the color of the sky, whereas it has a very indirect link to the intentions of others. It makes sense that one of these would be set automatically, while the other would require quite a bit of processing. And it kinda makes sense that the executive control process gets to affect the expensive computations but not the cheap ones (especially if the executive control functionality originally rose to prominence as some sort of priority-aware computational expedient).

But from the perspective of a mind designer, it’s bonkers. The world-model-generator isn’t hooked up directly to reality! We occasionally get to choose how parts of the world-model look! We, the tribal monkeys known for self-deception and a propensity to be manipulated, get a say in how the information engine builds the thing which is supposed to correspond to reality!

(I struggle with the word “we” in this context, because I don’t have words that differentiate between the broad-sense “me” which builds a map of the world in which the sky is blue, and the narrow-sense “me” which doesn’t get to choose to see a green sky. I desperately want to shatter the word “me” into many words, but these discussions already have too much jargon, and I have to pick my battles.)

We know a bit about how machines can generate mutual information, you see, and one of the things we know is that in order to build something that sees the sky as the appropriate color, the “sky-color” output should not be connected to an arbitrary monkey answering a multiple choice question under peer pressure, but should rather be connected directly to the sky-sensors.

And sometimes the brain does this. Sometimes it just friggin’ puts a blue sky in the world-model. But other times, for one reason or another, it tosses queries up to conscious control.

Questions like “is the sky blue?” and “did my co-worker intend that as an insult?” are of the same type, and yet on one of them we get input, and on the other we don’t. The brain automatically builds huge swaths of the map, but important features of it are left up to us.

Which is worrying, because most of us aren’t exactly natural-born masters of information theory. This is where rationality training comes in.

Sometimes we get conscious control over the world-model because the questions are hard. Executive control isn’t needed in order to decide what color the sky is, but it is often necessary in order to deduce complex things (like the motivations of other monkeys) from sparse observations. Studying human rationality can improve your ability to generate more accurate answers when executive-controller-you is called upon to fill in features of the world-model that subconscious-you could not deduce automatically: filling in the mental map accurately is a skill that, like any skill, can be trained and honed.

Which almost makes it seem like it’s OK for us to have conscious control over the world-model. It almost makes it seem fine to let humans control what color they see the sky: after all, they could always choose to leave their perception of the sky linked up to the actual sky.

Except, you and I both know how that would end. Can you imagine what would happen if humans actually got to choose what color to perceive the sky as, in the same way they get to choose what to believe about the loyalty of their lovers, the honor of their tribe, the existence of their deities?

About six seconds later, people would start disagreeing about the color of the freaking sky (because who says that those biased sky-sensors are the final authority?). They’d immediately split along tribal lines and start murdering each other. Then, after things calmed down a bit, everyone would start claiming that because people get to choose whatever sky color they want, and because different people have different favorite colors, there’s no true sky-color. Color is subjective, anyway; it’s all just in our heads. If you tried to suggest just hooking sky-perception up to the sky-sensors, you’d probably wind up somewhere between dead and mocked, depending on your time period.

The sane response, upon realizing that internal color-of-the-sky is determined not by the sky-sensors, but by a tribal monkey-mind prone to politicking and groupthink, is to scream in horror and then directly re-attach the world-model-generator to reality as quickly as possible. If your mind gave you a little pop-up message reading

For political reasons, it is now possible to disconnect your color-perception from your retinas and let peer pressure determine what colors to see. Proceed? [Y/n]

then the sane response, if you are a human mind, is a slightly panicked “uh, thanks but no thanks, I’d like to PLEASE LEAVE THE WORLD-MODEL GENERATOR HOOKED UP TO REALITY PLEASE.”

But unfortunately, these occasions don’t feel like pop-up windows. They don’t even feel like choices. They’re usually automatic, and they barely happen at the level of consciousness. Your world-model gets disconnected from reality every time you automatically find reasons to ignore evidence which conflicts with the way you want the world to be (because it comes from someone who is obviously wrong!); every time you find excuses to disregard observations (that study was poorly designed!); every time you find reasons to stop searching for more data as soon as you’ve found the answer you like (because what would be the point of wasting time by searching further?).

Somehow, tribal social monkeys have found themselves in control of part of their world-models. But they don’t feel like they’re controlling a world-model; they feel like they’re right.

You yourself are part of the pathway between reality and your map of it, part of a fragile link between what is and what is believed. And if you let your guard down, even for one moment, it is incredibly easy to flinch and shatter that ephemeral correspondence.