Defining causal isomorphism

I previously posted this question in another discussion, but it didn’t get any replies, so, since I now have enough karma, I’ve decided to make it my first “article”.

This brings up something that has been on my mind for a long time. What are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I’d like to capture the notion of being able to contain a consciousness. So what I’m asking is: what would we have to prove in order to say that if program A contains a consciousness, then program B contains a consciousness? “Pointwise” isomorphism, if you’re saying what I think, seems too strict. On the other hand, allowing any invertible function to be a _morphism doesn’t seem strict enough. For one thing we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers. Restricting our functions by, say, resource complexity, also seems to lead to both similar and unrelated issues...
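To make that counterexample concrete, here is a minimal Python sketch (my own illustration, using a toy reversible step function as the assumed computation): because the step is invertible, the map from (initial state, tick count) to the state after that many steps is a bijection, so the states of the do-nothing counter program line up 1-1 with the states of the real computation.

```python
def step(state):
    # A toy reversible step: rotate the tuple one position left.
    return state[1:] + state[:1]

def inverse_step(state):
    # Reversibility: this undoes step exactly.
    return state[-1:] + state[:-1]

def run(state, t):
    # The "real" computation: apply the reversible step t times.
    for _ in range(t):
        state = step(state)
    return state

def counter_program(initial, t):
    # The do-nothing program: store the initial state, tick a counter.
    return (initial, t)

def decode(counter_state):
    # The invertible map from counter states to real states.
    initial, t = counter_state
    return run(initial, t)

initial = (1, 2, 3, 4)
# Every counter state corresponds to exactly one real state, and
# (because step is invertible) vice versa.
assert decode(counter_program(initial, 3)) == run(initial, 3)
assert inverse_step(step(initial)) == initial
```

The worry, of course, is that under an “any invertible function” standard these two programs would count as isomorphic.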

Any tak­ers?

• On the other hand, allowing any invertible function to be a _morphism doesn’t seem strict enough. For one thing we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers.

I don’t understand why this is a counterexample.

• Neither do I, but my intuition suggests that a static copy of a brain/the software necessary to emulate it plus a counter wouldn’t cause that brain to experience consciousness, whereas actually running the simulation as a reversible computation would...

• Can you provide some more background? What is a morphism of computations?

• Those are basically the two questions I want answers to. In the thread I originally posted in, Eliezer refers to “pointwise causal isomorphism”:

> Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can’t internally detect any difference) then my probability of consciousness is essentially “top”, i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn’t?

We could similarly define a pointwise isomorphism between computations A and B. I think I could come up with a formal definition, but what I want to know is: under what conditions is computation A simulated by computation B, so that if computation A is emulating a brain and we all agree that it contains a consciousness, we can be sure that B contains one as well?
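For what it’s worth, here is one hedged stab at what such a formal definition might look like (my own sketch, not Eliezer’s, and it assumes the two computations run in lockstep, one B-step per A-step, which is a real simplification): treat each computation as a step function on states, and ask for an encoding of A-states into B-states that commutes with the two step functions, checked pointwise along a trajectory.

```python
def simulates(step_a, step_b, encode, start, n_steps):
    # Check the commuting condition encode(step_a(s)) == step_b(encode(s))
    # along one trajectory of computation A starting from `start`.
    s = start
    for _ in range(n_steps):
        if encode(step_a(s)) != step_b(encode(s)):
            return False
        s = step_a(s)
    return True

# Toy example: A counts up by 1; B runs the "same" computation with
# every state doubled, and the encoding is the doubling map.
step_a = lambda n: n + 1
step_b = lambda m: m + 2
encode = lambda n: 2 * n

assert simulates(step_a, step_b, encode, 0, 100)
```

Whether a condition like this is strong enough to transfer “contains a consciousness” from A to B is exactly the open question.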

• (homeo?)morphic?

You probably mean homomorphism, unless you really mean a continuous (in some sense) invertible transformation between the two programs.

Anyway, the definition of homomorphism is a “structure-preserving map”, so you need to figure out what “structure of consciousness” even means.

To start small, you might want to define the term “structure” for some simple algorithm. For example, do two different programs outputting the first 10 natural numbers have the same structure? What if one prints them and the other uses TTS? Does it matter what language the numbers are in? What about two programs, one printing the first ten numbers and the other the second ten? Can you come up with more examples?
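To make the first of those examples concrete, here are two toy Python programs (my own illustration): identical output, but arguably different internal structure, since one computes the list and the other just stores it.

```python
def first_ten_loop():
    # Computes the list step by step with a loop.
    out = []
    for i in range(1, 11):
        out.append(i)
    return out

def first_ten_literal():
    # No loop at all: the "computation" is a stored literal.
    return [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

assert first_ten_loop() == first_ten_literal()
```

Whether these two count as having the “same structure” is exactly the kind of question a definition would have to settle.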

• What is TTS?

• sorry… text to speech.

• Here is a post on a related question. As I said there, this paper is relevant.

• I don’t see the relevance of either of these links.

• Two points of relevance that I see are:

If we care about the nature of morphisms of computations only because of some computations being people, the question is fundamentally what our concept of people refers to, and if it can refer to anything at all.

If we view isomorphic as a kind of extension of our naïve view of equals, we can ask what the appropriate generalisation is when we discover that equals does not correspond to reality and we need a new ontology as in the linked paper.

• Actually, I started thinking about computations containing people (in this context) because I was interested in the idea of one computation simulating another, not the other way around. Specifically, I started thinking about this while reading Scott Aaronson’s review of Stephen Wolfram’s book. In it, he makes a claim something like: the Rule 110 cellular automaton hasn’t been proved to be Turing-complete because the simulation has an exponential slowdown. I’m not sure if the claim was that strong, but it was definitely claimed later by others that Turing-completeness hadn’t been proved for that reason. I felt this was wrong, and justified my feeling with a thought experiment: suppose we had an intelligence that was contained in a computer program and we simulated this program in Rule 110, with the exponential slowdown. Assuming the original program contained a consciousness, would the simulation also? And I felt strongly, and still do, that it would.

It was later shown, if I’m remembering right, that there was a simulation with only polynomial slowdown, but I still think it’s a useful question to ask, although the notion it captures, if it does so at all, seems to me to be a slippery one.

• Let’s start small. Since we are talking about algorithms (better yet, about programs of a universal Turing machine), what if we say two programs are equivalent when they map the same inputs to the same outputs? Would that suffice as a definition of isomorphism, even if they have wildly different resource usages (including time)?
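Under that purely extensional definition, these two Python functions (a standard textbook pair, not from the thread) count as equivalent even though one takes exponential time and the other linear:

```python
def fib_slow(n):
    # Naive recursion: exponential time.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

def fib_fast(n):
    # Iterative: linear time.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Same input/output behavior on every input we check, despite wildly
# different resource usage.
assert all(fib_slow(n) == fib_fast(n) for n in range(20))
```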

• What if they don’t output anything?

• Sadly, even that is undecidable: I believe there’s a way to convert a solution for this problem into one for the halting problem.

Of course you can still do it for a large fraction of functions you’re likely to see in real life.

• Yes: it’s not possible to decide, given two programs, whether they have identical behavior. I think that’s okay here; the original poster asked for a definition of equivalence, not for a definition that was always decidable.
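For completeness, here is the standard reduction sketched in Python (my own rendering; the `equivalent` oracle is hypothetical, which is exactly the point): a decider for input/output equivalence would yield a halting-problem decider, so none can exist.

```python
def make_halting_decider(equivalent):
    # Given a (hypothetical) decider for program equivalence, build a
    # decider for the halting problem -- hence no such decider exists.
    def halts(program, x):
        def q(_):
            program(x)   # loops forever iff program(x) does
            return 0     # reached only if program(x) halts
        def always_zero(_):
            return 0
        # q behaves like always_zero iff program(x) halts.
        return equivalent(q, always_zero)
    return halts

# Toy stand-in that merely samples a few inputs; a genuine decider
# is impossible, which is what the reduction shows.
toy_equivalent = lambda f, g: all(f(i) == g(i) for i in range(3))
halts = make_halting_decider(toy_equivalent)
assert halts(lambda x: x + 1, 5)  # a terminating program: reports True
```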

• I don’t think I’ll ever figure this sort of problem out in my lifetime. If I were looking for a place to start, I’d look at the Kolmogorov complexity of the transformation.