[Question] What does Solomonoff induction say about brain duplication/consciousness?

Back in 2012, in a thread on LW, Carl Shulman wrote a couple of comments connecting Solomonoff induction to brain duplication, epiphenomenalism, functionalism, David Chalmers’s “psychophysical laws”, and other ideas in consciousness.

The first comment says:

It seems that you get similar questions as a natural outgrowth of simple computational models of thought. E.g. if one performs Solomonoff induction on the stream of camera inputs to a robot, what kind of short programs will dominate the probability distribution over the next input? Not just programs that simulate the physics of our universe: one would also need additional code to “read off” the part of the simulated universe that corresponded to the camera inputs. That additional code looks like epiphenomenal mind-stuff. Using this framework you can pose questions like “if the camera is expected to be rebuilt using different but functionally equivalent materials, will this change the inputs Solomonoff induction predicts?” or “if the camera is about to be duplicated, which copy’s inputs will be predicted by Solomonoff induction?”

If we go beyond Solomonoff induction to allow actions, then you get questions that map pretty well to debates about “free will.”

The second comment says:

The code simulating a physical universe doesn’t need to make any reference to which brain or camera in the simulation is being “read off” to provide the sensory input stream. The additional code takes the simulation, which is a complete picture of the world according to the laws of physics as they are seen by the creatures in the simulation, and outputs a sensory stream. This function is directly analogous to what dualist/epiphenomenalist philosopher of mind David Chalmers calls “psychophysical laws.”

Carl’s comments pose the questions and highlight the connection, but they don’t answer those questions. I would be interested in references to other places discussing this idea, or in answers to these questions.

Here are some of my own confused thoughts (I’m still trying to learn algorithmic information theory, so I would appreciate hearing any corrections):

  • If the camera is duplicated, it seems to take a longer program to track/read off two cameras in the physical world instead of just one, so Solomonoff induction would seem to prefer a bit sequence where only one camera’s inputs are visible. However, there seems to be a symmetry between the two cameras, in that neither one takes a longer program to track. So right before the camera is duplicated, Solomonoff induction “knows” that it will be in just one of the cameras soon, but doesn’t know which one. If we are using the variant of Solomonoff induction that puts a probability distribution over sequences, then the probability mass will split in half between the two copies’ views. Is this right? If we are using the variant of Solomonoff induction that just prints out bits, then I don’t know what happens; does it just flip a coin to decide which camera’s view to print?
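To make the mass-splitting intuition concrete, here is a toy sketch. Real Solomonoff induction is uncomputable, so this uses a small hand-picked hypothesis class with a Solomonoff-style prior of 2^-length; the streams and program lengths are all invented for illustration. The point is just that two symmetric continuations (“track copy A” vs. “track copy B”) end up with equal posterior mass:

```python
# Toy illustration only: real Solomonoff induction is uncomputable.
# Each hypothesis is a (predicted input stream, program length in bits)
# pair; the prior weight is 2^-length, and we condition on the inputs
# observed so far.

def posterior(hypotheses, observed):
    """Posterior over hypotheses consistent with the observed prefix."""
    weights = {
        stream: 2.0 ** -length
        for stream, length in hypotheses
        if stream.startswith(observed)
    }
    total = sum(weights.values())
    return {stream: w / total for stream, w in weights.items()}

# Invented lengths: tracking copy A and copy B are symmetric, so their
# programs have the same length; a "weird" alternative is longer.
hyps = [
    ("0101" + "A-view", 20),  # world-sim + read off camera copy A
    ("0101" + "B-view", 20),  # world-sim + read off camera copy B
    ("0101" + "static", 35),  # some longer, less natural program
]
post = posterior(hyps, "0101")  # condition on the pre-duplication inputs
```

Under these made-up numbers, the A and B hypotheses split almost all of the mass exactly in half, which is the behavior the bullet above conjectures for the distribution variant of Solomonoff induction.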

  • If the camera is rebuilt using different but functionally equivalent materials, I guess it would be possible to track the inputs to the new camera, but wouldn’t it be even simpler to stop tracking the physical world altogether (and just return a blank camera view)?
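One way to see why “stop tracking” could dominate: under a 2^-K prior, any constant saving in program length becomes an exponential factor in prior weight. A minimal sketch, with invented bit-lengths (note a real “go blank” program would still have to reproduce the entire observed history first, which is part of what makes the actual comparison unclear):

```python
# Hedged sketch: the program lengths below are invented; the only point is
# that under a 2^-K prior a modest difference in length becomes an
# exponential difference in weight.

def prior_weight(length_bits):
    # Solomonoff-style prior: weight 2^-length for a program of that length.
    return 2.0 ** -length_bits

track_rebuilt = prior_weight(1000)  # world-sim + code re-locating the new camera
blank_forever = prior_weight(400)   # replay history, then output a blank view
ratio = blank_forever / track_rebuilt  # 2**600: the shorter program dominates
```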

  • In general, any time anything “drastic” happens to the camera (like a rock falling on it), it seems like one of the shortest programs is to just assume the camera stops working (or does something else that’s weird-but-simple, like just “reading off” a fixed location without motion). But I’m not sure how to classify what counts as “drastic”. 
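The same kind of toy comparison suggests why “the camera just stops working” can dominate after a drastic event: if both hypotheses reproduce the observed inputs up to the rock, the shorter continuation takes essentially all of the posterior mass (again, the streams and lengths below are invented):

```python
# Another invented toy: both hypotheses reproduce the observed inputs up to
# the moment the rock falls; they differ only in the continuation and in
# (made-up) program length.
observed = "110100"  # inputs up to the drastic event
hyps = [
    ("110100" + "000000", 300),  # world-sim + "camera stopped working"
    ("110100" + "101110", 340),  # world-sim + detailed damaged-camera readout
]
weights = {s: 2.0 ** -l for s, l in hyps if s.startswith(observed)}
total = sum(weights.values())
post = {s: w / total for s, w in weights.items()}
```

Of course, this just pushes the question back a step: the sketch assumes the “camera stopped working” program really is shorter, which is exactly the “what counts as drastic” question.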