Reflective AIXI and Anthropics

It's possible to define a version of Solomonoff Induction with Reflective Oracles that allows an AIXI-like agent to consider hypotheses that include itself or other equally powerful agents, going partway towards addressing naturalized-induction issues.

So then a natural question is "what does this partial answer seem to point to for anthropics?"

To figure this out, we'll be going over a few of the thought experiments in Bostrom's book about anthropic reasoning, and seeing what Reflective-Oracle AIXI has to say about them.

The following conclusions are very dependent on how many extra bits it takes to encode "same environment, but I'm that other agent over there", so I'll be making a lot of assumptions that I can't prove, such as assuming that the most efficient way of encoding an environment is to specify the environment and then specify the place in it that the agent interfaces with. This seems unavoidable so far, so I'll at least make an effort to list out all the implicit assumptions that go into setting up the problems.

As a quick refresher, SSA (the self-sampling assumption) and SIA (the self-indication assumption) work as follows: SSA takes the probability of a world as given and evenly distributes that probability mass across the observers in "your reference class" within that particular world. SIA reweights the probability of a world by the number of instances of "things in your reference class" that it contains. In short, SIA has a strong bias in favor of possible worlds/hypotheses/Turing machines with many instances of you, while SSA doesn't care about how many instances of you are present in a possible world.
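To make the two rules concrete, here is a minimal sketch (my own illustration, not from anything above) of how SSA and SIA assign credence in a toy two-world setup; representing a world as a (name, prior, observer-count) triple is just a convenient assumption for the example.

```python
# A minimal sketch (my own, not from the post) of how SSA and SIA assign credence.
# A world is represented as (name, prior probability, number of observers in your
# reference class); this representation is just for illustration.

def ssa_credence(worlds, target):
    # SSA: keep the prior over worlds as-is; observer counts don't matter.
    total = sum(prior for _, prior, _ in worlds)
    return sum(prior for name, prior, _ in worlds if name == target) / total

def sia_credence(worlds, target):
    # SIA: reweight each world's prior by its number of reference-class observers.
    total = sum(prior * count for _, prior, count in worlds)
    return sum(prior * count for name, prior, count in worlds if name == target) / total

# Two equally likely worlds, one with a single observer and one with two.
worlds = [("one-observer", 0.5, 1), ("two-observer", 0.5, 2)]
print(ssa_credence(worlds, "one-observer"))  # 0.5
print(sia_credence(worlds, "one-observer"))  # 0.333... (SIA favors the bigger world)
```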

Thought Experiment 1: Incubator

Stage (a): In an otherwise empty world, a machine called "the incubator" kicks into action. It starts by tossing a fair coin. If the coin falls tails then it creates one room and a man with a black beard inside it. If the coin falls heads then it creates two rooms, one with a black-bearded man and one with a white-bearded man. As the rooms are completely dark, nobody knows his beard color. Everybody who's been created is informed about all of the above. You find yourself in one of the rooms. Question: What should be your credence that the coin fell tails?
Stage (b): A little later, the lights are switched on, and you discover that you have a black beard. Question: What should your credence in Tails be now?

This will be modeled as a Turing machine that represents the environment, with a bit that is used to determine how the coinflip comes up. Also, in the heads case, because there are two possible places where the agent can be hooked up to the environment, another bit is required to specify where the agent is "attached" to the environment. These three cases have minimum description lengths of $n+1$, $n+2$, and $n+2$ bits respectively (where $n$ is the description length of the environment), so by the universal semimeasure, they have (relative) probability mass of 50%, 25%, and 25% respectively.
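As a sanity check on those numbers, here is a small sketch (my own, with an arbitrary value for the environment's description length $n$) that turns the three description lengths into the relative masses quoted above via the $2^{-\text{length}}$ weighting of the universal semimeasure.

```python
# Sketch: description lengths -> relative prior mass under the 2^-length weighting.
n = 100  # description length of the bare environment (arbitrary; it divides out)

lengths = {
    "tails, black-bearded": n + 1,  # environment + coin bit
    "heads, black-bearded": n + 2,  # environment + coin bit + location bit
    "heads, white-bearded": n + 2,
}
masses = {k: 2.0 ** (n - v) for k, v in lengths.items()}  # 2^-length, with 2^-n divided out
total = sum(masses.values())
for name, mass in masses.items():
    print(name, mass / total)  # 0.5, 0.25, 0.25
```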

So, assuming the problem setup actually works this way, the answers are 50% and 67%, respectively. This seems to point towards Reflective-Oracle Solomonoff Induction (RO-SI) doing something like SSA. The intuitive reason why is that a hypothesis with a bunch of copies of you requires a bunch of extra bits to specify which copy of you the input data stream is coming from, and this cancels out with the increased number of hypotheses where you are in the well-populated world. There may be $2^{50}$ copies of you in a "world", but because it requires 50 bits to specify "I'm that copy right there", each specific hypothesis/Turing machine of the form "I'm in that world and am also that particular copy" requires 50 extra bits to specify where in the environment the data is being read out from, and receives a probability penalty of $2^{-50}$, which, when multiplied by the large number of hypotheses of that form, recovers normality.
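Continuing the sketch, conditioning on the stage-(b) observation just throws out the hypothesis that is inconsistent with a black beard, and the "many copies" point is the observation that a $2^k$-fold increase in the number of embeddings is exactly paid for by the $k$ extra bits each embedding costs; the numbers here are again my own toy values.

```python
# Stage (b): update on "I have a black beard" by dropping the inconsistent hypothesis.
prior = {"tails, black-bearded": 0.50, "heads, black-bearded": 0.25, "heads, white-bearded": 0.25}
consistent = {k: p for k, p in prior.items() if "black" in k}
print(consistent["tails, black-bearded"] / sum(consistent.values()))  # 0.666...

# The cancellation: 2^k copies of you, each costing k extra bits to point at,
# leaves the total mass of "that world, some copy of me" unchanged.
k = 50
print((2 ** k) * (2.0 ** -k))  # 1.0
```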

There are two ways where things get more interesting. One is that, for environments with many observers in your reference class (RO-SI uses as its reference class all spots in the environment that receive the exact same observation string as is present in its memory), you'll assign much higher probability to being one of the (fairly few) observers whose spot in the environment is low K-complexity to specify. It definitely isn't a uniform distribution over observers in the possible world; it favors observers whose location in the environment is lower-complexity to specify. A similar effect occurs in logical induction, where there tend to be peaks of trading activity by simple traders on low-K-complexity days. Sam's term for this was "Graham's crackpot": there could be a simple trader with a lot of initial mass that just bides its time until some distant low-K-complexity day and screws up the probabilities then (it can't do so infinitely often, though).

The other point of interest is what this does on the standard counterexamples to SSA.

To begin with, the Doomsday argument is valid for SSA. This doesn't seem like much of a limitation in practice, because RO-SI uses a very restrictive reference class that in most practical cases includes just the agent itself, and also, because RO-SI is about as powerful as possible when it comes to updating on data, the starting prior would very, very quickly be washed out by a maximally-detailed inside view on the probability of extinction using all data that has been acquired so far.

Thought Experiment 2: Adam and Eve

Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and if she did, they would be expelled from Eden and would go on to spawn billions of progeny that would cover the Earth with misery. One day a serpent approached the couple and spoke thus: "Pssst! If you embrace each other, then either Eve will have a child or she won't. If she has a child then you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve doesn't become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes' theorem, the risk that she will have a child is less than one in a billion. Go forth, indulge, and worry not about the consequences!"

Here's where the situation gets nifty.

Assume the environment is as follows: there's the coding of the Turing machine that represents the environment ($n$ bits), the 1 bit that represents "fertile or not", and the bitstring/extra data that specifies where Eve is in the environment ($L$ bits, L for "location"). Eve has been wandering around the Garden of Eden for a bit, and since she's a hyper-powerful inductor, she's accumulated enough information to rule out all the other hypotheses that say she's actually not in the Garden of Eden. So it's down to two hypotheses that are both encoded by $n + L + 1$ bits, which get equal probability. If we assume a utility function that's like "+1 reward for sex, −10 reward for creating billions of suffering beings" (if the penalty scaled with the billions of suffering beings, as it would for an Eve that wasn't scope-insensitive, the serpent's reasoning would fail even on its own terms), the expected utility of sex is $0.5 \cdot (1 - 10) + 0.5 \cdot 1 = -4$, and Eve ignores the serpent.
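Spelled out as a calculation (my own restatement of the numbers above):

```python
# Eve's decision, given the two remaining equal-probability hypotheses.
p_fertile = 0.5
u_sex = {True: 1 - 10, False: 1}  # fertile: +1 for sex, -10 for the billions of sufferers
u_abstain = 0

eu_sex = p_fertile * u_sex[True] + (1 - p_fertile) * u_sex[False]
print(eu_sex)  # -4.0 < 0 = EU(abstain), so Eve ignores the serpent
```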

The specific place where the serpent's reasoning breaks down is assuming that the probability of being Eve/difficulty of specifying Eve's place in the universe goes down/up when a decision is made that results in the world having a lot more beings in it. It doesn't work that way.

However, it gets more interesting if you assume everyone in the resulting created world has sense data such that even a hyper-powerful inductor doesn't know whether or not they are Eve before the fateful decision.

Also, assume that it takes $L'$ bits to specify any particular person's location if they're not Eve. This is a sort of "equally distributed probability" assumption on the future people, which doesn't restrict things that much. Maybe it's much easier to point to Eve than to some other person, maybe it's the other way around.

Also assume that everyone's utility functions are like "+1 for sex, −10 for finding out shortly after sex that you are one of the suffering future beings, or that you created billions of such."

To begin the analysis, break the hypothesis space into:

two worlds of $n + L + 1$ bits where Eve is fertile/infertile, and you are Eve.

and $N$ worlds (one for each of the $N$ future people, some billions of them) of (it depends) bits where Eve was fertile, sex was had, and you are not Eve. The reason why it's tricky to say what the description length of being one of the future agents is, is that it takes fewer bits to encode a world where an agent does a thing in accordance with the laws of math than it takes to encode a world where an agent does a different thing that they wouldn't normally have done. In this particular case, it would take $S$ extra bits (S for "surgery") to specify "at this particular spot, ignore what Eve would have done and instead substitute in the action 'have sex', and then run things normally".

So, if Eve definitely has sex, it takes $n + 1 + L'$ bits to specify one of the future agents. If Eve definitely doesn't have sex, it takes $n + 1 + L' + S$ bits to specify one of the future agents.

Taking these two cases, we can rescale things (dividing out the common factor of $2^{-(n+1)}$) to get a mass of $2^{-L}$, $2^{-L}$, and either $N \cdot 2^{-L'}$ or $N \cdot 2^{-(L'+S)}$ on the three classes of worlds, respectively. Expected utility calculations will work out the same way if we use these numbers instead of probabilities that add up to 1, because it's just a scaling on expected utility, and the scaling can be moved over to the utility function, which is invariant under scale-and-shift. So then, in the first case (Eve has sex, so no surgery bits are needed), the expected utility of sex and not-sex becomes:

$$EU(\text{sex}) = 2^{-L} \cdot (1 - 10) + 2^{-L} \cdot 1 + N \cdot 2^{-L'} \cdot (1 - 10)$$

$$EU(\text{not-sex}) = 2^{-L} \cdot 0 + 2^{-L} \cdot 0 + N \cdot 2^{-L'} \cdot (-10)$$

So sex will be had if $N \cdot 2^{-L'} > 8 \cdot 2^{-L}$. The crossover point occurs approximately at a 30 bit penalty to specify a non-Eve person (since $N$ is in the billions, and $2^{-30}$ is approximately 1/billion). So, if Eve has sex, and assigns less than about a 1/5 chance to being Eve, it's a consistent state of affairs. The reasoning is "I'm probably not Eve, and so I'm probably already going to suffer (since I know in advance what my decision is in this case), might as well pick up that +1 utility".

Redoing this analysis for the case where Eve doesn't have sex, we get that sex will be had if $N \cdot 2^{-(L'+S)} > 8 \cdot 2^{-L}$, and in this case, the crossover point occurs approximately at a 30 bit penalty to specify both the non-Eve person and that particular decision intervention. (There can also be consistent solutions where the reflective oracle is perched right on the decision threshold and randomizes accordingly, but I'll ignore those for the time being; they don't change much.)

Considering the specific case where the ratio of the probability masses for "I'm Eve" and "I'm not Eve" is less than about 1/4 when computed with the sex-case masses, and greater than about 1/4 when computed with the not-sex-case masses, we get a case where the decision made depends on the choice of reflective oracle! If the reflective oracle picks sex, sex is the best decision (by the reasoning "I'm probably not Eve, might as well pick up the +1 utility"). If the reflective oracle picks not-sex, not-sex is the best decision (by the reasoning "I'm likely enough to be Eve (because the non-Eve people live in a lower-probability universe where an intervention on Eve's action happened) that I won't chance it with the coinflip on fertility").
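To make the fixed-point structure concrete, here is a sketch (my own code, with made-up values for $L$, $L'$, $S$, and $N$) that, for each candidate oracle output, computes the rescaled masses that output induces and checks whether the corresponding action really is optimal under those masses. In the spirit of the CDT-style decision rule, the masses depend on what Eve actually does (via the oracle), while the expected-utility comparison between the two candidate actions holds them fixed.

```python
# Sketch of the two candidate fixed points in the Adam-and-Eve setup above.
# Parameter values are arbitrary, chosen so that both fixed points exist.
L = 10        # bits to specify Eve's location
L_PRIME = 35  # bits to specify a non-Eve person's location
S = 10        # bits for the "force the 'have sex' action" surgery
N = 8e9       # number of future people

def best_action(oracle_says_sex: bool) -> str:
    # Non-Eve worlds are cheaper to specify when Eve's sex needs no surgery.
    non_eve_mass = N * 2.0 ** -(L_PRIME if oracle_says_sex else L_PRIME + S)
    eve_mass = 2.0 ** -L  # per Eve world (one fertile, one infertile)
    # Utilities: +1 for sex; -10 for being a suffering future person (sunk in the
    # non-Eve worlds) or for creating billions of them (fertile-Eve world only).
    eu_sex   = eve_mass * (1 - 10) + eve_mass * 1 + non_eve_mass * (1 - 10)
    eu_nosex = eve_mass * 0        + eve_mass * 0 + non_eve_mass * (-10)
    return "sex" if eu_sex > eu_nosex else "not-sex"

for says_sex in (True, False):
    action = best_action(says_sex)
    is_fixed_point = (action == "sex") == says_sex
    print(f"oracle says {'sex' if says_sex else 'not-sex'}: "
          f"best action is {action}; consistent fixed point: {is_fixed_point}")
```

With these particular (assumed) numbers, both "sex" and "not-sex" come out as self-consistent oracle outputs, which is exactly the multiplicity described above.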

So, RO-AIXI doesn't exactly fail in this case (as SSA is alleged to), because there's a flaw in the Serpent's reasoning: the difficulty of specifying where you are in the universe doesn't change when you make a decision that creates a bunch of other agents, so long as you don't think you could be those other agents you're creating.

But if there's a case where the other agents are subjectively indistinguishable from yourself, and it's bad for you to create them, but good for them to push the "create" button, there are multiple fixed-points of reasoning that are of the form "I probably press the button, I'm probably a clone, best to press the button" and "I probably don't press the button, I'm probably not a clone, best to not press the button".

Another interesting angle on this is that the choice of action has a side effect of altering the complexity of specifying various universes in the first place, and the decision rule of RO-AIXI doesn't take this side effect into account; it only cares about the causal consequences of taking a particular action.

The arguments of Lazy Adam, Eve's Card Trick, and UN++ in Bostrom's book fail to apply to RO-AIXI by a similar line of reasoning.

Sleeping Beauty, SSA, and CDT:

There's a possible future issue where, according to this paper, it's possible to money-pump the combination of SSA and CDT (which RO-AIXI uses) in the Sleeping Beauty experiment. Looking further at this is hindered by the fact that RO-AIXI implicitly presumes that the agent has access to the entire string of past observations that it made, so it doesn't interact cleanly with any sort of problem that involves amnesia or memory-tampering. I haven't yet figured out a way around this, so I'm putting up a 500-dollar bounty on an analysis that manages to cram the framework of RO-AIXI into problems that involve amnesia or memory-tampering (as a preliminary step to figure out whether the combination of SSA-like behavior and CDT gets RO-AIXI into trouble by the argument in the aforementioned paper).

Takeaways:

RO-AIXI seems to act according to SSA probabilities, although there are several interesting features of it. The first is that it assigns much more probability to embeddings of the agent in the environment that are low K-complexity; it definitely doesn't assign equal probability to all of them. The second interesting feature is that the reference class it uses is "spots in the environment that can be interpreted as receiving my exact string of inputs", the most restrictive one possible. This opens the door to weird embeddings like "the etchings on that rock, when put through this complicated function, map onto my own sense data", but those sorts of things are rather complex to specify, so they have fairly low probability mass. The third interesting feature is that the probability of being a specific agent in the world doesn't change when you make a decision that produces a bunch of extra agents, which defuses the usual objections to SSA. The final interesting feature is that making a particular decision can affect the complexity of specifying various environments, and the standard decision procedure doesn't take this effect into account, permitting multiple fixed-points of behavior.

Also, I don't know how this interacts with Dutch books on Sleeping Beauty, because it's hard to say what RO-AIXI does in cases with amnesia or memory-tampering, and I'd really like to know and am willing to pay 500 dollars for an answer to that.
