I like simplicity, but not THAT much

Followup to: L-zombies! (L-zombies?)
Reply to: Coscott’s Preferences without Existence; Paul Christiano’s comment on my l-zombies post

In my previous post, I introduced the idea of an “l-zombie”, or logical philosophical zombie: a Turing machine that would simulate a conscious human being if it were run, but that is never run in the real, physical world, so that the experiences that this human would have had, if the Turing machine were run, aren’t actually consciously experienced.

One common reply to this is to deny the possibility of logical philosophical zombies, just as one denies the possibility of physical philosophical zombies: to say that every mathematically possible conscious experience is in fact consciously experienced, and that there is no kind of “magical reality fluid” that makes some of these be experienced “more” than others. In other words, we live in the Tegmark Level IV universe, except that, unlike what Tegmark argues in his paper, there’s no objective measure on the collection of all mathematical structures according to which some mathematical structures somehow “exist more” than others (and, although IIRC that’s not part of Tegmark’s argument, according to which the conscious experiences in some mathematical structures could be “experienced more” than those in other structures). All mathematically possible experiences are experienced, and to the same “degree”.

So why is our world so orderly? There’s a mathematically possible continuation of the world that you seem to be living in where purple pumpkins are about to start falling from the sky, or where the light we observe coming in from outside our galaxy is suddenly replaced by white noise. Why don’t you remember ever seeing anything as obviously disorderly as that?

And the answer to that, of course, is that among all the possible experiences that get experienced in this multiverse, there are orderly ones as well as non-orderly ones, so the fact that you happen to have orderly experiences isn’t in conflict with the hypothesis; after all, the orderly experiences have to be experienced as well.

One might be tempted to argue that it’s somehow more likely that you will observe an orderly world if everybody who has conscious experiences at all sees an orderly world, or if at least most conscious observers do. (The “most observers” version of the argument assumes that there is a measure on the conscious observers, a.k.a. some kind of magical reality fluid.) But this requires the use of anthropic probabilities, and there is simply no (known) system of anthropic probabilities that gives reasonable answers in general. Fortunately, we have an alternative: Wei Dai’s updateless decision theory (which was motivated in part exactly by the problem of how to act in this kind of multiverse). The basic idea is simple (though the details do contain devils): We have a prior over what the world looks like; we have some preferences about what we would like the world to look like; and we come up with a plan for what we should do in any circumstance we might find ourselves in that maximizes our expected utility, given our prior.
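To make the basic idea concrete, here is a minimal toy sketch in Python of the policy-selection step; the worlds, observations, payoffs, and prior weights below are placeholders of my own, not anything taken from Wei Dai’s actual formalism:

```python
from itertools import product

# Toy illustration of UDT-style policy selection: enumerate every plan
# (a mapping from observations to actions) and commit to the one with the
# highest prior-weighted expected utility across all hypothesized worlds.

worlds = {"simple world": 0.9, "pumpkin world": 0.1}  # made-up prior weights

observations = ["ordinary morning", "pumpkins falling"]
actions = ["act as if orderly", "prepare for pumpkins"]

def observation_in(world):
    """Which observation a given world hands you (a gross simplification)."""
    return "ordinary morning" if world == "simple world" else "pumpkins falling"

def utility(world, plan):
    """Made-up payoff: 1 if the plan's action fits the world, else 0."""
    right_action = "act as if orderly" if world == "simple world" else "prepare for pumpkins"
    return 1.0 if plan[observation_in(world)] == right_action else 0.0

# The UDT step: score whole plans against the prior, not individual actions
# chosen after updating on where you happen to find yourself.
best_plan = max(
    (dict(zip(observations, choice)) for choice in product(actions, repeat=len(observations))),
    key=lambda plan: sum(p * utility(w, plan) for w, p in worlds.items()),
)
print(best_plan)
```

The structural point is only that the maximization ranges over whole plans, evaluated against the prior, rather than over individual actions taken after updating on your situation.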

*

In this framework, Coscott and Paul suggest, everything adds up to normality if, instead of saying that some experiences objectively exist more, we happen to care more about some experiences than about others. (That’s not a new idea, of course, nor the first time it has appeared on LW—for example, Wei Dai’s What are probabilities, anyway? comes to mind.) In particular, suppose we just care more about experiences in mathematically really simple worlds—or, more precisely, about places in mathematically simple worlds that are mathematically simple to describe (since there’s a simple program that runs all Turing machines, and therefore all mathematically possible human experiences, always assuming that human brains are computable). Then, even though there’s a version of you that’s about to see purple pumpkins rain from the sky, you act in the way that’s best in the world where that doesn’t happen, because that world has so much lower K-complexity, and because you therefore care so much more about what happens in it.
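One concrete way to cash out “caring more about simpler worlds” (this particular weighting is an illustrative choice of mine, not something the posts I’m replying to commit to) is to weight each world w by roughly 2^-K(w), a Solomonoff-style weight relative to some fixed universal Turing machine, so that the quantity the UDT calculation maximizes looks something like

$$\mathbb{E}[U(\pi)] \;=\; \sum_{w} 2^{-K(w)} \, u\bigl(\text{outcome of plan } \pi \text{ in world } w\bigr).$$

Under a weighting like this, a world whose shortest description needs even a hundred extra bits (say, to specify the pumpkin event) contributes a factor of about 2^-100 less to the sum, which is why the pumpkin-world version of you gets almost no say in what the plan does.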

There’s something unsettling about that which I think deserves to be mentioned, even though I do not think it’s a good counterargument to this view: on priors, it’s very unlikely that the world you experience arises from a really simple mathematical description. (This is a version of a point I also made in my previous post.) Even if physicists had already figured out the simple Theory of Everything (say, a super-simple cellular automaton that accords really well with experiments), you wouldn’t know that this cellular automaton, if you ran it, would really produce you. After all, imagine that somebody intervened in Earth’s history so that orchids never evolved, but otherwise left the laws of physics the same; there might still be humans, or something like humans, and they would still run experiments and find that they match the predictions of the simple cellular automaton, so they would assume that if you ran that cellular automaton, it would compute them—except it wouldn’t, it would compute us, orchids and all. Unless, of course, it does compute them, and a special intervention is required to get the orchids.

So you don’t know that you live in a simple world. But, goes the obvious reply, you care much more about what happens if you do happen to live in the simple world. On priors, that’s probably not true; but it’s best, according to your values, if all people like you act as if they live in the simple world (unless they’re in a counterfactual-mugging type of situation, where they can influence what happens in the simple world even if they’re not in the simple world themselves), because if the actual people in the simple world act like that, that gives the highest utility.

You can adapt an argument I was making in my l-zombies post to this setting: Given these preferences, it’s fine for everybody to believe that they’re in a simple world, because this will increase the correspondence between map and territory for the people who do live in simple worlds, and those are the people you care most about.

*

I mostly agree with this reasoning. I agree that Tegmark IV without a measure seems like the most obvious and reasonable hypothesis about what the world looks like. I agree that there seems to be no reason for there to be a “magical reality fluid”. I agree, therefore, that on the priors I’d put into my UDT calculation for how I should act, it’s much more likely that true reality is a measureless Tegmark IV than that it has some objective measure according to which some experiences are “experienced less” than others, or not experienced at all. I don’t think I understand things well enough to be extremely confident in this, but my odds would certainly be in favor of it.

Moreover, I agree that if this is the case, then my preferences are to care more about the simpler worlds, making things add up to normality; I’d want to act as if purple pumpkins are not about to start falling from the sky, precisely because I care more about the consequences my actions have in more orderly worlds.

But.

*

Imagine this: Once you finish reading this article, you hear a bell ringing, and then a sonorous voice announces: “You do indeed live in a Tegmark IV multiverse without a measure. You had better deal with it.” And then it turns out that it’s not just you who’s heard that voice: every single human being on the planet (who didn’t sleep through it, isn’t deaf, etc.) has heard those same words.

On the measureless Tegmark IV hypothesis, this is of course about to happen to you, though only in one of those high-K-complexity worlds that you don’t care about very much.

So let’s consider the following possible plan of action: You could act as if there is some difference between “existence” and “non-existence”, or perhaps some graded degree of existence, until you hear those words and confirm that everybody else has heard them as well, or until you’ve experienced some similarly obviously “disorderly” event. Until that happens, you do things like invest time and energy into figuring out what the best way to act is if it turns out that there is some magical reality fluid, and into figuring out what a non-confused version of something like a measure on conscious experience could look like, and you act in ways that don’t kill you if we happen not to live in a measureless Tegmark IV. But once you’ve had a disorderly experience, just a single one, you switch over to optimizing for the measureless mathematical multiverse.
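Written as a single plan of the kind the UDT sketch above selects over, this is just a conditional on whether your observation history contains even one such event. A minimal sketch, with the “disorderly” test and both actions as placeholder stand-ins of my own for the complex human judgments involved:

```python
def proposed_plan(observations):
    """Hedge until a single obviously disorderly event has been observed,
    then switch to optimizing purely for the measureless Tegmark IV
    hypothesis.  (Illustrative sketch; the predicate and the two 'actions'
    are stand-ins, not formal definitions.)"""
    def obviously_disorderly(obs):
        return obs in {"pumpkins falling from the sky",
                       "voice announcing measureless Tegmark IV"}

    if any(obviously_disorderly(obs) for obs in observations):
        return "optimize for the measureless mathematical multiverse"
    return "keep hedging: investigate reality fluid, act so as not to get killed"

print(proposed_plan(["ordinary morning"]))
print(proposed_plan(["ordinary morning", "pumpkins falling from the sky"]))
```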

If the degree to which you care about worlds really falls off with their K-complexity, with respect to what you and I would consider a “simple” universal Turing machine, then this would be a silly plan; there is very little to be gained from being right in worlds with that much higher K-complexity. But when I query my intuitions, it seems like a rather good plan:

  • Yes, I care less about those disorderly worlds. But not as much less as if I valued them purely by their K-complexity. I seem to be willing to tap into my complex human intuitions to refer to the notion of a “single obviously disorderly event”, and to assign the worlds with a single such event, but otherwise low K-complexity, not that much lower importance than the worlds with actually low K-complexity.

  • And if I imagine that the confused-seeming notions of “really physically exists” and “actually experienced” do have some objective meaning independent of my preferences, then I care much more about the difference between “I get to ‘actually experience’ a tomorrow” and “I ‘really physically’ get hit by a car today” than I care about the difference between the world with truly low K-complexity and the worlds with a single disorderly event.

In other words, I agree that on the priors I put into my UDT calculation, it’s much more likely that we live in a measureless Tegmark IV; but my confidence in this isn’t extreme, and if we don’t, then the difference between “exists” and “doesn’t exist” (or “is experienced a lot” and “is experienced only infinitesimally”) is very important; much more important, according to my preferences, than the difference between “simple world” and “simple world plus one disorderly event” is if we do live in a Tegmark IV universe. If I act optimally according to the Tegmark IV hypothesis in the latter worlds, that still gives me most of the utility that acting optimally in the truly simple worlds would give me—or, more precisely, the utility differential isn’t nearly as large as the one at stake if there is in fact something else going on that I should be doing something about and am not.
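As a purely illustrative back-of-the-envelope version of that comparison (all of these numbers are made up): say I put probability p = 0.9 on the measureless Tegmark IV hypothesis, hedging costs me about c = 1 unit of utility if that hypothesis is true (resources spent on reality fluid, slightly suboptimal behavior in the simple-plus-one-disorderly-event worlds), and hedging gains me about G = 50 units if it is false, because the existence/non-existence distinction then matters enormously. The hedged plan wins whenever

$$(1 - p)\,G \;>\; p\,c, \qquad \text{here: } 0.1 \times 50 = 5 \;>\; 0.9 \times 1 = 0.9,$$

so even a fairly lopsided prior in favor of measureless Tegmark IV doesn’t make the hedging pointless, as long as the stakes in the other case are large enough.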

This is the reason why I’m trying to think seriously about things like l-zombies and magical reality fluid. I mean, I don’t even think that these ideas are particularly likely to be exactly right even if the measureless Tegmark IV hypothesis is wrong; I expect that there would be some new insight that makes even more sense than Tegmark IV and makes all the confusion go away. But trying to grapple with the confused intuitions we currently have seems at least a possible way to make progress on this, if it should turn out that there is in fact progress to be made.

*

Here’s one avenue of investigation that seems worthwhile to me, and that wouldn’t seem worthwhile without the above argument. One thing I could imagine finding, that would make the confusion go away, is that the intuitive notion of “all possible Turing machines” is just wrong, and leads to outright contradictions (e.g., to inconsistencies in Peano Arithmetic, or something similarly convincing). Lots of people have entertained the idea that concepts like the real numbers don’t “really” exist, and that only the behavior of computable functions is “real”; perhaps not even that is real, and true reality is more restricted still? (You can reinterpret many results about real numbers as results about computable functions, so maybe you could reinterpret results about computable functions as results about these hypothetical weaker objects that would actually make mathematical sense.) So it wouldn’t be the case after all that there is some Turing machine that computes the conscious experiences you would have if pumpkins started falling from the sky.

Does the above make sense? Probably not. But I’d say there’s a small chance that maybe it does, and that if we understood the right kind of math, it would seem very obvious that not all intuitively possible human experiences are actually mathematically possible (just as obvious as it is today, with hindsight, that there is no Turing machine which takes a program as input and outputs whether that program halts). Moreover, it seems plausible that this could have consequences for how we should act. This, together with my argument above, makes me think that this sort of thing is worth investigating—even if my priors are heavily on the side of expecting that all experiences exist to the same degree, and ordinarily this difference in probabilities would make me think that our time would be better spent investigating other, more likely hypotheses.

*

Leaving aside the question of how I should act, though, does all of this mean that I should believe that I live in a universe with l-zombies and magical reality fluid, until such time as I hear that voice speaking to me?

I do feel tempted to invoke my argument from the l-zombies post that I prefer the map-territory correspondences of actually existing humans to be correct, and don’t care about whether l-zombies have their maps match up with the territory. But I’m not sure that I care much more about actually existing humans being correct, if the measureless mathematical multiverse hypothesis is wrong, than I care about humans in simple worlds being correct, if that hypothesis is right. So I think the right thing to do may be to hold the subjective belief that I most likely do live in the measureless Tegmark IV, as long as that’s the view that seems by far the least confused—but to continue to spend resources on investigating alternatives, because on priors the alternatives don’t seem unlikely enough to outweigh the great potential importance of getting this right.