# ike

Karma: 612
• All provable statements follow from the axioms, including “P or not P” for any particular P. It’s provable in the same sense that any other statement is provable.

• Note that you can prove “P or not P”.

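The provability claim above can be made concrete. In classical logic the law of excluded middle is a theorem for any proposition, which a proof assistant can check directly (a minimal Lean 4 sketch, using the standard `Classical.em` axiom):

```lean
-- "P or not P" is provable for any particular proposition P.
-- Classical.em is Lean's built-in law of excluded middle.
example (P : Prop) : P ∨ ¬P := Classical.em P
```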
• How does it imply that?

I have intuitions on both sides. The intuition against is that predicting the outcome of a process can be done without having anything isomorphic to individual steps in that process—it seems plausible (or at the very least, possible and coherent) for humans to be predictable, even perfectly, without having something isomorphic to a human. But a perfect predictor would count as an arbitrarily accurate simulation.

• Causality is different, for one. The you in reality has a causal structure where future actions are caused by your present state plus some inputs. The you in the simulation has a causal structure where actions are caused, to some extent, by the simulator.

I’m not really assuming that. My question is whether there’s a coherent position where humans are conscious, p-zombie humans are impossible, but simulations can be high fidelity yet not conscious.

I’m not asking if it’s true, just whether the standard argument against p-zombies rules this out as well.

• But obviously you as a simulation differ in some respects from you in reality. It’s not obvious that the argument carries over.

• Does the anti-p-zombie argument imply you can’t simulate humans past some level of fidelity without producing qualia/consciousness?

Or is there a coherent position whereby p-zombies are impossible but arbitrarily accurate simulations that aren’t conscious are possible?

# ike’s Shortform

1 Sep 2019 18:48 UTC
5 points
• Stop by your local college, locate the relevant department, and ask around.

• Luddites and communist movements in countries that didn’t adopt communism come to mind.

• Looking at my own experience, the thing that motivated me to do things likely to fail is the expectation of getting other benefits even if they failed. One such thing is “experience”, but it could also be “it’ll be fun” or “attempting will give you status even if you fail” or any number of other things.

Or, if there is feedback after relatively little effort (you find out after the first few chapters whether people like it).

There’s just something about “work hard for an extended period of time with no feedback until you find out if you won, which is a binary event with low odds” that turns people off, I guess.

• I identified one paper, and it cites another that also claims this is flawed. I don’t see a reason to believe the original paper over those.

• Feels like there has to be something wrong with the paper. I don’t have the knowledge to analyze it myself, but I read through the paper up to the methods section, and they don’t discuss much beyond the math. It’s unclear to me how they’re arriving at a conclusion where different things happened from different perspectives, and in particular what percent of the time that would happen.

If someone familiar with the math could explain what the probability of each step is, I think it would be a lot simpler to follow.

• It’s not just the one post; it’s the whole sequence of related posts.

It’s hard for me to summarize it all and do it justice, but it disagrees with the way you’re framing this. I would suggest you read some of that sequence and/or some of the decision theory papers for a defense of “should” notions being used even when believing in a deterministic world, which you reject. I don’t really want to argue the whole thing from scratch, but that is where our disagreement would lie.

• If they can perfectly predict your actions, then you have no choice, so talking about which choice to make is meaningless.

This was argued against in the Sequences, and in general it doesn’t seem to be a strong argument. It seems perfectly compatible to believe your actions follow deterministically and still talk about decision theory—all the functional decision theory stuff assumes a deterministic decision process, I think.

Re QM: sometimes I’ve seen it stipulated that the world in which the scenario happens is deterministic. It’s entirely possible that the amount of noise generated by QM isn’t enough to affect your choice (besides a very unlikely “your brain has a couple of bits changed randomly in exactly the right way to change your choice”, but that should be too many orders of magnitude unlikely to matter in any expected utility calculation).
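The orders-of-magnitude point can be checked with a quick calculation. A sketch with illustrative numbers (the probabilities and utilities here are assumptions, not figures from the discussion): even an enormous utility swing from a decision-flipping quantum fluke contributes nothing measurable to expected utility when its probability is tiny.

```python
# Illustrative sketch: a quantum fluke that flips the decision is assumed
# to have probability ~1e-20 (hypothetical number for the sake of example).
p_flip = 1e-20          # assumed probability of the decision-changing fluke
u_normal = 1_000_000    # utility if the decision goes as predicted
u_flipped = -1_000_000  # utility if the fluke flips the decision

# Expected utility over both branches.
expected = (1 - p_flip) * u_normal + p_flip * u_flipped
print(expected)  # indistinguishable from u_normal at float precision
```

At these magnitudes the fluke term is smaller than the floating-point resolution of the normal-branch utility, which is the comment's point: the event is too improbable to move the calculation.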