# jacobt comments on Should logical probabilities be updateless too?

• I think CM with a logical coin is not well-defined. Say Omega determines whether or not the millionth digit of pi is even. If it's even, you verify this and then Omega asks you to pay $1000; if it's odd, Omega gives you $1000000 iff you would have paid Omega had the millionth digit of pi been even. But the counterfactual "would you have paid Omega had the millionth digit of pi been even and you verified this" is undefined if the digit is in fact odd, since you would have realized that it is odd during verification. If you don't actually verify it, then the problem is well-defined because Omega can just lie to you. I guess you could ask the counterfactual "what if your digit verification procedure malfunctioned and said the digit was even", but now we're getting into doubting your own mental faculties.

• Perhaps I am missing the obvious, but why is this a hard problem? So our protagonist AI has some algorithm to determine if the millionth digit of pi is odd: he cannot run it yet, but he has it. Let's call that function f(), which returns 1 if the digit is odd, or 0 if it is even. He also has some other function like: sub pay_or_no { if (!f()) { pay(1000); } } (paying exactly when the digit is even).

In this fashion, Omega can verify the algorithm that returns the millionth digit of pi, independently verify the algorithm that pays based on that return, and our protagonist gets his money.
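A minimal Python sketch of this scheme, assuming the agent publishes its payment rule as inspectable source code so Omega can verify it without running the parity algorithm. Every name here (digit_parity, pay, PAY_RULE) is hypothetical, invented for the illustration:

```python
def digit_parity():
    # Stand-in for the real (expensive) computation of the millionth
    # digit of pi.  For the demo we simply assume it came out odd.
    return 1  # 1 = odd, 0 = even

payments = []

def pay(amount):
    payments.append(amount)

# The agent's published payment rule: pay exactly when the digit is even.
PAY_RULE = "if digit_parity() == 0: pay(1000)"

def pay_or_no():
    exec(PAY_RULE)

# Omega verifies the rule by inspecting its source, not by running
# digit_parity(): the rule provably pays in the even-digit world.
omega_awards_million = ("digit_parity() == 0" in PAY_RULE
                        and "pay(1000)" in PAY_RULE)

pay_or_no()
print(payments)             # [] -- in the odd-digit world no $1000 is paid
print(omega_awards_million) # True -- so Omega hands over the $1000000
```

The point of splitting the program in two is that Omega never needs the agent to actually learn the digit; inspecting the composition is enough.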

• !!!!

This seems to be the correct answer to jacobt's question. The key is looking at the length of proofs. The general rule should go like this: when you're trying to decide which of two impossible counterfactuals "a()=x implies b()=y" and "a()=x implies b()=z" is more true even though "a()=x" is false, go with the one that has the shorter proof. We already use that rule when implementing agents that compute counterfactuals about their own actions. Now we can just implement Omega using the same rule. If the millionth digit of pi is in fact odd, but the statement "millionth digit of pi is even ⇒ agent pays up" has a much shorter proof than "millionth digit of pi is even ⇒ agent doesn't pay up", Omega should think that the agent would pay up.
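A toy sketch of this shorter-proof rule, assuming the shortest-proof lengths have already been found somehow. The function and the numbers are invented for illustration; nothing here is a real theorem prover:

```python
def choose_counterfactual(candidates):
    """candidates: dict mapping each candidate conclusion to the length
    (in steps) of the shortest proof found for it.  Omega adopts the
    conclusion with the shorter proof."""
    return min(candidates, key=candidates.get)

# Hypothetical numbers: "even => pays" is easy to prove (just read off
# the agent's payment rule), while "even => doesn't pay" can only be
# proved the long way round, via showing the digit is in fact odd and
# the antecedent therefore false.
proof_lengths = {
    "digit even => agent pays": 40,
    "digit even => agent does not pay": 1_000_000,
}

print(choose_counterfactual(proof_lengths))
# -> "digit even => agent pays", so Omega pays out the $1000000
```

The hard part, of course, is the proof search that produces those lengths; the selection rule itself is this simple.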

The idea seems so obvious in retrospect, I don't understand how I missed it. Thanks!

• If the millionth digit of pi is in fact odd, but the statement "millionth digit of pi is even ⇒ agent pays up" has a much shorter proof than "millionth digit of pi is even ⇒ agent doesn't pay up", Omega should think that the agent would pay up.

This seems equivalent to:

"millionth digit of pi is even ⇒ agent pays up" has a much shorter proof than "millionth digit of pi is odd"

But does that make sense? What if it were possible to have really short proofs of whether the n-th digit of pi is even or odd, and it's impossible for the agent to arrange to have a shorter proof of "millionth digit of pi is even ⇒ agent pays up"? Why should the agent be penalized for that?

• Maybe the whole point of a logical coinflip is about being harder to prove than simple statements about the agent. If the coinflip were simple compared to the agent, like "1!=1", then a CDT agent would not have precommitted to cooperate, because the agent would have figured out in advance that 1=1. So it's not clear that a UDT agent should cooperate either.

• I agree, this seems like a reasonable way of defining dependencies between constant symbols. In case of logical uncertainty, I think you'd want to look into how relative lengths of proofs depend on adding more theorems as axioms (so that they don't cost any proof length to use). This way, different agents or an agent in different situations would have different ideas about which dependencies are natural.

This goes all the way back to trying to define dependencies by analogy with AIXI/K-complexity; I think we were talking about this on the list in spring 2011.

• Good point, thanks. You're right that even-world looks just as impossible from odd-world's POV as odd-world looks from even-world, so Omega also needs to compute impossible counterfactuals when deciding whether to give you the million. The challenge of solving the problem now looks very similar to the challenge of formulating the problem in the first place :-)

• I pointed out the same issue before, but it doesn't seem to affect my bargaining problem.

• Why not? It seems to me that to determine that the staple maximizer's offer is fair, you need to look at the staple maximizer's assessment of you in the impossible world where it gets control. That's very similar to looking at Omega's assessment of you in the impossible world where it's deciding whether to give you the million. Or maybe I'm wrong, all this recursion is making me confused...

• What I meant is, in my version of the problem, you don't have to solve the problem (say what Omega does exactly) in order to formulate the problem, since "the staple maximizer's assessment of you in the impossible world where it gets control" is part of the solution, not part of the problem specification.