My Fundamental Question About Omega

Omega has appeared to us inside puzzles, games, and questions. The basic concept behind Omega is that it is (a) a perfect predictor and (b) not malevolent. The practical implications of these points are that (a) it doesn’t make mistakes and (b) you can trust its motives in the sense that it really, honestly doesn’t care about you. This bugger is True Neutral and is good at it. And it doesn’t lie.

A quick peek at Omega’s presence on LessWrong reveals Newcomb’s problem and Counterfactual Mugging as the most prominent examples. For those who missed them, other articles include Bead Jars and The Lifespan Dilemma.

Counterfactual Mugging was the most annoying for me, however, because I thought the answer was completely obvious, and apparently the answer isn’t obvious. Instead of going around in circles with a complicated scenario, I decided to find a simpler version that reveals what I consider to be the fundamental confusion about Omega.

Suppose that Omega, as defined above, appears before you and says that it predicted you will give it $5. What do you do? If Omega is a perfect predictor, and it predicted you will give it $5… will you give it $5?

The answer to this question is probably obvious, but I am curious if we all end up with the same obvious answer.

The fundamental problem behind Omega is how to resolve a claim by a perfect predictor that includes a decision you and you alone are responsible for making. This invokes all sorts of assumptions about choice and free will, but for the purpose of phrasing the question these assumptions do not matter. I care about how you will act. What action will you take? How you label the source of that action is your prerogative. The question doesn’t care how you got there; it cares about the answer.

My answer is that you will give Omega $5. If you weren’t going to, Omega wouldn’t have made the prediction. If Omega made the prediction AND you don’t give $5, then the definition of Omega is flawed and we have to redefine Omega.
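For what it’s worth, the shape of the argument is just a two-premise modus ponens. The notation below is my own shorthand, not anything from the original posts; the predicate Predicts and the proposition “you give Omega $5” are labels I’m introducing for this sketch:

```latex
% Premise 1 (perfect predictor): anything Omega predicts comes true.
\forall \varphi \;\bigl(\operatorname{Predicts}(\Omega,\varphi) \rightarrow \varphi\bigr)

% Premise 2 (the scenario): Omega predicts that you give it \$5.
\operatorname{Predicts}(\Omega,\ \text{you give Omega \$5})

% Modus ponens: therefore, you give Omega \$5.
\therefore\ \text{you give Omega \$5}
```

The objection in the next paragraph is, in these terms, a denial of the second premise: it claims Omega could never assert that prediction in the first place.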

A possible objection to the scenario is that the prediction itself is impossible to make. If Omega is a perfect predictor, it follows that it would never make an impossible prediction, and (so the objection goes) the prediction “you will give Omega $5” is impossible. This objection is invalid, however, as long as you can think of at least one scenario where you have a good reason to give Omega $5. Omega would simply show up in that scenario and ask for $5.

If this scenario includes a long argument about why you should give it $5, so be it. If it means Omega gives you $10 in return, so be it. But it doesn’t matter for the sake of the question. It matters for the answer, but the question doesn’t need to include these details because the underlying problem is still the same. Omega made a prediction and now you need to act. All of the excuses and whining and arguing will eventually end with you handing Omega $5. Omega’s prediction will have included all of this bickering.

This question is essentially the same as saying, “If you have a good reason to give Omega $5, then you will give Omega $5.” It should be a completely uninteresting, obvious question. It has some implications for other scenarios involving Omega, but those are for another time. Those implications should have no bearing on the answer to this question.