Newcomb’s problem happened to me

Okay, maybe not me, but someone I know, and that’s what the title would be if he wrote it. Newcomb’s problem and Kavka’s toxin puzzle are more than just curiosities relevant to artificial intelligence theory. Like a lot of thought experiments, they approximately happen. They illustrate robust issues with causal decision theory that can deeply affect our everyday lives.

Yet somehow it isn’t mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments; they are a valuable record). Scenarios involving brain scanning, decision simulation, etc., can establish their validity and future relevance, but not that they are already commonplace. For the record, I want to provide an already-happened, real-life account that captures the Newcomb essence and explicitly describes how.

So let’s say my friend is named Joe. In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married. Kate is somewhat traditional, and won’t marry him unless he proposes, not only in the sense of explicitly asking her, but also expressing certainty that he will never try to leave her if they do marry.

Now, I don’t want to make up the ending here. I want to convey the actual account, in which Joe’s beliefs are roughly schematized as follows:

  1. If he proposes sincerely, she is effectively sure to believe it.

  2. If he proposes insincerely, she is 50% likely to believe it.

  3. If she believes his proposal, she is 80% likely to say yes.

  4. If she doesn’t believe his proposal, she will surely say no, but will not be significantly upset in comparison to the significance of marriage.

  5. If they marry, Joe is 90% likely to be happy, and 10% likely to be unhappy.

He roughly values the happy and unhappy outcomes oppositely:

  1. being happily married to Kate: 125 megautilons.

  2. being unhappily married to Kate: −125 megautilons.

So what should he do? What should this real person have actually done?[1] Well, as in Newcomb, these beliefs and utilities present an interesting and quantifiable problem…

  • ExpectedValue(marriage) = 90%·125 − 10%·125 = 100,

  • ExpectedValue(sincere proposal) = 80%·100 = 80,

  • ExpectedValue(insincere proposal) = 50%·80%·100 = 40.
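These computations are simple enough to check mechanically. A minimal sketch with the numbers as stated in Joe’s account (the variable names are mine, not his):

```python
# Joe's beliefs and utilities, as stated in the account above.
p_believe_sincere = 1.0    # belief 1: a sincere proposal is effectively sure to be believed
p_believe_insincere = 0.5  # belief 2
p_yes_if_believed = 0.8    # belief 3
p_happy = 0.9              # belief 5
u_happy, u_unhappy = 125, -125  # megautilons

ev_marriage = p_happy * u_happy + (1 - p_happy) * u_unhappy
ev_sincere = p_believe_sincere * p_yes_if_believed * ev_marriage
ev_insincere = p_believe_insincere * p_yes_if_believed * ev_marriage

print(ev_marriage, ev_sincere, ev_insincere)  # 100.0 80.0 40.0
```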

No surprise here: sincere proposal comes out on top. That’s the important thing, not the particular numbers. In fact, in real life Joe’s utility function assigned negative moral value to insincerity, broadening the gap. But no matter; this did not make him sincere. The problem is that Joe was a classical causal decision theorist, and he believed that if circumstances changed to render him unhappily married, he would necessarily try to leave her. Because of this possibility, he could not propose sincerely in the sense she desired. He could even appease himself by speculating about causes[2] by which Kate might detect his uncertainty and constrain his options, but that still wouldn’t make him sincere.

Seeing expected value computations with adjustable probabilities can really help you feel the problem’s robustness. It’s not about to disappear. Certainties can be replaced with 95%’s and it all still works the same. It’s a whole parametrized family of problems, not just one.
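One way to feel that parametrization is to soften the certainties and re-run the numbers. A small sketch, where the 95% and the other softened values are hypothetical illustrations, not figures from Joe’s account:

```python
# Expected value of a proposal as a function of how likely Kate is to
# believe it; the other parameters default to the numbers in Joe's account.
def ev_proposal(p_believe, p_yes_if_believed=0.8, p_happy=0.9, utilons=125):
    ev_marriage = p_happy * utilons + (1 - p_happy) * (-utilons)
    return p_believe * p_yes_if_believed * ev_marriage

# Replace the certainty in belief 1 with 95%, 90%, ...: the sincere proposal
# keeps winning as long as sincerity is more credible than insincerity.
for p_sincere_believed in (1.0, 0.95, 0.9, 0.8):
    assert ev_proposal(p_sincere_believed) > ev_proposal(0.5)
```

The comparison only flips if a sincere proposal becomes no more believable than an insincere one, which is why the problem is a family rather than a knife-edge case.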

Joe’s scenario feels strikingly similar to Newcomb’s problem, and in fact it is: if we change some probabilities to 0 and 1, it’s essentially isomorphic:

  1. If he proposes sincerely, she will say yes.

  2. If he proposes insincerely, she will say no and break up with him forever.

  3. If they marry, he is 90% likely to be happy, and 10% likely to be unhappy.

The analogues of the two boxes are marriage (opaque) and the option of leaving (transparent). Given marriage, the option of leaving has a small marginal utility of 10%·125 = 12.5 utilons. So “clearly” he should “just take both”? The problem is that he can’t just take both. The proposed payout matrix would be:

Joe \ Kate           | Say yes                    | Say no
Propose sincerely    | Marriage                   | Nothing significant
Propose insincerely  | Marriage + option to leave | Nothing significant

The “principle of (weak[3]) dominance” would say the second row is the better “option”, and that therefore “clearly” Joe should propose insincerely. But in Newcomb some of the outcomes are declared logically impossible: if he tries to take both boxes, there will be nothing in the marriage box. The analogue in real life is simply that the four outcomes need not be equally likely.
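To make the failure of the dominance argument concrete, here is a sketch using the simplified 0/1 probabilities from the isomorphic version above, comparing what the cell-by-cell dominance reasoning promises with what the correlation between Joe’s disposition and Kate’s answer actually delivers:

```python
# Simplified (0/1-probability) version of Joe's problem.
ev_marriage = 100.0     # expected value of marriage, computed earlier
option_to_leave = 12.5  # marginal utility of the "transparent box": 10% * 125

# What weak dominance promises the insincere row: marriage plus the option.
naive_insincere = ev_marriage + option_to_leave

# What actually happens: sincerity determines Kate's answer, so the
# "marriage + option to leave" cell is unreachable.
actual_sincere = ev_marriage   # she says yes -> marriage
actual_insincere = 0.0         # she says no -> nothing significant

assert actual_sincere > actual_insincere   # "one-boxing" wins
assert naive_insincere > actual_sincere    # why two-boxing looks tempting
```

The dominance argument compares columns as if Kate’s answer were independent of the row Joe picks; the assertions above only hold because it isn’t.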

So there you have it. Newcomb happens. Newcomb happened. You might be wondering: what did the real Joe do?

In real life, Joe actually recognized the similarity to Newcomb’s problem, realizing for the first time that he must become an updateless decision agent. Noting his 90% certainty, he self-modified by adopting a moral precommitment to never leave Kate should they marry, proposed to her sincerely, and the rest is history. No joke! That’s if Joe’s account is accurate, mind you.


Footnotes:

[1] This is not a social commentary, but an illustration that probabilistic Newcomblike scenarios can and do exist. The point also does not hinge on whether you believe Joe’s account, which I have provided as-is nonetheless.

[2] If you care about causal reasoning, the other half of what’s supposed to make Newcomb confusing, then Joe’s problem is more like Kavka’s (so this post accidentally shows how Kavka and Newcomb are similar). But the distinction is instrumentally irrelevant: the point is that he can benefit from decision mechanisms that are evidential and time-invariant, and you don’t need “unreasonable certainties” or “paradoxes of causality” for this to come up.

[3] Newcomb involves “strong” dominance, with the second row always strictly better, but that’s not essential to this post. In any case, I could exhibit strong dominance by removing “if they do get married” from Kate’s proposal requirement, but I decided against it, favoring instead the actual account of events.