Formal vs. Effective Pre-Commitment

Newcomb’s Problem is effectively a problem about pre-commitment. Everyone agrees that if you have the opportunity to pre-commit before Omega predicts you, then you ought to. The only question is what you ought to do if you either failed to do this or weren’t given the opportunity. LW-style decision theories like TDT or UDT say that you should act as though you are pre-committed, while Causal Decision Theory says that it’s too late.
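
As a toy sketch of what is at stake (the payoff amounts are the standard ones from the thought experiment; the function name and structure are purely illustrative):

```python
# Toy model of Newcomb's Problem with a perfect predictor.
# The predictor fills the opaque box based on what the agent will in fact choose,
# so the payoff is fully determined by the agent's own decision procedure.

def payoff(one_boxes: bool) -> int:
    opaque_box = 1_000_000 if one_boxes else 0  # filled iff one-boxing is predicted
    transparent_box = 1_000
    return opaque_box if one_boxes else opaque_box + transparent_box

print(payoff(one_boxes=True))   # 1_000_000 -- acting as though pre-committed
print(payoff(one_boxes=False))  # 1_000     -- the "it's too late" choice
```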

(I’ve written about Newcomb’s Problem and pre-commitment before, but I don’t feel that I quite did the topic justice. I’m also hugely in favour of short, definitive articles on specific points that can serve as a quick reference. So here we go:)

Formal pre-commitments include things like rewriting your code, signing a legally binding contract, or providing assets as security. If set up correctly, they ensure that a rational agent actually keeps their end of the bargain. Of course, an irrational agent may still break their end of the bargain.

Effective pre-commitment describes any situation where an agent must (in the logical sense) necessarily perform an action in the future, even if there is no formal pre-commitment. If libertarian free will existed, then no one would ever be effectively pre-committed; but if the universe is deterministic, then we are effectively pre-committed to any choice that we make. (Quantum mechanics effectively pre-commits us to particular probability distributions rather than individual choices, but for simplicity we will ignore this here and assume straightforward determinism.) This follows directly from the definition of determinism (there is more discussion of the philosophical consequences of determinism in a previous post).

One reason why this concept seems so weird is that an agent that is effectively pre-committed has no need to know that it is pre-committed until the exact moment it locks in its decision. From the agent’s perspective, it magically turns out to have been pre-committed to whatever action it chooses; in truth, the agent was always pre-committed to that action, just without knowing it.
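
To make this concrete, here is a minimal sketch, assuming the agent’s decision procedure is an ordinary deterministic function that a perfect predictor can simply evaluate ahead of time (all names here are my own illustration, not from any real system):

```python
# Sketch: under determinism, a perfect predictor can evaluate the agent's
# decision function before the agent does. The agent is "effectively
# pre-committed" to whatever that function returns, whether or not it knows it yet.

def agent_decision(observed_offer: int) -> str:
    # Any deterministic rule will do; the agent may not know in advance
    # what it will output for a given input.
    return "one-box" if observed_offer >= 1_000_000 else "two-box"

# The predictor runs the same function *before* the agent chooses...
prediction = agent_decision(observed_offer=1_000_000)

# ...and later the agent "chooses", necessarily matching the prediction.
actual_choice = agent_decision(observed_offer=1_000_000)

assert prediction == actual_choice  # effective pre-commitment, no formal contract needed
```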

Much of the confusion about pre-commitment comes from whether we should be looking at formal or effective pre-commitment. Perfect predictors only care about effective pre-commitment; for them, formalities are unnecessary and possibly misleading. However, human-level agents tend to care much more about formal pre-commitments. Some people, like detectives or poker players, may be really good at reading people, but they’re still nothing compared to a perfect predictor, and most people aren’t even that good. So in everyday life, when we want certainty, we tend to care much more about formal pre-commitments.

However, Newcomb’s Problem explicitly specifies a perfect predictor, so we shouldn’t be thinking in terms of human-level predictors. In fact, I’d say that some of the emphasis on formal pre-commitment comes from anthropomorphising perfect predictors. It’s really hard for us to accept that anyone or anything could actually be that good, and that there’s no way to get ahead of it.

In closing, differentiating the two kinds of pre-commitment really clarifies these kinds of discussions. We may not be able to go back into the past and pre-commit to a certain course of action, but we can take an action on the basis that it would have been good to have pre-committed to it, and be assured that we will discover that we were actually pre-committed to it all along.

(Happy to rename these concepts if anyone has better names.)
