[Question] Is Agent Simulates Predictor a “fair” problem?

It’s a simple question, but I think it might help if I add some context. The paper introducing Functional Decision Theory notes that it is impossible to design an algorithm that performs well on all decision problems, since some of them can be specified to be blatantly unfair, e.g. punishing every agent that isn’t an alphabetical decision theorist.

The question then arises: how do we define which problems are or are not fair? We start by noting that some people consider Newcomb-like problems to be unfair, since your outcome depends on a predictor’s prediction, which is rooted in an analysis of your algorithm. So what makes this case any different from only rewarding the alphabetical decision theorist?

The paper answers that the prediction depends only on the decision you end up making, and that any other internal details are ignored. Since the problem only cares about your decision and not how you come to it, it seems fair. I’m inclined to agree with this reasoning, but a similar line of reasoning doesn’t seem to hold for Agent Simulates Predictor. Here the algorithm you use is relevant: the predictor can only predict the agent if its algorithm is below a certain level of complexity; otherwise it may make a mistake.
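To make the contrast concrete, here is a minimal toy sketch of my own (not from the paper, and all names and the complexity bound are assumptions for illustration): a Newcomb-style predictor only looks at the decision an agent would output, whereas an Agent-Simulates-Predictor-style predictor can only faithfully simulate agents below some complexity bound and may mispredict anything more complex.

```python
# Toy illustration only: contrasts a predictor that depends solely on the agent's
# decision with one whose accuracy depends on the agent's algorithmic complexity.

def newcomb_predictor(agent):
    # Depends only on the decision the agent would make, not on how it is made.
    return agent()

def bounded_predictor(agent, complexity, bound=100):
    # Can only simulate agents whose algorithm is simple enough (hypothetical
    # "complexity" stands in for, e.g., code length); otherwise it falls back
    # to a guess that may be wrong.
    if complexity <= bound:
        return agent()        # accurate prediction
    return "two-box"          # fallback guess, possibly a mistake

def simple_agent():
    return "one-box"

def complicated_agent():
    # Same decision, but imagine the procedure producing it is too complex
    # for the predictor to simulate.
    return "one-box"

print(newcomb_predictor(simple_agent))                       # "one-box"
print(bounded_predictor(simple_agent, complexity=10))        # "one-box" (within bound)
print(bounded_predictor(complicated_agent, complexity=999))  # "two-box" (mispredicted)
```

In the first predictor the agent’s internals never matter; in the second, two agents that make the same decision can be treated differently purely because of how complex their algorithms are, which is what makes the fairness question less clear-cut here.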

Please note that this question isn’t about whether this problem is worth considering; life is often unfair and we have to deal with it as best we can. The question is about whether the problem is “fair”, where I roughly understand “fair” to mean that it belongs to a certain class of problems, which I can’t specify at this moment (I suspect it would require its own separate post), on which we should be able to achieve the optimal result in each case.