Counterfactuals as a matter of Social Convention

In my last post, I wrote that the counterfactuals in Transparent-Box Newcomb's problem were largely a matter of social convention. One point I overlooked for a long time was that formalising a problem like Newcomb's is trickier than it seems. Depending on how it is written, some statements may seem to apply to just our actual world, some may seem to also refer to counterfactual worlds, and some may seem ambiguous.

To clarify this, I’ll consider phrases that one might hear in relation to this problem, along with some variations, and draw out their implications. I won’t use modal logic, since it really wouldn’t add anything to this discussion except more jargon.

The idea that counterfactuals could have a social element should seem really puzzling at first. After all, counterfactuals determine what counts as a good decision, and surely what counts as a good decision isn’t just a matter of social convention? I think I know how to resolve this problem and I’ll address it in a post soon, but for now I’ll just provide a hint and link you to a comment by Abram Demski about how probabilities are somewhere between subjective and objective.

Example 1:

a) Omega is a perfect predictor

b) You find out from an infallible source that Omega will predict your choice correctly

The first suggests that Omega will predict you correctly no matter what you choose, so we might take it to apply to every counterfactual world, though it is technically possible that Omega is only a perfect predictor in this world. The second is much more ambiguous: you might take the prediction to be correct only in this world and not in the counterfactuals.

Example 2:

a) The first box always contains $1000

b) The first box contains $1000

The first again seems to be making a claim about counterfactual worlds, while the second is ambiguous: it isn’t clear whether it applies to all worlds or not.

Example 3:

“The game works as follows: the first box contains $1000, while the second contains $0 or $1 million depending on whether the predictor predicts you’ll two-box or one-box”

Talking about the rules of the game seems to be a hint that this will apply to all counterfactuals. After all, decision problems are normally about winning within a game, as opposed to the rules changing according to your decision.
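To make this fixed-rules reading concrete, here is a toy sketch (the function name and structure are my own illustrative choices, using the standard Newcomb amounts of $1,000 and $1 million): the rules never vary across counterfactuals, only the agent's action does, and the predictor is taken to be perfect in every world.

```python
# Toy model of the Example 3 game with the rules held fixed across
# all counterfactual worlds (illustrative code, not from the post).

def payoff(action: str) -> int:
    """Payoff when a perfect predictor fills the boxes based on `action`.

    Box one always contains $1,000; box two contains $1,000,000
    exactly when the predictor predicts one-boxing. Because the
    predictor is perfect in every counterfactual, its prediction
    simply tracks the action itself.
    """
    box_one = 1_000
    box_two = 1_000_000 if action == "one-box" else 0  # perfect prediction
    if action == "one-box":
        return box_two
    return box_one + box_two

print(payoff("one-box"))   # 1000000
print(payoff("two-box"))   # 1000
```

If instead only some statements held universally (say, Omega were perfect only in the actual world), the second line of `payoff` would no longer be licensed in counterfactual worlds, which is exactly the ambiguity the examples above are probing.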

Example 4:

a) The box in front of you contains $1 million

b) The box in front of you contains either $0 or $1 million. In this case, it contains $1 million

The first is ambiguous. The second seems to make a statement about all counterfactuals, then one about this world alone. If it were making a statement just about this world, the first sentence wouldn’t have been necessary.


This could be leveraged into a critique of the erasure approach. That approach tries to construct a non-trivial decision problem by erasing information, but this analysis suggests that either a) erasure may be unnecessary, because the problem already implicitly indicates which information is universal, or b) the issue isn’t that we need to figure out which assumption to erase, but that the problem is ambiguous about which parts should be taken universally.