But suppose I own a MacGuffin that you want (I value it at £9). If X={Reject any offer} and Y={You offer more than £10}, is this still blackmail? Formally, it looks the same.
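To make the claimed formal similarity concrete, here is a minimal Python sketch of the MacGuffin case as a "do X unless Y" commitment. The £9 valuation and £10 demand come from the example above; the `Commitment` type and `seller_payoff` function are just my own illustrative framing, not anything from the original discussion.

```python
# Toy sketch: both the blackmail case and the MacGuffin case have the shape
# "I commit to doing X unless you do Y"; only the payoffs differ.

from dataclasses import dataclass

@dataclass
class Commitment:
    description: str   # the committed action X
    demand: float      # the threshold Y (e.g. "offer more than £10")

def seller_payoff(offer: float, commitment: Commitment,
                  macguffin_value: float = 9.0) -> float:
    """Seller's payoff under the commitment 'reject any offer below the demand'."""
    if offer >= commitment.demand:
        return offer              # trade happens at the offered price
    return macguffin_value        # trade refused; seller keeps the £9-valued item

reject_low_offers = Commitment(description="Reject any offer", demand=10.0)

for offer in (9.5, 10.0, 12.0):
    print(offer, seller_payoff(offer, reject_low_offers))

# An offer of £9.50 would benefit both sides (the seller values the item at £9),
# but the commitment makes the seller turn it down -- formally the same
# "X unless Y" structure as the blackmail threat.
```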
One big difference I see is in what counts as an “action”; arguably X={Publish the letters} “feels” more like an action than X={Reject any offer}. It seems that when the distinction between a “blackmailish” offer and a “non-blackmailish” one is discrete (either you precommit to publish or you don’t), making the “blackmailish” offer is “bad”; but when there’s a continuum between the most and least blackmailish versions (as in the MacGuffin negotiation), the offer doesn’t seem like as much of a transgression.
I’m not totally happy with this, though, and to me the general problem of “negotiation in situations with no focal points” has no satisfying solution beyond racing to see who precommits first.
That trick only works against CDT agents, though. Against agents that implement a modern decision theory it leaves the negotiation problem approximately where it started. (This means that if you put ideal CDT agents and ideal TDT agents in a room with devices offering similarly ideal precommitments and give them exchanges to make, the CDT agents end up accepting the worst possible positive trades.)
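For what it’s worth, here is a toy sketch of that point. The £10 surplus, the ε, and the simple “fair split” policy standing in for the TDT agent are all assumptions of mine; it only illustrates why an ideal committer can squeeze a CDT responder down to the smallest positive share, not how either decision theory actually works.

```python
# Minimal sketch of the "whoever precommits first" race.  A CDT-style
# responder evaluates the already-made commitment causally and takes any
# positive share; a policy-based (TDT-style) responder commits to refusing
# unfair splits, which changes what the first mover can profitably demand.

SURPLUS = 10.0   # gains from trade to be divided (assumed for illustration)
EPSILON = 0.01   # smallest positive share the committer can leave on the table

def cdt_accepts(share: float) -> bool:
    # At decision time the commitment is a fixed fact, so any positive
    # share beats rejecting the trade.
    return share > 0

def tdt_accepts(share: float, fair_share: float = SURPLUS / 2) -> bool:
    # Decides by policy: refusing unfair splits pays off because the
    # committer, predicting this policy, never offers them.
    return share >= fair_share

def best_commitment(responder) -> float:
    # The first mover picks the demand that maximizes its own payoff,
    # given how the responder will react (0 payoff if the trade is refused).
    candidate_demands = [SURPLUS - EPSILON, SURPLUS / 2]
    return max(candidate_demands,
               key=lambda d: d if responder(SURPLUS - d) else 0.0)

print("vs CDT:", best_commitment(cdt_accepts))   # demands 9.99; CDT accepts 0.01
print("vs TDT:", best_commitment(tdt_accepts))   # can only profitably demand 5.0
```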