Thinking more about this—if you’re taking power by pre-emptively publishing a commitment, why in tarnation are you satisfied with the same acceptance probability whether you’re offered 0.5 or 0.99 of the pot? What’s your reasoning for not saying something like “I’ll accept with a probability equal to the square of the share you offer me,” or some other incentive to give you almost all of the split?
The particular curve you describe doesn’t work. Even if someone gave in to your threat entirely, their best response is to offer you 2/3 of the $10 (this maximizes their EV at ~$1.48), but your commitment only accepts that offer with probability (2/3)² = 4/9, so you’d wind up rejecting more than half the time and your own EV would be about $2.96—well under the fair $5.
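A quick sketch of that arithmetic (illustrative only; the function names are mine, not from the discussion), assuming the commitment “accept a share s with probability s²” and a $10 pot:

```python
# The responder commits to accepting a share s of the $10 with probability s**2.
# The proposer keeps 1 - s when the offer is accepted.

def proposer_ev(s, pot=10.0):
    """Proposer's expected value when offering share s against the s**2 commitment."""
    return pot * (1 - s) * s**2

def committer_ev(s, pot=10.0):
    """Commitment-maker's expected value from that same offer."""
    return pot * s * s**2

# Grid search for the proposer's best response to the commitment.
best_s = max((i / 10000 for i in range(10001)), key=proposer_ev)

print(round(best_s, 3))               # ≈ 0.667, i.e. the 2/3 share
print(round(proposer_ev(best_s), 2))  # ≈ 1.48, the proposer's maximized EV
print(round(committer_ev(best_s), 2)) # ≈ 2.96, well under the fair $5
```

The committer’s EV is pot·s³ = 10·(2/3)³ = 80/27, which is why extorting with this particular curve leaves you worse off than a fair split.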
You could definitely fix that particular flaw in your scheme. What you’d wind up with is something that gets analyzed much like the original game, except that you’ve stolen the first-player position and are offering something ‘unfair’. So, as usual for this game, your ‘unfair’ strategy would work perfectly against a pure-CDT agent (they’ll cave to any unfair setup, since the confrontational alternative is getting 0), and it would work against some real humans while other real humans will say screw you.

The ‘ideal agent’, however, does not reward threats—being the kind of agent who never rewards threats is a pretty good shield against anyone bothering to threaten you in the first place, while being the kind of agent who does reward threats is asking to be threatened. So if you use a strategy like the one you suggest against them, they will compute an offer (or a probability of making an offer) that maximizes their EV subject to your EV being strictly less than $5: you would have been better off just doing the fair thing to begin with.
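A hypothetical sketch of that ideal-agent response. Assume (my illustration, not from the discussion) the threatener patched the curve into a hard commitment—accept only offers giving them at least 0.9 of the $10—and the ideal agent replies with a mixed strategy: make the demanded offer with probability q, otherwise make no acceptable offer, choosing q so the threatener’s EV stays strictly below the fair $5:

```python
# Hypothetical hard commitment: the threatener accepts only shares >= DEMAND.
# The ideal agent makes the demanded offer with probability q, chosen so the
# threatener's EV stays strictly below the fair $5.

POT = 10.0
FAIR = 5.0
DEMAND = 0.9  # illustrative demand; any 'unfair' threshold works the same way

def threatener_ev(q):
    """Threatener's EV when the demanded offer is made with probability q."""
    return q * POT * DEMAND

def responder_ev(q):
    """Ideal agent's EV under the same mixed strategy."""
    return q * POT * (1 - DEMAND)

# Largest q on a fine grid that keeps the threatener strictly below $5.
q = max(i / 10000 for i in range(10001) if threatener_ev(i / 10000) < FAIR)

print(threatener_ev(q) < FAIR)  # True: the threat bought less than a fair split
print(responder_ev(q))          # small, but positive: the price of not rewarding threats
```

Here q caps out just below 5/9, so the threatener’s EV approaches but never reaches $5—exactly the “strictly less than $5” condition above—while the responder still collects something rather than nothing.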