The particular curve you describe doesn’t work. Even if someone gave in to your threat entirely, they’d offer you 2/3 of the $10 (the offer that maximizes their EV, at ~$1.5), but then you’d have to reject a third of the time, so you’d wind up with an EV of less than $5.
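(A quick numerical check, since the exact curve isn’t spelled out above: the sketch below assumes, purely as a stand-in, that you accept an offer of $x out of $10 with probability (x/10)². Under that hypothetical curve the proposer’s best offer does come out to 2/3 of the pot with an EV of roughly $1.5, and your EV lands well under $5.)

```python
# Sketch: proposer's best response to a responder who accepts an offer of
# x dollars (out of $10) with probability p(x). The curve here is a
# hypothetical stand-in, not the one from the parent comment.

def p_accept(x, total=10.0):
    """Assumed acceptance curve: accept $x with probability (x/total)^2."""
    return (x / total) ** 2

def evs(x, total=10.0):
    """Expected values (proposer, responder) when the proposer offers $x."""
    p = p_accept(x, total)
    return (total - x) * p, x * p

# Grid-search the proposer's optimal offer in one-cent steps.
offers = [i / 100 for i in range(0, 1001)]
best = max(offers, key=lambda x: evs(x)[0])
proposer_ev, responder_ev = evs(best)
print(f"optimal offer:  ${best:.2f}")         # ~$6.67, i.e. 2/3 of the pot
print(f"proposer's EV:  ${proposer_ev:.2f}")  # ~$1.48
print(f"responder's EV: ${responder_ev:.2f}") # ~$2.96, well under $5
```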
You could definitely fix that particular flaw in your system. And what you’d wind up with is something that gets analyzed much like the original game, except that you’ve stolen the first-player position and are offering something ‘unfair’. So, as usual for this game, your ‘unfair’ strategy would work perfectly against a pure-CDT agent (they’ll cave to any unfair setup, since the confrontational alternative is getting $0), and it would work against some real humans while other real humans will say screw you.

The ‘ideal agent’, however, does not reward threats: being the kind of agent who never rewards threats is a pretty good shield against anyone bothering to threaten you in the first place, while being the kind of agent who does reward threats is asking to be threatened. So if you use a strategy like the one you suggest against them, they will compute an offer (or a probability of an offer) such that their EV is maximized subject to your EV being strictly less than $5, which means you’d have been better off just doing the fair thing to begin with.
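A minimal sketch of that counter-policy, again using the hypothetical (x/10)² acceptance curve from above (the real curve isn’t shown, and `counter_offer` and the grid search are illustrative choices, not anything canonical):

```python
# Sketch of the ideal agent's counter-policy: facing a committed acceptance
# curve p(x), pick the offer that maximizes the ideal agent's own EV subject
# to the threatener's EV staying strictly below the $5 a fair split would
# have given them. Curve and grid search are the same hypothetical stand-ins.

TOTAL = 10.0
FAIR = 5.0
EPS = 1e-9  # enforces the *strict* inequality

def p_accept(x):
    """Assumed commitment curve of the threatener."""
    return (x / TOTAL) ** 2

def counter_offer():
    offers = [i / 100 for i in range(0, 1001)]
    # Keep only offers that hold the threatener's EV strictly under $5...
    feasible = [x for x in offers if x * p_accept(x) < FAIR - EPS]
    # ...then take the one that is best for the ideal agent.
    return max(feasible, key=lambda x: (TOTAL - x) * p_accept(x))

x = counter_offer()
print(f"offer to threatener: ${x:.2f}")
print(f"threatener's EV:     ${x * p_accept(x):.2f}  (under $5: the threat didn't pay)")
print(f"ideal agent's EV:    ${(TOTAL - x) * p_accept(x):.2f}")
```

If no pure offer can hold the threatener under $5 (say, against a flatter curve), the ideal agent can mix, sometimes offering $0 instead; that’s the ‘probability of an offer’ case.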