The sophisticated chooser is also immune to money pumps
This seems false!
Let’s say that at the start of the tree, there’s a node that is not accessible to the sophisticated chooser because they’re not able to constrain themselves: e.g., they know for a fact they won’t pay in Parfit’s hitchhiker, and so they die in the desert; even though they would really want to pay later, they have no way to commit to doing so.
Let’s say the payment is $100 and they value their life at $1m.
Now offer them a deal: they pay you $999,999 up front, and in return, when they get to the city and pay the $100, you give them $100.01. They’ll happily agree: paying $100 in the city now nets them $0.01, so their future self will pay, and overall they gain $1.01 of value by not dying in the desert (even though, if they were built some other way, they could have kept $999k+ more).
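For concreteness, the arithmetic of the deal can be checked in a few lines of Python (the dollar amounts are just the hypothetical ones from this example):

```python
# Hypothetical numbers from the example: life valued at $1m,
# up-front payment $999,999, city payment $100, rebate $100.01.
life_value = 1_000_000
upfront = 999_999
city_payment = 100
rebate = 100.01

# In the city, paying nets rebate - city_payment, so the
# sophisticated chooser correctly predicts that they will pay.
edge_for_paying = round(rebate - city_payment, 2)

# Total value of the deal versus dying in the desert:
net_gain = round(life_value - upfront - city_payment + rebate, 2)
print(edge_for_paying, net_gain)  # 0.01 1.01
```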
Now, though, after this agreement, having already taken so much of their money, you can add that you will give the agent, for free, $0.02 if they do not pay when in the city.
The agent is very sad: not paying now beats paying by $0.01, so they’ll die in the desert after all, and even paying you $999,999 didn’t save them from that fate.
Oh well, happily, you have a solution: if the agent pays you another $999,999, you will pay them an extra $0.02 if they do pay when in the city.
(Repeat forever: charge $999,999 to add $0.02 to the payoff for paying, so that the agent again prefers paying to not paying by $0.01; then announce that you’ll pay the agent $0.02 more if they do not pay; continue until the agent is out of all of their utility.)
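A minimal sketch of the resulting pump, assuming (hypothetically) that the agent can keep producing $999,999 each round, and ignoring the sub-dollar sweeteners as negligible:

```python
# Each round: you offer a free $0.02 for not paying, which dooms the
# agent; they then pay $999,999 to restore a $0.01 edge for paying in
# the city, since that is what saves their life. Nothing stops the loop.
PAYMENT = 999_999
extracted = 0
for round_number in range(5):  # five rounds shown; it can repeat forever
    extracted += PAYMENT       # agent pays to make paying optimal again
print(extracted)  # 4999995
```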
This is all downstream of the sophisticated chooser assigning extremely low probability to being offered that $0.02, despite the fact that it keeps happening.
You could “money pump” any agent if you were allowed to assume that they keep being wrong. An EUM standing on the sidelines here (with the same beliefs) would be incentivized to keep betting “this guy won’t be offered another $0.02”, and they’d also get “money-pumped” by constantly losing their bets.
What sophisticated choice avoids is the specific sequential-trade exploitation pattern where the agent pays to switch from A to B and then pays to switch back.
Yeah, you’re right, thanks. I was only half-awake when I wrote the comment.
The agent will pay $999k, but only a very limited number of times (e.g., once), so it’s not that good of a money pump.
Yes, this was already discussed in the comments, and I added a footnote on that, please see footnote 3.
Well, no. I demonstrate not just leaving money on the table, as discussed by other commentators, but this kind of money-pump that extracts infinite utility out of sophisticated choice.
You’re right, I mislooked. This is a stronger result.
I’ll update the footnote.