In the selfish case, you forgot the 0.5: the payoff is 455 for “always yea”, 350 for “always nay”.
And you seem to be comparing selfish with selfless, not with average utilitarian.
For an average utilitarian, under “always yea”, 100 is given out once in the heads world, and 1000 is given out 9 times in the tails world. These must be shared among 10 people, so the average is 0.5 × (100 + 9×1000)/10 = 455. For “always nay”, 700 is given out once in the heads world and 9 times in the tails world, giving 0.5 × (700 + 9×700)/10 = 350, the same as for the selfish agent.
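Here’s a minimal sketch of that arithmetic in Python (the policy labels and the setup of 1 recipient under heads vs. 9 under tails are my reading of the problem, not notation from the post):

```python
# Assumed setup: a fair coin picks between a "heads" world where the payout goes to
# 1 of the 10 people and a "tails" world where it goes to 9 of them; "yea" pays 100
# (heads) or 1000 (tails) per recipient, "nay" pays 700 per recipient either way.

P_HEADS = 0.5
POPULATION = 10

def expected_average_utility(policy):
    """Expected per-person (average-utilitarian) payoff for a fixed policy."""
    heads_recipients, tails_recipients = 1, 9
    if policy == "yea":
        heads_total = heads_recipients * 100    # 100 given out once
        tails_total = tails_recipients * 1000   # 1000 given out nine times
    else:  # "nay"
        heads_total = heads_recipients * 700
        tails_total = tails_recipients * 700
    return (P_HEADS * heads_total + (1 - P_HEADS) * tails_total) / POPULATION

print(expected_average_utility("yea"))  # 455.0
print(expected_average_utility("nay"))  # 350.0
```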
Ah, good point. I made a mistake in translating the problem into selfish terms. In fact, that might actually solve the non-anthropic problem...
EDIT: Nope.
Why nope? ADT (with precommitments) simplifies to a version of UDT in non-anthropic situations.
The reason it doesn’t solve the problem is that the people who want to donate to charity aren’t doing it so that the other people participating in the game will also get utility; that is, they’re altruists, but not average utilitarians towards the other players. So the formulation is a little more complicated.
They’re selfless, and have coordinated their decisions with precommitments; ADT will then recreate the UDT formulation, since there are no anthropic issues to worry about. ADT + selflessness tends towards SIA-like behaviour in the Sleeping Beauty problem, which isn’t the same as ADT saying that selfless agents should follow SIA.
Well, yes, it recreates the UDT solution (or at least it does if it works correctly—I didn’t actually check or anything). But the problem was never about just recreating the UDT solution—it’s about understanding why the non-UDT solution doesn’t work.
Because standard decision theory doesn’t know how to deal properly with identical agents and common policies?