The reason it doesn’t solve the problem is that the people who want to donate to charity aren’t doing it so that the other people also participating in the game will get utility—that is, they’re altruists, but not average utilitarians towards the other players. So the formulation is a little more complicated.
They’re selfless, and have coordinated their decisions with precommitments—ADT will then recreate the UDT formulation, since there are no anthropic issues to worry about. ADT + selflessness tends toward SIA-like behaviour in the Sleeping Beauty problem, which isn’t the same as saying that ADT says selfless agents should follow SIA.
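The distinction between these two kinds of selfless agent can be made concrete with a small calculation (a hedged sketch with my own toy setup, not taken from the thread): in Sleeping Beauty, a fair coin gives one awakening on heads and two on tails, and at each awakening the agent may buy, for price x, a ticket paying 1 if the coin landed tails. Under ADT, all copies decide identically, and the breakeven price depends on whether the agent sums utility over copies (total utilitarian) or averages it (average utilitarian):

```python
def ev_total(x):
    # Total utilitarian: sum ticket payoffs over all awakenings in each world.
    heads = -x              # one awakening, ticket pays nothing
    tails = 2 * (1 - x)     # two awakenings, each ticket pays 1
    return 0.5 * heads + 0.5 * tails

def ev_average(x):
    # Average utilitarian: average payoff per awakening in each world.
    heads = -x
    tails = (2 * (1 - x)) / 2
    return 0.5 * heads + 0.5 * tails

def breakeven(ev, lo=0.0, hi=1.0, tol=1e-9):
    # Bisection on the (linear, decreasing) expected value.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if ev(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

print(round(breakeven(ev_total), 3))    # 0.667: SIA-like "thirder" odds
print(round(breakeven(ev_average), 3))  # 0.5:   SSA-like "halfer" odds
```

The total utilitarian pays up to 2/3 (SIA-like behaviour), while the average utilitarian pays only up to 1/2—which is why "altruist but not average utilitarian" matters to the formulation.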
Well, yes, it recreates the UDT solution (or at least it does if it works correctly—I didn’t actually check or anything). But the problem was never about just recreating the UDT solution—it’s about understanding why the non-UDT solution doesn’t work.
Ah, good point. I made a mistake in translating the problem into selfish terms. In fact, that might actually solve the non-anthropic problem...
EDIT: Nope.
Why nope? ADT (with precommitments) simplifies to a version of UDT in non-anthropic situations.
Because standard decision theory doesn’t know how to deal properly with identical agents and common policies?
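That failure mode can be sketched in a few lines (an illustration with hypothetical payoffs, not from the thread): two identical deterministic agents play a symmetric game, so they necessarily choose the same action. Standard decision theory evaluates my action while holding the other agent's action fixed; policy-level reasoning recognizes that my choice and theirs are the same choice:

```python
# Hypothetical symmetric payoffs: both cooperate -> 3 each, both defect -> 1
# each, mixed -> defector gets 4, cooperator gets 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

def naive_best_response(other_action):
    # Standard DT: treat the other agent's action as fixed and optimize mine.
    return max("CD", key=lambda a: PAYOFF[(a, other_action)])

def policy_best():
    # Common-policy reasoning: my choice IS the other's choice, so evaluate
    # each shared policy on the diagonal of the payoff matrix.
    return max("CD", key=lambda a: PAYOFF[(a, a)])

# Standard DT defects whatever the other does...
print(naive_best_response("C"), naive_best_response("D"))  # D D
# ...but since identical agents share a policy, the better common action is:
print(policy_best())  # C
```

The naive calculation recommends an action that, once both identical agents take it, is worse for each of them than the common policy it rejected.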