They’re selfless and have coordinated their decisions via precommitments; ADT will then recreate the UDT formulation, since there are no anthropic issues left to worry about. ADT plus selflessness tends toward SIA-like behaviour in the Sleeping Beauty problem, which is not the same as saying that ADT tells selfless agents to follow SIA.
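For concreteness, here is a minimal sketch (my own illustration, not from the original exchange) of that “SIA-like behaviour” claim, assuming the standard betting version of Sleeping Beauty: a fair coin, one awakening on Heads, two on Tails, and a selfless agent that sums payoffs over all awakenings and must pick one policy for every awakening. The function name `expected_total_payoff` and the stake values are purely illustrative.

```python
# Sketch: why a selfless, total-utilitarian agent committing to one policy
# for all awakenings ends up betting at thirder (SIA-like) odds.
#
# Assumed setup: fair coin; Heads -> 1 awakening, Tails -> 2 awakenings.
# At each awakening the agent may accept a bet that pays +1 if the coin
# was Tails and -loss_if_heads if it was Heads.

def expected_total_payoff(loss_if_heads: float, gain_if_tails: float = 1.0) -> float:
    """Expected *total* payoff, summed over awakenings, for the policy
    'accept the bet at every awakening'."""
    p_heads = p_tails = 0.5
    awakenings_heads, awakenings_tails = 1, 2
    return (p_heads * awakenings_heads * (-loss_if_heads)
            + p_tails * awakenings_tails * gain_if_tails)

if __name__ == "__main__":
    for loss in (1.0, 1.5, 2.0, 2.5):
        print(f"lose {loss} on Heads, win 1 per awakening on Tails: "
              f"EV = {expected_total_payoff(loss):+.2f}")
    # Break-even sits at loss_if_heads = 2: the agent accepts 2:1 odds
    # against Heads, the betting behaviour of an agent with credence
    # P(Heads) = 1/3, i.e. the SIA ('thirder') answer.
```

The point of the calculation is that the SIA-like odds fall out of summing payoffs over copies under a common policy, not out of the agent adopting SIA as its credence.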
Well, yes, it recreates the UDT solution (or at least it does if it works correctly; I didn’t actually check). But the problem was never just about recreating the UDT solution; it’s about understanding why the non-UDT solution doesn’t work.
Because standard decision theory doesn’t know how to deal properly with identical agents and common policies?