The intuitive answer is <$0.99, but section 3.3.3 says the answer should be <$0.50
? I don’t see this at all.
By section 3.3.3, I assume you mean the isomorphism between selfish and average-utilitarian? From an average utilitarian perspective (which is the same as a total utilitarian for fixed populations), buying that ticket for x after hearing “heads” will lose one person x in the tails world, and gain 99 people 1-x in the heads world. So the expected utility is (1/2)(1/100)(-x+99(1-x)), which is positive for x < 99⁄100.
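For concreteness, here's a quick numerical sketch of that calculation (my own filling-in of the setup: 100 people exist in either world, 99 of them hear “heads” in the heads world, 1 hears “heads” in the tails world, and the ticket pays $1 iff the coin landed heads):

```python
from fractions import Fraction as F

def eu_average_over_everyone(x):
    """Expected utility of buying the ticket at price x, averaging over
    all 100 people in each world (exact arithmetic via Fraction)."""
    heads_world = F(99, 100) * (1 - x)  # 99 of 100 people each gain 1 - x
    tails_world = F(1, 100) * (-x)      # 1 of 100 people loses x
    return F(1, 2) * (heads_world + tails_world)

assert eu_average_over_everyone(F(98, 100)) > 0   # worth buying below 99/100
assert eu_average_over_everyone(F(99, 100)) == 0  # break-even at x = 99/100
assert eu_average_over_everyone(1) < 0            # not worth buying above 99/100
```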
ADT is supposed to reduce to a simplified version of UDT in non-anthropic situations; I didn’t emphasise this aspect, as I know you don’t want UDT published.
Section 3.3.3 says that a selfish agent should make the same decisions as an average-utilitarian who averages over just the set of people who may be “me”, right? That’s why it says that in the incubator experiment, a selfish agent who has been told she is in Room 1 should pay 1⁄2 for the ticket. An average-utilitarian who averages over everyone who exists in a world would pay 2⁄3 instead.
So in my example, consider an average-utilitarian whose attention is restricted to just people who have heard “heads”. Then buying a ticket loses an average of x in the tails world, and gains an average of 1-x in the heads world, so such a restricted-average-utilitarian would only pay x < 1/2.
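The same sketch for this restricted average, now dividing only by the number of hearers in each world (again just my own illustration of the numbers above):

```python
from fractions import Fraction as F

def eu_restricted_average(x):
    """Expected utility at price x, averaging only over the people who
    heard "heads" in each world."""
    heads_world = 1 - x  # average over the 99 hearers: each gains 1 - x
    tails_world = -x     # average over the lone hearer: loses x
    return F(1, 2) * (heads_world + tails_world)

assert eu_restricted_average(F(1, 3)) > 0   # worth buying below 1/2
assert eu_restricted_average(F(1, 2)) == 0  # break-even at x = 1/2
assert eu_restricted_average(F(2, 3)) < 0   # not worth buying above 1/2
```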
(If this is still not making sense, please contact me on Google Chat where we can probably hash it out much more quickly.)
We’ll talk on Google Chat. But my preliminary thought is that if you are indeed restricting to those who have heard “heads”, then you need to make use of the fact that this is objectively much more likely to happen in the heads world than in the tails world.
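To sketch just one way that might be cashed out (my own tentative reading, not a worked-out proposal): if each world's restricted average gets weighted by how likely a given person in that world is to hear “heads” (99/100 versus 1/100 under the setup above), the break-even price goes back up to 99/100:

```python
from fractions import Fraction as F

def eu_restricted_with_objective_weights(x):
    """Restricted average as above, but each world's average is weighted by
    the chance that a given person in that world hears "heads"
    (99/100 in the heads world, 1/100 in the tails world)."""
    heads_world = F(99, 100) * (1 - x)
    tails_world = F(1, 100) * (-x)
    return F(1, 2) * (heads_world + tails_world)

assert eu_restricted_with_objective_weights(F(99, 100)) == 0  # back to 99/100
```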