Omega: “Would you accept a bet where I pay you $1000 if a fair coin flip comes up tails and you pay me $100 if it comes up heads?” TDT: “Sure I would.” Omega: “I flipped the coin. It came up heads.” TDT: “D’oh! Here’s your $100.”
As a sidenote, a CDT agent would behave as follows:
Omega: “Would you accept a bet where I pay you $1000 if a fair coin flip comes up tails and you pay me $100 if it comes up heads?” CDT: “Sure I would.” Omega: “I flipped the coin. It came up heads. Now give me the $100.” CDT: “No way!”
And knowing this, Omega would never bet with the CDT agent, unless they had a way to precommit to paying even after learning they have already lost, which effectively brings them close to being TDT agents… :-)
That’s begging the question about Omega’s motivations a bit too much: if the world is a series of unrelated bets, then an agent that doesn’t pay does strictly better than an agent who pays when losing, so a good decision theory would want the agent to do that. But when trustworthiness (which is essentially the degree of timelessness) is an issue, for example in cooperation scenarios, or when Omega values it (Newcomb-like problems), or when it’s a precondition for receiving utility (Parfit’s hitchhiker), then TDT outperforms CDT, as it should.
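To make the payoff comparison concrete, here is a minimal sketch (my own illustration, not anything from the original discussion) of the repeated-bets case: an agent that honors its losses keeps getting offered Omega’s positive-expected-value bet, while an agent that reneges is never offered another one. The function names and the rule “renege once and Omega stops betting” are assumptions of this toy model.

```python
import random

def expected_value_per_bet(p_win=0.5, win=1000, lose=100):
    # Expected value of Omega's bet for an agent who honors losses:
    # 0.5 * 1000 - 0.5 * 100 = 450 per accepted bet.
    return p_win * win - (1 - p_win) * lose

def total_utility(pays_when_losing, n_offers=1000, seed=0):
    # Toy model: Omega offers the bet repeatedly, but stops forever
    # the first time the agent refuses to pay a loss.
    rng = random.Random(seed)
    total = 0
    for _ in range(n_offers):
        if rng.random() < 0.5:
            total += 1000          # tails: Omega pays out
        elif pays_when_losing:
            total -= 100           # heads: honor the loss
        else:
            break                  # heads: renege; no further bets
    return total
```

In this model the paying (TDT-like) agent accumulates roughly $450 per offer, while the reneging (CDT-like) agent keeps at most its winnings up to the first loss, illustrating why trustworthiness dominates once bets are no longer unrelated one-shots.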
The CDT agent would refuse to pay only if they knew Omega wouldn’t retaliate.