Counterfactual self-defense

Let’s imagine the following dialogues between Omega and an agent implementing TDT. The usual standard assumptions about Omega apply: the agent knows Omega is real, trustworthy, and reliable; Omega knows that the agent knows that; the agent knows that Omega knows that the agent knows; and so on (that is, Omega’s trustworthiness is common knowledge, à la Aumann).

Dialogue 1.

Omega: “Would you accept a bet where I pay you $1000 if a fair coin flip comes out tails and you pay me $100 if it comes out heads?”
TDT: “Sure I would.”
Omega: “I flipped the coin. It came out heads.”
TDT: “D’oh! Here’s your $100.”

I hope there’s no controversy here.
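
For concreteness, the pre-commitment bet has positive expected value, which a few lines of Python confirm (the payoffs come from the dialogue; the 0.5 probability is just the fairness of the coin):

```python
# Expected value of accepting the bet *before* the coin is flipped.
p_tails = 0.5        # the coin is fair
pay_if_tails = 1000  # Omega pays us $1000 on tails
pay_if_heads = -100  # we pay Omega $100 on heads

ev_accept = p_tails * pay_if_tails + (1 - p_tails) * pay_if_heads
print(ev_accept)  # 450.0 > 0, so accepting is the winning move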

Dialogue 2.

Omega: “I flipped a fair coin and it came out heads.”
TDT: “Yes...?”
Omega: “Would you accept a bet where I pay you $1000 if the coin flip came out tails and you pay me $100 if it came out heads?”
TDT: “No way!”

I also hope no controversy arises here: if the agent answered yes, there would be no reason for it not to accept all kinds of losing bets conditioned on information it already knows.

The two bets are identical, but the information is presented in a different order: in the second dialogue, the agent has time to update its knowledge about the world before the offer arrives, and it should not accept bets that it already knows are losing.
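
To make the role of ordering explicit, here is a minimal sketch (my own illustration, not anyone’s canonical formalization): the same bet is scored against whatever the agent currently believes about the coin, so conditioning first flips the decision:

```python
def ev_of_bet(p_tails, pay_if_tails=1000, pay_if_heads=-100):
    """Expected value of the bet given the agent's current credence in tails."""
    return p_tails * pay_if_tails + (1 - p_tails) * pay_if_heads

# Dialogue 1: the bet is offered before the flip, so the prior 0.5 applies.
print(ev_of_bet(p_tails=0.5))  # 450.0 -> accept

# Dialogue 2: the agent has already conditioned on "it came out heads".
print(ev_of_bet(p_tails=0.0))  # -100.0 -> reject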

But then...

Dialogue 3.

Omega: “I flipped a fair coin and it came out heads. I offer you a bet where I pay you $1000 if the coin flip came out tails, but only if you agree to pay me $100 if it came out heads.”
TDT: “...?”

In the original counterfactual mugging discussion, the answer of the TDT-implementing agent should apparently have been yes, but I’m not entirely clear on what the difference is between the second and the third case.

Thinking about it, it seems that the case is muddled because the outcome and the bet are presented at the same time. On one hand, it appears correct that an agent should act exactly as it would have acted had it pre-committed; on the other hand, an agent should not ignore any information it is presented with (a basic requirement of treating probability as extended logic).

So here’s a principle I would like to call ‘counterfactual self-defense’: whenever information and bets are presented to the agent at the same time, it first conditions its priors on the information and only then examines whatever bets have been offered. This should protect the agent from counterfactual losing bets while still letting it accept counterfactual winning ones.
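
A minimal sketch of the principle, assuming a toy protocol in which Omega delivers an observation and a bet in a single message; the update-then-evaluate ordering is the whole point, and everything else is illustrative:

```python
def counterfactual_self_defense(p_tails, observation, bet):
    """Condition the prior on the bundled information first, then judge the bet."""
    # Step 1: update on whatever information arrived alongside the offer.
    if observation == "heads":
        p_tails = 0.0
    elif observation == "tails":
        p_tails = 1.0

    # Step 2: evaluate the bet against the posterior, never the prior.
    ev = p_tails * bet["pay_if_tails"] + (1 - p_tails) * bet["pay_if_heads"]
    return "accept" if ev > 0 else "reject"

bet = {"pay_if_tails": 1000, "pay_if_heads": -100}

# Dialogue 3: outcome and bet arrive together, so the losing bet is refused...
print(counterfactual_self_defense(0.5, "heads", bet))  # reject

# ...while the mirror-image winning offer would still be taken.
print(counterfactual_self_defense(0.5, "tails", bet))  # accept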

Would this principle make an agent win more?