It’s an assumption of the thought experiment that the player justifiably learns about the situation after the coin is tossed: that they are dealing with Omega rather than “anti-Omega”, and that they somehow come to know this.
In that case, it doesn’t seem like there’s any point in time at which a decision to cooperate would have positive expected utility.
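To make the expected-utility claim concrete, here is a minimal sketch of the two calculations at issue, assuming the usual counterfactual-mugging payoffs ($100 demanded on tails, $10,000 counterfactually paid on heads); those figures are not stated above and are purely illustrative:

```python
# Illustrative sketch only: the payoff numbers are assumed, not taken
# from the discussion above.

P_HEADS = 0.5              # fair coin assumed
PAYOUT_IF_HEADS = 10_000   # what Omega would pay a player who would cooperate
COST_IF_TAILS = -100       # what cooperating costs once tails is known

# Ex post: the player only learns of the setup after tails has landed,
# so at that moment cooperating has strictly negative expected utility.
eu_ex_post = COST_IF_TAILS
print(f"EU of cooperating after learning tails: {eu_ex_post}")

# Ex ante: evaluated over the coin toss, the policy of cooperating
# has positive expected utility.
eu_ex_ante = P_HEADS * PAYOUT_IF_HEADS + (1 - P_HEADS) * COST_IF_TAILS
print(f"EU of the cooperating policy over the toss: {eu_ex_ante}")
```

The disagreement below is over which of these two numbers is the right one to evaluate the decision against.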
Correctness of decisions doesn’t depend on current time or current knowledge.