This Omega can easily prove that the world in which it asks you to pay is logically inconsistent, and then it concludes that in that world you do agree to pay (because a falsity implies every statement, and this one happened to come first lexicographically or something).
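The "falsity implies every statement" step here is just the principle of explosion. A minimal sketch in Lean (illustrative, not part of the original exchange):

```lean
-- Ex falso quodlibet: from a proof of False, any proposition Q follows.
-- This is the rule that lets the inconsistent world "prove" you agree to pay.
theorem ex_falso (Q : Prop) (h : False) : Q :=
  h.elim
```

Once the world's description is inconsistent, this rule makes *any* claim about it derivable, which is exactly the move being objected to below.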
This seems to be confusing “counterfactual::if” with “logical::if”. Noting that a world is impossible because the agents will not make the decisions that lead to that world does not mean that you can just make stuff up about that world since “anything is true about a world that doesn’t exist”.
Your objection would be valid if we had a formalized concept of “counterfactual if” distinct from “logical if”, but we don’t. When looking at the behavior of deterministic programs, I have no idea how to make counterfactual statements that aren’t logical statements.
When a program takes explicit input, you can look at what the program does if you pass this or that input, even if some inputs will in fact never be passed.
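As a sketch of that point (toy program and names are my own, not from the discussion): for a deterministic program with explicit input, "what would it do on input x?" is just function application, and that question is well-posed even for inputs that will never actually be passed.

```python
def decide(offer: str) -> str:
    """A toy deterministic decision program (hypothetical)."""
    if offer == "pay":
        return "refuse"
    return "accept"

# Suppose the real run only ever passes "reward". We can still ask what
# the program does on the never-passed input "pay" simply by applying it:
counterfactual_answer = decide("pay")
# No inconsistency arises: we are evaluating the function, not asserting
# that the input actually occurs.
```

This is the unproblematic kind of counterfactual; the hard case is when the "input" is the agent's own output, which is what the rest of the thread is about.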
If event S is empty, then for any Q you make up, it’s true that [for all s in S, Q]. This statement also holds if S was defined to be empty if [Not Q], or if Q follows from S being non-empty.
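A quick sketch of that vacuous-truth point (names `S`, `Q1`, `Q2` are my own illustration):

```python
S = []  # an empty event: no worlds satisfy it

Q1 = lambda s: s > 0
Q2 = lambda s: s <= 0  # the negation of Q1

# Python's all() over an empty iterable is True, mirroring the logic:
# "for all s in S, Q(s)" holds vacuously for ANY predicate Q.
both_hold = all(Q1(s) for s in S) and all(Q2(s) for s in S)
# both_hold is True: Q and not-Q are each "true of" the empty event,
# which is why nothing substantive follows from such statements.
```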
Yes, you can make logical deductions of that form, but my point was that you can’t feed those conclusions back into the decision-making process without invalidating the assumptions that went into those conclusions.