Counterfactual mugging with a logical coin is a tricky problem. It might be easier to describe the problem with a “physical” coin first. We have two world programs, mutually quined:
World 1: The agent decides whether to pay the predictor 10 dollars. The predictor doesn’t decide anything.

World 2: The agent doesn’t decide anything. The predictor decides whether to pay the agent 100 dollars, depending on the agent’s decision in world 1.
By fiat, the agent cares about the two worlds equally, i.e. it maximizes the total sum of money it receives across both. The usual UDT-ish solution can be crisply formulated in modal logic, PA, or various other formalisms.
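The arithmetic behind that solution can be sketched in a few lines. This is a minimal illustration, not any particular formalism; it assumes the predictor is perfectly accurate and that the agent's choice is a single boolean policy applied in world 1:

```python
def total_payoff(agent_pays: bool) -> int:
    """Summed payoff over both worlds for a given policy."""
    # World 1: the agent decides whether to pay the predictor 10 dollars.
    world1 = -10 if agent_pays else 0
    # World 2: the predictor pays 100 dollars iff the agent pays in world 1.
    world2 = 100 if agent_pays else 0
    # By fiat, the agent weighs the two worlds equally: maximize the sum.
    return world1 + world2

best_policy = max([True, False], key=total_payoff)
# Paying yields -10 + 100 = 90; refusing yields 0, so paying wins.
```

The "care about the two worlds equally" stipulation is doing all the work here: it is what licenses summing `world1` and `world2` with equal weight.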
Does that make sense?
This makes sense. My main point is that the "care about the two worlds equally" part makes sense if it is part of the problem description, but otherwise we don’t know where it comes from.
My logical example was supposed to illustrate that sometimes you should not care about them equally.