I like this point from Terry Tao:

I think an epsilon of paranoia is useful to regularise these sorts of analyses. Namely, one supposes that there is an adversary out there who is actively trying to lower your expected utility through disinformation (in order to goad you into making poor decisions), but is only able to affect all your available information by an epsilon. One should then adjust one’s computations of expected utility accordingly. In particular, the contribution of any event that you expect to occur with probability less than epsilon should probably be discarded completely.
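As a minimal sketch of what that truncation might look like (the function name, the epsilon value, and the toy gamble are all illustrative assumptions, not anything Tao specifies): the contribution of any event whose probability falls below epsilon is simply dropped before the expectation is taken.

```python
# Hypothetical sketch of the truncation rule described above: discard
# the contribution of any event with probability below epsilon.

def truncated_expected_utility(outcomes, eps):
    """outcomes: (probability, utility) pairs; eps: the distrust level."""
    return sum(p * u for p, u in outcomes if p >= eps)

# A gamble with a tiny chance of a huge payoff dominates the naive
# expectation (~ +999) but vanishes entirely under truncation.
gamble = [(1e-9, 1e12), (1.0 - 1e-9, -1.0)]
print(truncated_expected_utility(gamble, eps=1e-6))  # ~ -1.0
```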
The trouble is that you can split an event of probability P into P / epsilon pieces of probability epsilon each, or average P / epsilon probability-epsilon pieces back into one big event, and the truncation rule treats the two descriptions completely differently (as the sketch below illustrates). To avoid that inconsistency, you have to actually say which information you think should be treated as an average of nearby information.
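Here is a hypothetical illustration of that inconsistency, under the same assumed truncation rule: the same 1%-chance payoff is kept when described as one event, and discarded entirely when carved into twenty thousand sub-epsilon pieces.

```python
# Hypothetical illustration of the splitting problem, using the same
# assumed truncation rule as in the sketch above.

eps = 1e-6

def truncated_expected_utility(outcomes):
    # Drop any contribution whose probability falls below eps.
    return sum(p * u for p, u in outcomes if p >= eps)

# One event of probability 0.01 and utility 100 ...
big = [(0.01, 100.0)]
# ... versus the same probability mass carved into 20,000 disjoint
# pieces of probability eps/2 each (20,000 * eps/2 = 0.01).
split = [(eps / 2, 100.0)] * 20_000

print(truncated_expected_utility(big))    # 1.0 -- kept, since 0.01 >= eps
print(truncated_expected_utility(split))  # 0.0 -- every piece is discarded
```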