Btw, thinking about this sort of example also serves as a bit of an intuition pump (to me) against the philosophy where you have imprecise credences, use maximality to restrict your option set, and then pick on the basis of some other criterion. For example, let’s say that your other criterion would prefer “give no advice” > “say ‘yeah it’s good’” > “give conditional advice”. It feels really weird to exclude “give no advice” because it’s dominated, and then instead move to “say ‘yeah it’s good’”, which is still incomparable to “give no advice” and less preferred according to your other criterion. It doesn’t feel like the kind of thing a rational agent would do. (I guess it violates independence of irrelevant alternatives, for one thing.)
I guess this isn’t unique to the dynamic rationality thing. You can construct much simpler examples where A dominates B, but they’re both incomparable to C, and you have some less-important decision-rule that prefers B > C > A. Probably you’ll already have thought a lot about these cases, so I don’t expect it to be convincing to you. Just reporting an intuition that pushes me away from imprecise credences + maximality.
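The abstract structure here can be made concrete with a small sketch (option names, the dominance relation, and the secondary ranking below are illustrative, not anything from the original discussion):

```python
# Sketch of the A/B/C case above: A dominates B; C is incomparable
# to both A and B; a secondary criterion ranks B > C > A.
dominates = {("A", "B")}  # (x, y) means x strictly dominates y

def maximal(options):
    """Maximality rule: keep the options not dominated by any other option."""
    return [x for x in options if not any((y, x) in dominates for y in options)]

# Secondary criterion: lower number = more preferred (B > C > A).
secondary_rank = {"B": 0, "C": 1, "A": 2}

def choose(options):
    """Restrict to the maximal set, then pick by the secondary criterion."""
    return min(maximal(options), key=secondary_rank.__getitem__)

print(choose(["B", "C"]))       # -> B: nothing dominates anything, secondary picks B
print(choose(["A", "B", "C"]))  # -> C: A knocks out B, and the secondary
                                #    criterion prefers C to A
```

Adding A to the menu flips the choice from B to C even though A itself is never chosen, which is exactly the independence-of-irrelevant-alternatives violation described above.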
Sorry this wasn’t clear: In the context of this post, when we endorsed “use maximality to restrict your option set, and then pick on the basis of some other criterion”, I think we were implicitly restricting to the special case where {permissible options w.r.t. the other criterion} ⊆ {permissible options w.r.t. consequentialism}. If that doesn’t hold, it’s not obvious to me what to do.
Regardless, it’s not clear to me what alternative you’d propose in this situation that’s less weird than choosing “saying ‘yeah it’s good’”. (In particular I’m not sure if you’re generally objecting to incomplete preferences per se, or to some way of choosing an option given incomplete preferences (w.r.t. consequentialism).)
“In particular I’m not sure if you’re generally objecting to incomplete preferences per se, or to some way of choosing an option given incomplete preferences (w.r.t. consequentialism)”
I was thinking at least a bit of both. I find the case for imprecise credences to be more compelling if they come with a decision-rule that seems reasonable to me.
Ah, that’s a helpful clarification.