Realizing the implication here has definitely made me more skeptical of the moral parliament idea, but if it’s an argument against the moral parliament, then it’s also a potential argument against other ideas for handling moral uncertainty. The problem is that trading is closely tied to Pareto optimality: if you don’t allow trading between your moral theories, then you will likely end up in situations where each of your moral theories judges option A to be at least as good as option B, yet you choose option B anyway. But if you do allow trading, then you end up with the kind of conclusion described in my post.
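To make the Pareto problem concrete, here is a minimal sketch in Python. The setup is entirely my own illustrative assumption (the numbers, the three-theory/three-issue framing, and the issue-by-issue majority-vote rule are not from the original post): three moral theories with equal credence vote separately on three binary issues, the way a parliament with no vote trading might. Each theory is on the winning side of two issues, yet every theory strictly prefers the opposite package, which only a trade across issues could deliver.

```python
# Toy illustration (assumed setup, not from the original post): three moral
# theories with equal credence vote issue-by-issue on three binary issues.
# Each theory's utilities are additive across issues; each tuple gives the
# utility of picking option M vs. option Z on that issue.

theories = {
    "theory_1": [(1, 0), (0, 3), (1, 0)],  # mildly likes M on issues 1 & 3, strongly likes Z on 2
    "theory_2": [(1, 0), (1, 0), (0, 3)],  # mildly likes M on issues 1 & 2, strongly likes Z on 3
    "theory_3": [(0, 3), (1, 0), (1, 0)],  # mildly likes M on issues 2 & 3, strongly likes Z on 1
}

num_issues = 3

# No trading: on each issue separately, pick whichever option (M or Z)
# a majority of theories prefers.
no_trade_bundle = []
for issue in range(num_issues):
    votes_for_m = sum(1 for prefs in theories.values() if prefs[issue][0] > prefs[issue][1])
    no_trade_bundle.append("M" if votes_for_m * 2 > len(theories) else "Z")

def bundle_utility(prefs, bundle):
    """Total utility a theory assigns to a bundle of issue-level outcomes."""
    return sum(m if choice == "M" else z for (m, z), choice in zip(prefs, bundle))

trade_bundle = ["Z", "Z", "Z"]  # the package deal a vote trade could reach

for name, prefs in theories.items():
    print(name,
          "no-trade bundle:", bundle_utility(prefs, no_trade_bundle),
          "traded bundle:", bundle_utility(prefs, trade_bundle))
# Every theory gets utility 2 from the no-trade (majority) bundle ["M", "M", "M"]
# but utility 3 from the traded bundle, so the no-trade outcome is Pareto-dominated.
```

With trading allowed, each theory could give up its two mild preferences in exchange for getting its one strong preference, moving the parliament to the Pareto-superior bundle; without trading, the issue-by-issue votes lock in the dominated one.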
Another way out of this may be to say that there is no such thing as how one “should” handle moral uncertainty, that the question simply doesn’t have an answer, that it would be like asking “how should I make decisions if I can’t understand basic decision theory?” It’s actually hard to think of a way to define “should” such that the question does have an answer. For example, suppose we define “should” as what an idealized version of you would tell you to do. Then presumably that idealized self would have already resolved its moral uncertainty, and would simply tell you what the correct morality is (or what your actual values are, whichever makes more sense) and tell you to follow that.
> it would be like asking “how should I make decisions if I can’t understand basic decision theory?”
But that seems to have an answer, specifically along the lines of “follow the heuristics recommended by people who are on your side and do understand decision theory.”